Planet Debian - https://planet.debian.org/
Updated: 1 day 14 hours ago

Jonathan Dowland: PaperWM

Wed, 06/01/2021 - 5:44 PM

My PaperWM desktop, as I write this post.

Just before Christmas I decided to try out a GNOME extension I'd read about, PaperWM. It looked promising, but I was a little nervous about breaking my existing workflow, which was heavily reliant on the Put Windows extension.

It's great! I have had to carefully un-train some of my muscle memory, but it seems to be worth it. PaperWM strikes a nice balance between the rigidity of a tile-based window manager and a more traditional floating-windows one.

I'm always wary of coming to rely upon large extensions or plugins. The parent software's developers are often quite hands-off about supporting users of them, or about breaking them with API changes. Certainly those Firefox users who were heavily dependent on plugins prior to the Quantum fire-break are still very, very angry. (I actually returned to Firefox at that point, so I avoided the pain and enjoy the advantages of the re-architecture.) PaperWM is hopefully large enough and popular enough to avoid that fate.

Russell Coker: Planet Linux Australia

Wed, 06/01/2021 - 12:45 AM

Linux Australia have decided to cease running the Planet installation on planet.linux.org.au. I believe that blogging is still useful and a web page with a feed of Australian Linux blogs is a useful service. So I have started running a new Planet Linux Australia on https://planet.luv.asn.au/. There has been discussion about getting some sort of redirection from the old Linux Australia page, but they don’t seem able to do that.

If you have a blog that has a reasonable portion of Linux and FOSS content and is based in or connected to Australia then email me on russell at coker.com.au to get it added.

When I started running this I took the old list of feeds from planet.linux.org.au, deleted all blogs that didn’t have posts for 5 years and all blogs that were broken and had no recent posts. I emailed people who had recently broken blogs so they could fix them. It seems that many people who run personal blogs aren’t bothered by a bit of downtime.

As an aside I would be happy to setup the monitoring system I use to monitor any personal web site of a Linux person and notify them by Jabber or email of an outage. I could set it to not alert for a specified period (10 mins, 1 hour, whatever you like) so it doesn’t alert needlessly on routine sysadmin work and I could have it check SSL certificate validity as well as the basic page header.


Ben Hutchings: Debian LTS work, December 2020

Tue, 05/01/2021 - 11:32 PM

I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 9 hours from earlier months. I worked 16.5 hours this month, so I will carry over 8.5 hours to January. (Updated: corrected number of hours worked.)

I updated linux-4.19 to include the changes in the Debian 10.7 point release, uploaded the package, and issued DLA-2483-1 for this.

I cherry-picked some regression fixes from the Linux 4.9 stable branch into the linux package, and uploaded the package. This unfortunately failed to build on arm64 due to some upstream changes uncovering an old bug, so I made a second upload fixing that. I issued DLA-2494-1 for this.

I updated the linux packaging branch for stretch to Linux 4.9.249, but haven't made another package upload yet.

Bernd Zeimetz: Building reverse build dependencies in salsa CI

Tue, 05/01/2021 - 4:33 PM

For the next library soname bump of gpsd I needed to rebuild all reverse dependencies. As this is a task I have to do very often, I came up with some code to generate (and keep up to date) an include for the GitLab CI. Right now it is rather uncommented and undocumented, but it works well. If you like it, MRs are very welcome.

https://salsa.debian.org/bzed/reverse-dependency-ci/

The generated files are here:

https://bzed.pages.debian.net/reverse-dependency-ci/

Usage:

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
  - https://bzed.pages.debian.net/reverse-dependency-ci/gpsd.yml

variables:
  SALSA_CI_ENABLE_REVERSE_DEPENDENCY_BUILD: 1

Please do not abuse the salsa CI. Don’t build all of your 100 reverse dependencies with every commit!

Steve Kemp: Brexit has come

Tue, 05/01/2021 - 9:45 AM

Nothing too much has happened recently, largely as a result of the pandemic killing a lot of daily interests and habits.

However as a result of Brexit I'm having to do some paperwork, apparently I now need to register for permanent residency under the terms of the withdrawal agreement, and that will supersede the permanent residency I previously obtained.

Of course as a UK citizen I've now lost the previously-available freedom of movement. I can continue to reside here in Helsinki, Finland, indefinitely, but I cannot now move to any other random EU country.

It has crossed my mind, more than a few times, that I should attempt to achieve Finnish citizenship. As a legal resident of Finland the process is pretty simple, I just need two things:

  • Prove I've lived here for the requisite number of years.
  • Pass a language test.

Of course the latter requirement is hard: I can understand a lot of spoken and written Finnish, but writing it myself, and speaking at any length, is currently beyond me. I need to sit down and make the required effort to increase my fluency. There is the alternative option of learning Swedish, which is a hack a lot of immigrants use:

  • Learning Swedish is significantly easier for a native English-speaker.
  • But the downside is that it would be learning a language solely to "cheat" the test; it wouldn't actually be useful in my daily life.

Finland has two official languages, and so the banks, the medical world, the tax-office, etc, are obliged to provide service in both. However daily life, ordering food at restaurants, talking to parents in the local neighborhood? Finnish, or English are the only real options. So if I went this route I'd end up in a weird situation where I had to learn a language to pass a test, but then would continue to need to learn more Finnish to live my life. That seems crazy, unless I were desperate for a second citizenship which I don't think I am.

Learning Finnish has not yet been a priority, largely because I work in English in the IT-world, and of course when I first moved here I was working (remotely) for a UK company, and didn't have the time to attend lessons (because they were scheduled during daytime, on the basis that many immigrants are unemployed). Later we had a child, which meant that early-evening classes weren't a realistic option either.

(Of course I learned a lot of the obvious things immediately upon moving, things like numbers, names for food, days of the week were essential. Without those I couldn't have bought stuff in shops and would have starved!)

On the topic of languages a lot of people talk about how easy it is for children to pick up new languages, and while that is broadly true it is also worth remembering just how many years of correction and repetition they have to endure as part of the process.

For example, as noted already, we have a child; he is spoken to by everybody in Finnish. I speak to him in English, and he hears his mother and myself speaking English. But basically he's 100% Finnish with the exception of:

  • Me, speaking English to him.
  • His mother and I speaking English in his hearing.
  • Watching Paw Patrol.

If he speaks Finnish to me I pretend to not understand him, even when I do, just for consistency. As a result of that I've heard him tell strangers "Daddy doesn't speak Finnish" (in Finnish) when we've been stopped and asked for directions. He also translates what some other children have said into English for my benefit, which is adorable.

Anyway he's four, and he's pretty amazing at speaking to everybody in the correct language - he's outgrown the phase where he'd mix different languages in the same sentence ("more leipä", "saisinko milk") - when I took him to the UK he surprised and impressed me by being able to understand a lot of the heavy/thick accents he'd never heard before. (I'll still need to train him on Rab C. Nesbitt when he's a wee bit older, but so far no worries.)

So children learn languages, easily and happily? Yes and no. I've spent nearly two years correcting his English and he still makes the same mistake with gender. It's not a big deal, at all, but it's a reminder that while children learn this stuff, they still don't do it as easily as people imagine. I'm trying to learn and if I'd been corrected for two years over the same basic point you'd rightly think I was "slow", but actually that's just how it works. Learning languages requires a hell of a lot of practice, a lot of effort, and a lot of feedback/corrections.

Specifically Finnish doesn't have gendered pronouns, the same word is used for "he" and "she". This leads to a lot of Finnish people, adults and children, getting the pronouns wrong in English. In the case of our child he'll say "Mommy is sleeping, when he wake up?" In the case of adults I've heard people say "My girlfriend is a doctor, he works in a hospital", or "My dad is an accountant, she works for a big firm". As I say I've spent around two years making this correction to the child, and he's still nowhere near getting it right. Kinda adorable actually:

  • "Mommy is a woman we say "when she wakes up"..."
  • "Adriana is a girl we say "her bike".."

Russ Allbery: New year haul

Tue, 05/01/2021 - 7:03 AM

For once, I've already read and reviewed quite a few of these books.

Elizabeth Bear — Machine (sff)
Timothy Caulfield — Your Day, Your Way (non-fiction)
S.A. Chakraborty — The City of Brass (sff)
John Dickerson — The Hardest Job in the World (non-fiction)
Tracy Deonn — Legendborn (sff)
Lindsay Ellis — Axiom's End (sff)
Alix E. Harrow — The Once and Future Witches (sff)
TJ Klune — The House in the Cerulean Sea (sff)
Maria Konnikova — The Biggest Bluff (non-fiction)
Talia Lavin — Culture Warlords (non-fiction)
Yoon Ha Lee — Phoenix Extravagant (sff)
Yoon Ha Lee, et al. — The Vela (sff)
Michael Lewis — Flash Boys (non-fiction)
Michael Lewis — Losers (non-fiction)
Michael Lewis — The Undoing Project (non-fiction)
Megan Lindholm — Wizard of the Pigeons (sff)
Nathan Lowell — Quarter Share (sff)
Adrienne Martini — Somebody's Gotta Do It (non-fiction)
Tamsyn Muir — Princess Florinda and the Forty-Flight Tower (sff)
Naomi Novik — A Deadly Education (sff)
Margaret Owen — The Merciful Crow (sff)
Anne Helen Petersen — Can't Even (non-fiction)
Devon Price — Laziness Does Not Exist (non-fiction)
The Secret Barrister — The Secret Barrister (non-fiction)
Studs Terkel — Working (non-fiction)
Kathi Weeks — The Problem with Work (non-fiction)
Reeves Wiedeman — Billion Dollar Loser (non-fiction)

Rather a lot of non-fiction in this batch, much more than usual. I've been in a non-fiction mood lately.

So many good things to read!

Iustin Pop: Year 2020 review

Mon, 04/01/2021 - 11:36 PM

Year 2020. What a year! Sure, already around early January there were rumours/noise about Covid-19, but who would have thought where it would end up! Thankfully, none of my close or extended family was directly (medically) affected by Covid, so I/we had a privileged year compared to so many other people.

I thought about how to write a mini-summary, but prose is too difficult, so let’s just go month by month. Please note that my memory is fuzzy after 9 months cooped up in the apartment, so things could be off by ±1 month compared to what I wrote.

Timeline

January

Ski weekend. Skiing is awesome! Cancelling a US work trip since there will be more opportunities soon (har har!).

February

Ski vacation. Yep, skiing is awesome. Can’t wait for next season (har har!). Discussions about Covid start in the office, but more along the lines of “is this scary or just interesting?” (yes, this was before casualties). Then things start escalating, work-from-home at least partially, etc. etc. Definitely not just “interesting” anymore.

In Garmin-speak, I got ~700+ “intensity minutes” in February (this correlates with activity time, but depending on the intensity of the effort, one wall-clock minute counts as either one or two intensity minutes).

March

Sometime during the month, my workplace introduces mandatory WFH. I remember being the last person in our team in the office, on the last day we were allowed to work, cleaning my desk/etc. and thinking “all this, and we’ll be back in 3 weeks or so”. Har har!

I buy a webcam, just in case WFH gets extended. And start to increase my sports - getting double the intensity minutes (1500+).

April

Switzerland enters the first, hard, lockdown. Or was it late March? Not entirely sure, but in my mind March was the opening, and April was the first main course.

It is challenging, having to juggle family and work and stressed schedule, but also interesting. Looking back, I think I liked April the most, as people were actually careful at that time.

I continue upgrading my “home office” - new sound system, so that I don’t have to plug in/plug out cables.

1700+ intensity minutes this month.

May

Continued WFH, somewhat routine now. My then internet provider started sucking hard, so I upgraded, with good results. I’m still happy, half a year later (quite happy, even).

Still going strong otherwise, but waiting for summer vacation, whatever it will be. A tiny bit more effort, so 1800 intensity minutes in May.

June

Switzerland relaxes the lockdown, but not my company, so as the rest of the family goes out and about, I start feeling alone in the apartment. And somewhat angry at it, which (counter-intuitively) impacts my sports, so I only get 1500 intensity minutes.

I go and buy a coffee machine—a real one, that takes beans and grinds them, so I get to enjoy the smell of freshly-ground coffee and the fun of learning about coffee beans, etc. But it occupies the time.

On the work/job front, I think at this time I finally got a workstation for home, instead of a laptop (which was ultra-portable too), so together with the coffee machine, it feels like a normal work environment. Well, modulo all the people. At least I’m not crying anymore every time I open a new tab in Chrome…

July

The situation is slowly getting better, but no, not at my company. Still mandatory WFH, with (if I recall correctly) one day per week allowed, and no meeting other people. I get angrier, but manage to channel my energy into sports, almost doubling my efforts in July - 2937 intensity minutes, not quite reaching the 3000 magic number.

I buy more stuff to clean and take care of my bicycles, which I don’t really use. So shopping therapy too.

August

The month starts with a one-week family vacation, but I take a bike too, so I manage to put in some effort (it was quite nice riding TBH). A few changes in my personal life (nothing unexpected) complicate things a bit, but at this moment I really thought Switzerland was going to continue to decrease in infections/R-factor/etc., so things would get back to normal, right? My company expands the work-from-office part a bit, so I’m optimistic.

Sports wise, still going strong, 2500 intensity minutes, preparing for the single race this year.

September

The personal life changes from August start to stabilise, so things become again routine, and I finally get to do a race. Life was good for an extended weekend (well, modulo race angst, but that’s part of the fun), and I feel justified to take it slow the week after the race. And the week after that too.

I end the month with close to, but not quite, 1900 intensity minutes.

October

October starts with school holidays and a one week family vacation, but I feel demotivated. Everything is closing down again (well, modulo schools), and I actually have difficulty getting re-adjusted to no longer being alone in the apartment during the work hours.

I only get ~1000 intensity minutes in October, mainly thanks to good late autumn weather and outside rides. And I start playing way more computer games. I also sell my PS4, hoping to get a PS5 next month.

November

November continues to suck. I think my vacation in October was actually detrimental - it broke my rhythm, I don’t really do sport anymore, not consistently at least, so I only get 700+ intensity minutes. And I keep playing computer games, even if I missed the PS5 ordering window; so I switch to PC gaming.

My home office feels very crowded, so as kind of anti-shopping therapy, I sell tons of smallish stuff; can’t believe how much crap I kept around while not really using it.

I also manage to update/refresh all my Debian packages, since next freeze approaches. Better than for previous releases, so it feels good.

December

December comes, end of the year, the much-awaited vacation - which we decide to cancel due to the situation in the whole of Switzerland (and neighbouring countries). I basically only play computer games, and get a grand total of 345 activity minutes this month.

And since my weight is inversely correlated to my training, I’m basically back at my February weight, having lost all the gains I made during the year. I mean, having gained back all the fat I lost. Err, you know what I mean; I’m back close to my high-watermark, which is not good.

Conclusion

I was somehow hoping that the end of the year would allow me to reset and restart, but - a few days into January - it doesn’t really feel so. My sleep schedule is totally ruined, my motivation is so-so, and I think the way I crashed in October was much harder/worse than I realised at the time, but in a way expected for this crazy year.

I have some projects for 2021 - or at least, I’m trying to make up a project list - in order to get a bit more structure in my continued “stuck inside the house” part, which is especially terrible when on-call. I don’t know how the next 3-6 months will evolve, but I’m thankful that so far, we are all healthy. Actually, I personally have been physically healthier than in other years, due to less contact with other people.

On the other side, thinking of all the health-care workers, or even service workers, my IT job is comfy and all I am is a spoiled person (I could write many posts on specifically this topic). I really need to up my willpower and lower my spoil level. Hints are welcome :(

I wish everybody a better year in 2021.

Jan Wagner: Backing up Windows (the hard way)

Mon, 04/01/2021 - 6:35 PM

Sometimes you need to do things you don't like and you don't know where you will end up.
In our household there exists one (production) system running Windows. Don't ask why, and please no recommendations on how to substitute it. Some things are hard to (ex)change, for example your love partner.

Looking into Backup with rsync on Windows (WSL) I needed to start a privileged powershell, so I first started an unprivileged one:

powershell

Just to start a privileged one:

Start-Process powershell -Verb runAs

Now you can follow the Instructions from Microsoft to install OpenSSH. Or just install the OpenSSH Server:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

Check if a firewall rule was created (maybe you want to adjust it):

Get-NetFirewallRule -Name *ssh*

Start the OpenSSH server:

Start-Service sshd

Set the OpenSSH server service to start automatically:

Set-Service -Name sshd -StartupType 'Automatic'

You can create the .ssh directory with the correct permissions by connecting to localhost and creating the known_hosts file.

ssh user@127.0.0.1

When you intend to use public key authentication for users in the administrators group, have a look into How to Login Windows Using SSH Key Under Local Admin.
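
For users in that group, the relevant bits boil down to something like the following PowerShell sketch; the shared key file and ACL commands follow Microsoft's documentation, while the key itself is obviously a placeholder:

# For members of the Administrators group, the Windows OpenSSH server reads
# keys from a shared file instead of the per-user authorized_keys:
Add-Content -Path C:\ProgramData\ssh\administrators_authorized_keys `
    -Value 'ssh-ed25519 AAAA...placeholder... backup@client'
# The file must only be accessible to Administrators and SYSTEM:
icacls.exe "C:\ProgramData\ssh\administrators_authorized_keys" `
    /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"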

Indeed you can get rsync running via WSL. But why load tons of dependencies onto your system? For the installation of rsync I cheated a bit and used Chocolatey by running choco install rsync, but there is also an issue requesting rsync support for the OpenSSH server which includes an archive with an rsync.exe and libraries which may also fit. You can place those files, for example, into C:\Windows\System32\OpenSSH so they are in the PATH.

So here we are. Now I can solve all my other issues with BackupPC, Windows firewall and the network challenges to get access to the isolated dedicated network of the windows system.

John Goerzen: More Topics on Store-And-Forward (Possibly Airgapped) ZFS and Non-ZFS Backups with NNCP

Mon, 04/01/2021 - 6:18 PM

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In my previous post, I introduced a way to use ZFS backups over NNCP. In this post, I’ll expand on that and also explore non-ZFS backups.

Use of nncp-file instead of nncp-exec

The previous example used nncp-exec (like UUCP’s uux), which lets you pipe stdin in, then queues up a request to run a given command with that input on a remote. I discussed that NNCP doesn’t guarantee order of execution, but that for the ZFS use case, that was fine since zfs receive would just fail (causing NNCP to try again later).

At present, nncp-exec stores the data piped to it in RAM before generating the outbound packet (the author plans to fix this shortly). That made it unusable for some of my backups, so I set it up another way: with nncp-file, the tool to transfer files to a remote machine. A cron job then picks them up and processes them.

On the machine being backed up, we have to find a way to encode the dataset to be received. I chose to do that as part of the filename, so the updated simplesnap-queue could look like this:

#!/bin/bash

set -e
set -o pipefail

DEST="`echo $1 | sed 's,^tank/simplesnap/,,'`"
FILE="bakfsfmt2-`date "+%s.%N".$$`_`echo "$DEST" | sed 's,/,@,g'`"

echo "Processing $DEST to $FILE" >&2

# stdin piped to this
zstd -8 - \
  | gpg --compress-algo none --cipher-algo AES256 -e -r 012345... \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:$FILE'" >&2

echo "Queued $DEST to $FILE" >&2

I’ve added compression and encryption here as well; more on that below.

On the backup server, we would define a different incoming directory for each node in nncp.hjson. For instance:

host1: {
   ...
   incoming: "/var/local/nncp-backups-incoming/host1"
}
host2: {
   ...
   incoming: "/var/local/nncp-backups-incoming/host2"
}

I’ll present the scanning script in a bit.

Offsite Backup Rotation

Most of the time, you don’t want just a single drive to store the backups. You’d like to have a set. At minimum, one wouldn’t be plugged in so lightning wouldn’t ruin all your backups. But maybe you’d store a second drive at some other location you have access to (friend’s house, bank box, etc.)

There are several ways you could solve this:

  • If the remote machine is at a location with network access and you trust its physical security (remember that although it will store data encrypted at rest and will transport it encrypted, it will — in most cases — handle un-encrypted data during processing), you could of course send NNCP packets to it over the network at the same time you send them to your local backup system.
  • Alternatively, if the remote location doesn’t have network access or you want to keep it airgapped, you could transport the NNCP packets by USB drive to the remote end.
  • Or, if you don’t want to have any kind of processing capability remotely — probably a wise move — you could rotate the hard drives themselves, keeping one plugged in locally and unplugging the other to take it offsite.

The third option can be helped with NNCP, too. One way is to create separate NNCP installations for each of the drives that you store data on. Then, whenever one is plugged in, the appropriate NNCP config will be loaded and appropriate packets received and processed. The neighbor machine — the spooler — would just store up packets for the offsite drive until it comes back onsite (or, perhaps, your airgapped USB transport would do this). Then when it’s back onsite, all the queued up ZFS sends get replayed and the backups replicated.
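
A minimal sketch of that idea, assuming each drive carries its own NNCP configuration at a hypothetical path; nncp-toss is NNCP's tool for processing the inbound spool, and the NNCP tools accept -cfg to select a configuration file:

#!/bin/bash
# Hypothetical cron job on the spooler: process queued packets for whichever
# backup drive happens to be mounted right now.
for disk in backupdisk1 backupdisk2; do
    if mountpoint -q "/mnt/$disk"; then
        su nncp -c "/usr/local/nncp/bin/nncp-toss -cfg /mnt/$disk/nncp.hjson"
    fi
done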

Now, how might you handle this with NNCP?

The simple way would be to have each system generating backups send them to two destinations. For instance:

zstd -8 - | gpg --compress-algo none --cipher-algo AES256 -e -r 07D5794CD900FAF1D30B03AC3D13151E5039C9D5 \
  | tee >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk1:$FILE'") \
        >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk2:$FILE'") \
  > /dev/null

You could probably also more safely use pee(1) (from moreutils) to do this.
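
For instance, a minimal sketch of the same fan-out with pee, using the same hypothetical destinations, key, and $FILE as above; unlike process substitution, pee waits for both writers to finish before exiting:

zstd -8 - | gpg --compress-algo none --cipher-algo AES256 -e -r 07D5794CD900FAF1D30B03AC3D13151E5039C9D5 \
  | pee "su nncp -c \"/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk1:$FILE'\"" \
        "su nncp -c \"/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk2:$FILE'\""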

This has an unfortunate result of doubling the network traffic from every machine being backed up. So an alternative option would be to queue the packets to the spooling machine, and run a distribution script from it; something like this, in part:

INCOMINGDIR="/var/local/nncp-bakfs-incoming" LOCKFILE="$INCOMINGDIR/.lock" printf -v EVAL_SAFE_LOCKFILE '%q' "$LOCKFILE" if dotlockfile -r 0 -l -p "${LOCKFILE}"; then logit "Lock obtained at ${LOCKFILE} with dotlockfile" trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM else logit "Could not obtain lock at $LOCKFILE; $0 likely already running." exit 0 fi logit "Scanning queue directory..." cd "$INCOMINGDIR" for HOST in *; do cd "$INCOMINGDIR/$HOST" for FILE in bakfsfmt2-*; do if [ -f "$FILE" ]; then for BAKFS in backupdisk1 backupdisk2; do runcommand nncp-file -nice B+5 -noprogress "$FILE" "$BAKFS:$HOST/$FILE" done runcommand rm "$FILE" else logit "$HOST: Skipping $FILE since it doesn't exist" fi done done logit "Scan complete."

Security Considerations

You’ll notice that in my example above, the encryption happens as the root user, but nncp is called under su. This means that even if there is a vulnerability in NNCP, the data would still be protected by GPG. I’ll also note here that many sites run ssh as root unnecessarily; the same principles should apply there. (ssh has had vulnerabilities in the past as well). I could have used gpg’s built-in compression, but zstd is faster and better, so we can get good performance by using fast compression and piping that to an algorithm that can use hardware acceleration for encryption.

I strongly encourage considering transport, whether ssh or NNCP or UUCP, to be untrusted. Don’t run it as root if you can avoid it. In my example, the nncp user, which all NNCP commands are run as, has no access to the backup data at all. So even if NNCP were compromised, my backup data wouldn’t be. For even more security, I could also sign the backup stream with gpg and validate that on the receiving end.
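
A minimal sketch of that signing, assuming a signing key exists on the machine being backed up (the key IDs here are placeholders). Note that gpg -d returns non-zero on a bad signature, so the existing runcommand/pipefail handling would notice, but requiring that a signature be present at all needs an extra check of gpg's status output:

# Sending side: sign as well as encrypt before queueing with nncp-file.
zstd -8 - \
  | gpg --compress-algo none --cipher-algo AES256 --local-user 0xSIGNKEY -s -e -r 012345... \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:$FILE'" >&2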

I should note, however, that this conversation assumes that a network- or USB-facing ssh or NNCP is more likely to have an exploitable vulnerability than is gpg (which here is just processing a stream). This is probably a safe assumption in general. If you believe gpg is more likely to have an exploitable vulnerability than ssh or NNCP, then obviously you wouldn’t take this particular approach.

On the zfs side, the use of -F with zfs receive is avoided; this could lead to a compromised backed-up machine generating a malicious rollback on the destination. Backup zpools should be imported with -R or -N to ensure that a malicious mountpoint property couldn’t be used to cause an attack. I choose to use “zfs receive -u -o readonly=on” which is compatible with both unmounted backup datasets and zpools imported with -R (or both). To access the data in a backup dataset, you would normally clone it and access it there.
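
For example, a hypothetical way to get at the data without touching the received dataset itself (dataset and snapshot names are made up):

# Clone a received snapshot rather than mounting the backup dataset:
zfs clone backups/simplesnap/host1/home@some-snapshot backups/restore-tmp
# ... read whatever is needed from the clone's mountpoint, then discard it:
zfs destroy backups/restore-tmp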

The processing script

So, putting this all together, here's an example of a processing script that would run from cron as root and process the incoming ZFS data.

#!/bin/bash

set -e
set -o pipefail

# Log a message
logit () {
   logger -p info -t "`basename "$0"`[$$]" "$1"
}

# Log an error message
logerror () {
   logger -p err -t "`basename "$0"`[$$]" "$1"
}

# Log stdin with the given code. Used normally to log stderr.
logstdin () {
   logger -p info -t "`basename "$0"`[$$/$1]"
}

# Run command, logging stderr and exit code
runcommand () {
   logit "Running $*"
   if "$@" 2> >(logstdin "$1") ; then
      logit "$1 exited successfully"
      return 0
   else
      RETVAL="$?"
      logerror "$1 exited with error $RETVAL"
      return "$RETVAL"
   fi
}

STORE=backups/simplesnap
INCOMINGDIR=/backups/nncp/incoming

if ! [ -d "$INCOMINGDIR" ]; then
   logerror "$INCOMINGDIR doesn't exist"
   exit 0
fi

LOCKFILE="/backups/nncp/.nncp-backups-zfs-scan.lock"
printf -v EVAL_SAFE_LOCKFILE '%q' "$LOCKFILE"
if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
   logit "Lock obtained at ${LOCKFILE} with dotlockfile"
   trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
   logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
   exit 0
fi

EXITCODE=0

cd "$INCOMINGDIR"
logit "Scanning queue directory..."
for HOST in *; do
   HOSTPATH="$INCOMINGDIR/$HOST"
   # files like bakfsfmt2-134.13134_dest
   for FILE in "$HOSTPATH"/bakfsfmt2-[0-9]*_?*; do
      if [ ! -f "$FILE" ]; then
         logit "Skipping non-existent $FILE"
         continue
      fi

      # Now, $DEST will be HOST/DEST. Strip off the @ also.
      DEST="`echo "$FILE" | sed -e 's/^.*bakfsfmt2[^_]*_//' -e 's,@,/,g'`"

      if [ -z "$DEST" ]; then
         logerror "Malformed dest in $FILE"
         continue
      fi

      HOST2="`echo "$DEST" | sed 's,/.*,,g'`"
      if [ -z "$HOST2" ]; then
         logerror "Malformed DEST $DEST in $FILE"
         continue
      fi

      if [ ! "$HOST" = "$HOST2" ]; then
         logerror "$FILE: $HOST doesn't match $HOST2"
         continue
      fi

      logit "Processing $FILE to $STORE/$DEST"
      if runcommand gpg -q -d < "$FILE" | runcommand zstdcat | runcommand zfs receive -u -o readonly=on "$STORE/$DEST"; then
         logit "Successfully processed $FILE to $STORE/$DEST"
         runcommand rm "$FILE"
      else
         logerror "FAILED to process $FILE to $STORE/$DEST"
         EXITCODE=15
      fi
   done
done

exit "$EXITCODE"

Applying These Ideas to Non-ZFS Backups

ZFS backups made our job easier in a lot of ways:

  • ZFS can calculate a diff based on an efficiently-stored previous local state (snapshot or bookmark), rather than a comparison to a remote state (rsync)
  • ZFS "incremental" sends, while less efficient than rsync, are reasonably efficient, sending only changed blocks
  • ZFS receive detects and enforces that the incremental source on the local machine must match the incremental source of the original stream, enforcing ordering
  • Datasets using ZFS encryption can be sent in their encrypted state
  • Incrementals can be done without a full scan of the filesystem

Some of these benefits you just won't get without ZFS (or something similar like btrfs), but let's see how we could apply these ideas to non-ZFS backups. I will explore the implementation of them in a future post.

When I say "non ZFS", I am being a bit vague as to whether the source, the destination, or both systems are running a non-ZFS filesystem. In general I'll assume that neither are ZFS.

The first and most obvious answer is to just tar up the whole system and send that every day. This is, of course, only suitable for small datasets on a fast network. These tarballs could be unpacked on the destination and stored more efficiently via any number of methods (hardlink trees, a block-level deduplicator like borg or rdedup, or even just simply compressed tarballs).
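
As a rough sketch of that approach, reusing the queue pattern from the ZFS example above (the host name and GPG key are again placeholders):

# Stream a full tarball of the root filesystem to the backup server via NNCP.
FILE="fulltar-`date +%s`.tar.zst.gpg"
tar --one-file-system -cpf - / \
  | zstd -8 - \
  | gpg --compress-algo none --cipher-algo AES256 -e -r 012345... \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:$FILE'"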

To make the network trip more efficient, something like rdiff or xdelta could be used. A signature file could be stored on the machine being backed up (generated via tee/pee at stream time), and the next run could simply send an rdiff delta over NNCP. This would be quite network-efficient, but still would require reading every byte of every file on every backup, and would also require quite a bit of temporary space on the receiving end (to apply the delta to the previous tarball and generate a new one).
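
A hypothetical sketch of that rdiff flow (filenames are illustrative; the delta, rather than a full tarball, is what would be compressed, encrypted, and queued with nncp-file as above):

# On the machine being backed up: keep a signature of the previous full
# tarball (it could also be generated at stream time via tee, as noted),
# then compute a delta of the new tarball against it.
rdiff signature old.tar old.sig
rdiff delta old.sig new.tar backup.delta

# On the receiving end, which still holds the previous tarball, rebuild the
# new one from the old one plus the delta.
rdiff patch old.tar backup.delta new.tar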

Alternatively, a program that generates incremental backup files such as rdup could be used. These could be transmitted over NNCP to the backup server, and unpacked there. While perhaps less efficient on the network -- every file with at least one modified byte would be retransmitted in its entirety -- it avoids the need to read every byte of unmodified files or to have enormous temporary space. I should note here that GNU tar claims to have an incremental mode, but it has a potential data loss bug.

There are also some tools with algorithms that may apply well in this use case: syrep and fssync being the two most prominent examples, though rdedup (mentioned above) and the nascent asuran project may also be combinable with other tools to achieve this effect.

I should, of course, conclude this section by mentioning btrfs. Every time I've tried it, I've run into serious bugs, and its status page indicates that only some of them have been resolved. I would not consider using it for something as important as backups. However, if you are comfortable with it, it is likely to be able to run in more constrained environments than ZFS and could probably be processed in much the same way as zfs streams.

Russ Allbery: Review: The Once and Future Witches

Mon, 04/01/2021 - 3:49 AM

Review: The Once and Future Witches, by Alix E. Harrow

Publisher: Redhook Books Copyright: October 2020 ISBN: 0-316-42202-9 Format: Kindle Pages: 515

Once upon a time there were three sisters.

They were born in a forgotten kingdom that smelled of honeysuckle and mud, where the Big Sandy ran wide and the sycamores shone white as knuckle-bones on the banks. The sisters had no mother and a no-good father, but they had each other; it might have been enough.

But the sisters were banished from their kingdom, broken and scattered.

The Once and Future Witches opens with Juniper, the youngest, arriving in the city of New Salem. The year is 1893, but not in our world, not quite; Juniper has witch-ways in her pocket and a few words of power. That's lucky for her because the wanted posters arrived before she did.

Unbeknownst to her or to each other, her sisters, Agnes and Bella, are already in New Salem. Agnes works in a cotton mill after having her heart broken one too many times; the mill is safer because you can't love a cotton mill. Bella is a junior librarian, meek and nervous and uncertain but still fascinated by witch-tales and magic. It's Bella who casts the spell, partly by accident, partly out of wild hope, but it was Juniper arriving in the city who provided the final component that made it almost work. Not quite, not completely, but briefly the lost tower of Avalon appears in St. George's Square. And, more importantly, the three sisters are reunited.

The world of the Eastwood sisters has magic, but the people in charge of that world aren't happy about it. Magic is a female thing, contrary to science and, more importantly, God. History has followed a similar course to our world in part because magic has been ruthlessly suppressed. Inquisitors are a recent memory and the cemetery has a witch-yard, where witches are buried unnamed and their ashes sown with salt. The city of New Salem is called New Salem because Old Salem, that stronghold of witchcraft, was burned to the ground and left abandoned, fit only for tourists to gawk at the supposedly haunted ruins. The women's suffrage movement is very careful to separate itself from any hint of witchcraft or scandal, making its appeals solely within the acceptable bounds of the church.

Juniper is the one who starts to up-end all of that in New Salem. Juniper was never good at doing what she was told.

This is an angry book that feels like something out of another era, closer in tone to a Sheri S. Tepper or Joanna Russ novel than the way feminism is handled in recent work. Some of that is the era of the setting, before women even had the right to vote. But primarily it's because Harrow, like those earlier works, is entirely uninterested in making excuses or apologies for male behavior. She takes an already-heated societal conflict and gives the underdogs magic, which turns it into a war. There is likely a better direct analogy from the suffrage movement, but the comparison that came to my mind was if Martin Luther King, Jr. proved ineffective or had not existed, and instead Malcolm X or the Black Panthers became the face of the Civil Rights movement.

It's also an emotionally exhausting book. The protagonists are hurt and lost and shattered. Their moments of victory are viciously destroyed. There is torture and a lot of despair. It works thematically; all the external solutions and mythical saviors fail, but in the process the sisters build their own strength and their own community and rescue themselves. But it's hard reading at times if you're emotionally invested in the characters (and I was very invested). Harrow does try to balance the losses with triumphs and that becomes more effective and easier to read in the back half of the book, but I struggled with the grimness at the start.

One particular problem for me was that the sisters start the book suspicious and distrustful of each other because of lies and misunderstandings. This is obvious to the reader, but they don't work through it until halfway through the book. I can't argue with this as a piece of characterization — it made sense to me that they would have reacted to their past the way that they did. But it was still immensely frustrating to read, since in the meantime awful things were happening and I wanted them to band together to fight. They also worry over the moral implications of the fate of their father, whereas I thought the only problem was that the man couldn't die more than once. There too, it makes sense given the moral framework the sisters were coerced into, but it is not my moral framework and it was infuriating to see them stay trapped in it for so long.

The other thing that I found troubling thematically is that Harrow personalizes evil. I thought the more interesting moral challenge posed in this book is a society that systematically abuses women and suppresses their power, but Harrow gradually supplants that systemic conflict with a villain who has an identity and a backstory. It provides a more straightforward and satisfying climax, and she does avoid the trap of letting triumph over one character solve all the broader social problems, but it still felt too easy. Worse, the motives of the villain turn out to be at right angles to the structure of the social oppression. It's just a tool he's using, and while that's also believable, it means the transfer of the narrative conflict from the societal to the personal feels like a shying away from a sharper political point. Harrow lets the inhabitants of New Salem off too easily by giving them the excuse of being manipulated by an evil mastermind.

What I thought Harrow did handle well was race, and it feels rare to be able to say this about a book written by and about white women. There are black women in New Salem as well, and they have their own ways and their own fight. They are suspicious of the Eastwood sisters because they're worried white women will stir up trouble and then run away and leave the consequences to fall on black women... and they're right. An alliance only forms once the white women show willingness to stay for the hard parts. Black women are essential to the eventual success of the protagonists, but the opposite is not necessarily true; they have their own networks, power, and protections, and would have survived no matter what the Eastwoods did. The book is the Eastwoods' story, so it's mostly concerned with white society, but I thought Harrow avoided both making black women too magical or making white women too central. They instead operate in parallel worlds that can form the occasional alliance of mutual understanding.

It helps that Cleopatra Quinn is one of the best characters of the book.

This was hard, emotional reading. It's the sort of book where everything has a price, even the ending. But I'm very glad I read it. Each of the three sisters gets their own, very different character arc, and all three of those arcs are wonderful. Even Agnes, who was the hardest character for me to like at the start of the book and who I think has the trickiest story to tell, becomes so much stronger and more vivid by the end of the book. Sometimes the descriptions are trying a bit too hard and sometimes the writing is not quite up to the intended goal, but some of the descriptions are beautiful and memorable, and Harrow's way of weaving the mythic and the personal together worked for me.

This is a more ambitious book than The Ten Thousand Doors of January, and while I think the ambition exceeded Harrow's grasp in a few places and she took a few thematic short-cuts, most of it works. The characters felt like living and changing people, which is not easy given how heavily the story structure leans on maiden, mother, and crone archetypes. It's an uncompromising and furious book that turns the anger of 1970s feminist SF onto themes that are very relevant in 2021. You will have to brace yourself for heartbreak and loss, but I think it's fantasy worth reading. Recommended.

Rating: 8 out of 10

Enrico Zini: COVID-19 vaccines

Mon, 04/01/2021 - 12:00 AM

COVID-19 vaccination has started, and this site tracks progress in Italy. This site, world-wide.

Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine has a pretty good description of the BioNTech/Pfizer SARS-CoV-2 Vaccine, codon by codon, broken down in a way that I managed to follow.

From the same author, DNA seen through the eyes of a coder

Emmanuel Kasper: How to move a single VM between cloud providers

Sun, 03/01/2021 - 1:58 PM

I have been running a small Debian VM for a decade, which I use for basic web and mail hosting. Since most of the VM setup is done manually and does not follow the Infrastructure as Code pattern, it is faster to simply copy the filesystem when switching providers instead of reconfiguring everything.
The steps involved are:

1. create a backup of the filesystem using tar or rsync, excluding dynamic content
rsync  --archive \
    --one-file-system --numeric-ids \
    --rsh "ssh -i private_key" root@server:/ /local_dir
or
tar -cvpzf backup.tar.gz \
--numeric-owner \
--exclude=/backup.tar.gz \
--one-file-system /

Notice here the --one-file-system switch which avoids backing up the content of mount points like /proc or /dev.
If you have extra partitions with a mounted filesystem, like /boot or /home, you need to add a separate backup for those.

2. create a new VM on the new cloud provider, verify you have a working console access, and power it off.
3. boot on the new cloud provider a rescue image
4. partition the disk image on the new provider.
5. mount the new root partition, and untar your backup on it. You could for instance push the local backup via rsync, or download the tar archive using https.
6. update network configuration and /etc/fstab
7. chroot into the target system, and reinstall grub (a rough sketch of steps 4 to 7 follows below)
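
A rough, hypothetical sketch of steps 4 to 7, assuming a single ext4 root partition on /dev/vda; adjust device names and layout to your provider and to the partitions you actually backed up:

parted --script /dev/vda mklabel msdos mkpart primary ext4 1MiB 100%
mkfs.ext4 /dev/vda1
mount /dev/vda1 /mnt
tar -xvpzf /path/to/backup.tar.gz -C /mnt --numeric-owner
# edit /mnt/etc/fstab and the network configuration before rebooting
for fs in dev proc sys; do mount --rbind "/$fs" "/mnt/$fs"; done
chroot /mnt /bin/bash -c "grub-install /dev/vda && update-grub"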

This works surprisingly well, and if you made your backup locally, you can test the whole procedure by building a test VM from your backup. Just replace the debootstrap step with a command like tar -xvpzf /path/to/backup.tar.gz -C /mount_point --numeric-owner

Using this procedure, I moved from Hetzner (link in French language) to Digital Ocean, from Digital Ocean to Vultr, and now back at Hetzner.

Jonathan Wiltshire: RCBW 21.1

Sat, 02/01/2021 - 5:36 PM

Does software-properties-common really depend on gnupg, as described in #970124, or could it be python3-software-properties? Should it be Depends, or Recommends? And do you accept the challenge of finding out and preparing a patch (and even an upload) to fix the bug?
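
Not a solution, but one hypothetical way to start digging, using only the two package names mentioned in the bug:

# Compare the declared relationships of the two candidate packages:
apt-cache show software-properties-common python3-software-properties \
  | grep -E '^(Package|Depends|Recommends):'
# Then look at where gpg is actually invoked in the source:
apt-get source software-properties-common
grep -rn 'gpg' software-properties-*/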

Martin-Éric Racine: Help needed: clean up and submit KMS driver for Geode LX to LKML

Sat, 02/01/2021 - 11:41 AM

Ever since X.org switched to rootless operation, the days of the Geode X.org driver have been numbered. The old codebase dates back from Geode's early days at Cyrix, was then updated by NSC to add support for their new GX2 architecture, from which AMD dropped GX1 support and added support for their new LX architecture. To put it mildly, that codebase is a serious mess.

However, at least the LX code comes with plenty of niceties, such as being able to detect when it runs on an OLPC XO-1 and to probe DDC pins to determine the optimal display resolution. This still doesn't make the codebase cruft-free.

Anyhow, most Linux distributions have dropped support for anything older than i686 with PAE, which essentially means that the GX2 code is just for show. Debian is one of very few distributions whose x86-32 port still ships with i686 without PAE. In fact, the lowest common denominator kernel on i386 is configured for Geode (LX).

A while back, someone had started working on a KMS driver for the Geode LX. Through word of mouth, I got my hands on a copy of their Git tree. The driver worked reasonably well, but the codebase needs some polishing before it could be included in the Linux kernel tree.

Hence this call for help:

Is there anyone with good experience of the LKML coding standards who would be willing to clean up the driver's code and submit the patch to the LKML?