Why my udev backup script kept failing and how I fixed it

I wanted to use udev to sync my backups to an external USB drive, but the rsync-based shell script kept mysteriously stopping before the job was done.

When I say 'use udev for backup', what I mean is using udev's discovery of me plugging in the offline backup drive to fire off the backup script. Plug it in and the synchronization starts automatically. When it's done, it automatically unmounts the drive and informs me of the outcome via email. Sometimes there would be several weeks of new backups equalling gigabytes of data, so it is convenient to just plug the drive in and leave it to do its thing.

The script wasn't crashing in any obvious way; one moment it was running and the next it wasn't. htop showed it running. iotop showed data being transferred. And then, poof, suddenly it was gone, even though there were still tens of GBs left to transfer.

Luckily, the explanation is simple. Udev can fire off scripts, but they are supposed to be very short-lived. Syncing 40 GB over a USB 2.0 connection takes over an hour; the script was being killed by the system after a few minutes. I found this elegant solution in an openSUSE forum post.

Why use udev for backups

The motivation for keeping offline backups in addition to the regular backup files came from the discovery of Linux ransomware. Every night a backup script runs on my server, so I always have 90 days' worth of WordPress database snapshots, among other things, on my backup drive. That wouldn't be any help if every one of them were encrypted and held to ransom along with the source files. So to prevent such a situation, and to insure against plain drive failure, I want copies of those backup files on a drive that is not always connected to the box.

udev is the dynamic device manager for Linux, that is, the service that maintains the /dev directory. It is configured by way of one-liner configurations called rules. Udev rules specify what should happen when a specific hardware event takes place, e.g. when a device appears, a symlink is created or a drive is automounted.

So this is the idea: using a bit of scripting and a udev rule, the offline backup drive is automatically mounted upon insertion and then synced with the permanent backup drive before being unmounted again. Notifying me that the operation finished successfully would be an added bonus.

As I generally have to ssh into the box to get a terminal, having udev auto-trigger all this for me rather than doing it manually makes sense. Making it this easy will certainly encourage me to keep the offline drive updated.

Udev backups: The wrong way

The solution you will find if you google for it consists of two parts:

  • A bash script that a) mounts the offline backup drive, b) runs an rsync job, and c) unmounts the drive and possibly notifies the user.
  • A udev rule that looks for the specific USB drive and fires off the bash script.

As mentioned at the start of the post, this doesn't work quite right unless you're only moving small amounts of data. To get it working properly, I had to insert a systemd service in between the two.

Part 1: Rsync

The rsync line is fairly simple:

rsync -aP --delete --log-file="/tmp/offline" "/mnt/wpbackup/" "/mnt/wpoffline"

The parameters are:

  • -a is for archive mode, which preserves permissions and ownership, makes rsync run recursively, and more.
  • -P combines --partial (resume partially transferred files) and --progress (not that I have much use for the latter).
  • --delete removes files on the destination that are no longer on the source. My backup script creates new backups and then deletes any that are older than 90 days. As both drives have limited space, this is necessary. If this is not an issue for you, just remove the flag.
  • --log-file redirects output to a file that I then mail myself once the script is done.

Note that the source ("/mnt/wpbackup/") ends in a forward slash. Without it, the source directory itself is recreated inside the destination. This may or may not be what you want.

Part 2: Udev

Udev rules go into the /etc/udev/rules.d folder. Each rule is a single line with intermingled parameters that describe what to match and what to do when something matches. It's not exactly the most readable format, but we'll work with what we've got:

SUBSYSTEM=="block", ATTRS{serial}=="08605E6D401FBFA1470A1A42", SYMLINK+="offlinebackup", RUN+="/path/to/test_script.sh"

Matching is done in the familiar way with a double equals sign, while assignment of new values is accomplished with a single equals sign. Appending to a list property is done with '+='.

Let's say we want to match a drive that is currently plugged in. The single partition on the drive has been given the device name 'sde1'. Running udevadm info -a /dev/sde1 will provide a list of attributes for the partition and its "parents" that we can match against. The parent of the partition is the drive, the parent of that the disk controller, and so on. Two important things to keep in mind:

  • You can match against any level in the tree that udevadm info presents: partition, drive, disk controller, USB, etc. Obviously they all go together when the drive is inserted, so it is all the same 'event'. We do, however, want to specifically target the partition, because that gives us an easy way to pass the assigned device node name (sde1 at the moment, but maybe sdg1 the next time it is plugged in) on to systemd.
  • You can match against the specific level you want (e.g. it has to be the first partition of a drive and have this particular size) and add parent-level attributes to narrow it down (the manufacturer has to be Seagate, the product ID 5700). However, you can only use attributes from one single parent level.

Those were the hard rules. Here's a soft one: when you're starting out with a udev rule, stick to ONE condition and ONE action, and make it one that cannot fail and is easily checked. I always start off with this extremely simple script:

#!/bin/bash

# This script can be used to test if a udev rule fires
# when it should and/or if it is triggered multiple times.
# Simply put the path to the script into the
# RUN+="/path/to/script" parameter and observe the outcome
# in the /tmp directory. The script is triggered once for each
# udev-* file in the folder.

mdate="$(date +%s)"
# Random suffix so that triggers within the same second don't overwrite each other
muuid="${RANDOM}"

touch "/tmp/udev-${mdate}-${muuid}"

exit 0

As implied in the comments, another reason to do it this way is to see if the script triggers multiple times. Because the conditions can apply to parents as well as the level you're aiming for, I had a single ATTRS{serial} condition trigger five times: 'serial' is an attribute of the USB disk controller, and under that you have a number of levels all the way down to the actual partition. Once I added SUBSYSTEM=="block" as a condition, we were down to two hits. With the final condition ATTR{partition}=="1" I got a single match, obviously, as there can only be one first partition on the disk.
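Put together, the narrowed-down rule at this stage looks something like this (one line; the serial number is from my drive, so substitute your own):

```
SUBSYSTEM=="block", ATTR{partition}=="1", ATTRS{serial}=="08605E6D401FBFA1470A1A42", SYMLINK+="offlinebackup", RUN+="/path/to/test_script.sh"
```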

The most immediate route for most people would probably be to match against partition labels or UUIDs. As pointed out by Perseids on Super User, UUIDs aren't directly accessible to udev, but they can be used by way of environment variables:
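A rough sketch of that approach uses udev's built-in blkid importer to expose the filesystem properties as environment variables and then matches on the UUID (the UUID below is a made-up placeholder):

```
# Import the ID_FS_* properties for the device, then match on the filesystem UUID
SUBSYSTEM=="block", IMPORT{builtin}="blkid", ENV{ID_FS_UUID}=="0123abcd-0000-0000-0000-placeholder0", RUN+="/path/to/test_script.sh"
```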


However, for whatever reason, I haven't had much luck with UUIDs in this context. Sticking with native udev attributes, I find that you get there most easily with 1) the drive's serial number, 2) a partition number and (helpfully, if a little superfluously) 3) limiting matches to the block subsystem. The only scenario I can envisage where this wouldn't be enough is if you have multiple copies of the exact same drive and the manufacturer didn't bother with unique serial numbers for each one (I'm guessing they mostly don't). With any file operation, you really don't want to trigger it two or more times, so be sure that you only get one match before actually using the rule.

You can find the full lowdown on how to create udev rules in Daniel Drake's guide. For now, we just identify the drive and run the test script.

Part 3: Systemd

Ultimately, though, we want udev to kick off a systemd service that runs the backup script. Starting a systemd service is done in the blink of an eye, so udev can fire and forget and not worry about the script dragging on. Meanwhile, systemd keeps control of the script by monitoring the service. As a side benefit, we can use 'systemctl status' to check up on progress even though the script is running in the background.

Here's the systemd service unit; let's call it udev-usb-insert@.service so that we can pass the device name to it:

 [Unit]
 Description=Backup to USB Flash Disk

 [Service]
 ExecStart=/path/to/offline_backup.sh %i

Which is about as simple as it gets. To be honest, I'm not sure whether this calls for a simple or a oneshot type of service, but things appear to work fine with the default simple. Your backup script should do everything that is needed: if your system does not automount drives, the script must mount the drive itself, and so on.
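For reference, here is a minimal sketch of what offline_backup.sh might look like. The paths and the mail command are assumptions based on my setup, not a drop-in implementation, and the DRY_RUN toggle (which defaults to on, so the sketch is safe to run as-is) just prints the commands instead of executing them:

```shell
#!/bin/bash
# Sketch of offline_backup.sh -- the device node name (e.g. "sde1")
# is passed in by the systemd template unit as %i.
set -euo pipefail

DEV="/dev/${1:-sde1}"  # device node passed by systemd (placeholder default)
MNT="/mnt/wpoffline"   # offline drive mount point (assumed)
SRC="/mnt/wpbackup/"   # note the trailing slash: sync contents, not the dir
LOG="/tmp/offline"

# DRY_RUN=1 (the default in this sketch) echoes commands instead of running them
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run mount "$DEV" "$MNT"
run rsync -aP --delete --log-file="$LOG" "$SRC" "$MNT"
run umount "$MNT"
run sh -c "mail -s 'Offline backup finished' root < $LOG"
```

Set DRY_RUN=0 (and fix the paths) before wiring it up for real; in dry-run mode it only prints the mount, rsync, umount and mail commands it would run.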

We can now change the udev rule to start this service:

..., RUN+="/bin/systemctl --no-block start udev-usb-insert@%k.service"

I believe that RUN+= allows for triggering multiple scripts (as opposed to just RUN=), but I haven't tested it. From the reference article on writing udev rules by Daniel Drake:

%k evaluates to the kernel name for the device, e.g. “sda3” for a device that would (by default) appear at /dev/sda3
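Putting it all together, the final rule with the match conditions from earlier might look something like this (one line; again, the serial number is from my drive):

```
SUBSYSTEM=="block", ATTR{partition}=="1", ATTRS{serial}=="08605E6D401FBFA1470A1A42", RUN+="/bin/systemctl --no-block start udev-usb-insert@%k.service"
```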

To recap:

  • The USB drive is inserted.
  • The udev rule triggers.
  • The “RUN” parameter of the udev rule starts the systemd service unit.
  • The systemd service unit runs the script. Any output from the script can be monitored using systemctl status [service].
  • Once the script exits, the systemd service ends.

You should obviously test things from the bottom up: first the backup script, then the service, and finally the udev rule.

Photo by Anton Fomkin
