
Re: write only storage.



On Tue, 21 Sep 2021, David Christensen wrote:

On 9/21/21 8:53 AM, Tim Woodall wrote:
I would like to have some WORM memory for my backups. At the moment
they're copied to an archive machine using a chrooted unprivileged user
and then moved via a cron job so that that user cannot delete them
(other than during a short window).

My thought was to use a raspberry-pi4 to provide a USB mass storage
device that is modified to not permit deleting. If the pi4 is not
accessible via the network then other than bugs in the mass storage API
it should be impossible to delete things without physical access to the
pi.

Before I start reinventing the wheel, does anyone know of anything
similar to this already in existence?

Things like chattr don't achieve what I want as root can still override
that. I'm looking for something that requires physical access to delete.


Have you considered snapshots -- e.g. btrfs, LVM, or ZFS?


I don't see how they help me - I am already using snapshots to create
the backup. But if I can create the snapshot, I can delete it again?

I didn't put all this detail in the original as I didn't think it was
important (and it can all be changed) but, taking the example of
einstein, which is the machine that this email went through.

A cron job runs as root that takes an LVM snapshot, uses dump to dump
the filesystem and uses ssh to write that dump to backup@backup17. It
then runs restore to verify the backup. It then deletes the snapshot. I
also save the output of df, fdisk -l and mount along with a separate
copy of dumpdates. (These are purely to make it as easy as possible to
recover after a total hard drive failure, which has only ever happened
to me once. As I'm writing this I realise I should also save the output
of vgdisplay and lvdisplay.)
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=940473 was a bug I
found via this process that I should have reported years earlier. I took
a look, couldn't fix it in a few hours, and so just sat on it. :-( The
maintainer fixed it in a weekend :-)

On backup17, a cron job moves the files from where backup@backup17 put
them (which was in a chroot) to a different directory where they cannot
be accessed via backup@backup17. (Again, while writing this I realise
that they're still owned by, and writable by, backup - I will change
this so that even if you managed to escape from the chroot you cannot
read/delete/modify them.)
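The sweep job is only a few lines; something like this, including that
ownership fix. (Paths are placeholders, and again it's shown dry-run,
with run() just echoing each command.)

```shell
#!/bin/sh
# Sketch of the backup17 sweep cron job. Paths are placeholders; run()
# only echoes, so this is a dry run.
set -e
INCOMING=/srv/chroot/backup/incoming   # where backup@backup17 writes
VAULT=/srv/vault                       # root-only destination

run() { echo "+ $*"; }

run "mkdir -p -m 0700 $VAULT"
run "chown root:root $INCOMING/*"      # so a chroot escape can't touch them
run "chmod 0400 $INCOMING/*"
run "mv $INCOMING/* $VAULT/"
```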

This is a pseudo "write only" filesystem. Within a couple of hours of
writing the file, it cannot be read again (by the user that wrote it). I
cannot see a way of making it truly write only while preserving the
verification step (and that particular attack surface - someone copying
the backup while it's being written - isn't one I'm particularly
concerned about).

Manually (but I ought to automate it too) I run a script that then takes
the backups and adds them to a udf image on a usb stick sized to fit on
a blu-ray disc. (I have both an encrypted and a plain image here. I use
the encrypted one for off-site backups - which, for example, I
occasionally post to a friend - and the plain one for my local backups,
which I do sometimes use.) I have also securely stored the key
off-site.
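The image preparation is along these lines. (Sizes, names and the use
of LUKS here are placeholders/assumptions - a 25GB single-layer BD-R
holds a little under 23.3GiB - and run() only echoes, so it's a dry
run.)

```shell
#!/bin/sh
# Sketch of building the blu-ray-sized UDF image plus an encrypted
# twin. Sizes, names and the LUKS choice are placeholders; run() only
# echoes, so this is a dry run.
set -e
SIZE=23500M                      # just under a 25GB single-layer BD-R
IMG=backups.udf

run() { echo "+ $*"; }

run "truncate -s $SIZE $IMG"
run "mkudffs --media-type=hd $IMG"
run "mount -o loop $IMG /mnt/udf"         # then copy the dumps in

# Encrypted variant for the off-site copies (LUKS is one way to do it):
run "truncate -s $SIZE backups.luks"
run "cryptsetup luksFormat backups.luks"
run "cryptsetup open backups.luks offsite"
run "mkudffs --media-type=hd /dev/mapper/offsite"
```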

Finally, once the udf image is full I write the image to disc, verify
the hash, and then delete all the intermediate parts to free up space to
continue.
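The burn-and-verify step amounts to the following (the device name is a
placeholder, dry-run again; the dd count re-reads exactly the image's
length from the disc so the two hashes are comparable):

```shell
#!/bin/sh
# Sketch of the final burn + hash verification. /dev/sr0 is a
# placeholder; run() only echoes, so this is a dry run.
set -e
DEV=/dev/sr0
IMG=backups.udf

run() { echo "+ $*"; }

run "growisofs -dvd-compat -Z $DEV=$IMG"
run "sha256sum $IMG"
run "dd if=$DEV bs=2048 count=\$(( \$(stat -c %s $IMG) / 2048 )) | sha256sum"
```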

I've been doing this for nigh on 25 years now, from CD to DVD to
blu-ray, with various tweaks along the way, and I've never lost anything
important.

But I'm conscious that to an extent I've been lucky. I do my best to
keep secure but this is a hobby, not a full time job, and it's getting
harder and harder to take a belt-and-braces approach to security. With
the ubiquitous use of JavaScript nowadays, HTTPS everywhere, ESNI, and
everything now needing internet access to work, it's getting harder and
harder to keep things quarantined. Hopefully I'm too boring
for anyone to specifically target but I'd like to close the last few
gaps in the "just got unlucky" stakes. In particular, if anyone got root
access to the xen host then everything not yet written to blu-ray is
vulnerable. As of today, that would mean that einstein, for example,
could be restored to 20210904 but anything after that date would be
lost.

The suggestion by Thomas Schmitt to write multiple sessions is a good
one. I hadn't thought of it, partly because my blu-ray writer is an
external device that I don't leave permanently connected. But I could
resolve that. If I wrote one session per day that would be about 30
sessions per disc. I will need to do some experimenting as I have no
experience of writing multi-session discs. I'd also need to find a drive
where I can verify what has been written without it ejecting the disc
first (or at least one that can reload the disc automatically).
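My first, untested guess at that workflow looks like this (device and
paths are placeholders, and run() only echoes, so it's a dry run):

```shell
#!/bin/sh
# Untested sketch of one-session-per-day appends with growisofs.
# Device and paths are placeholders; run() only echoes (dry run).
set -e
DEV=/dev/sr0
TODAY=$(date +%Y%m%d)

run() { echo "+ $*"; }

# First session on a blank disc:
run "growisofs -Z $DEV -R -J /backups/$TODAY"
# Every later day, append a new session:
run "growisofs -M $DEV -R -J /backups/$TODAY"
# Reload the tray so the drive re-reads the TOC before verifying:
run "eject $DEV && eject -t $DEV"
run "mount $DEV /mnt/bd && sha256sum -c /backups/$TODAY.sha256"
```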


