On 11.03.2017 at 16:10, Richard Owlett wrote:
> I've been good about telling others that backups are a good idea.
Hi,
I know I am late to the party. And without knowing of any ready-made
documentation, let me add a few things off the top of my head:
1. Backups saved my mental health on numerous occasions, even while I
was still on Windows.
2. Although I came across tools to automate the process, I have never
accepted trusting anything but myself to be - or not to be - in
control of critical data.
3. An understanding of how to set up a bootable system (be it from a
backup or from scratch) seems crucial to me, and some understanding
of grub + initramfs is extremely useful.
4. Linux has every tool necessary on board; a first backup is easy to
make using (e.g.) SystemRescueCd.
5. Logging all the steps while doing them is the first step toward a
scripted solution.
6. Such a log - turning into a script, paired with the corresponding
restore - evolves naturally with time and experience...
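To make point 5 concrete, here is a tiny "log while you go" sketch.
The paths and names (demo-data, backup.log) are placeholders I made up
for illustration, not anything from a real setup:

```shell
#!/bin/sh
# Minimal logged-backup sketch: every action is appended to a logfile,
# so the log doubles as the seed of a future script.
set -e
LOG=backup.log
note() { echo "$(date '+%F %T')  $*" >> "$LOG"; }

SRC=demo-data
DEST=demo-data.tar.gz
mkdir -p "$SRC"
echo "sample" > "$SRC/file.txt"

note "starting backup of $SRC"
tar czf "$DEST" "$SRC"
note "wrote $DEST"
echo "done; see $LOG"
```

After a few runs, the logfile tells you exactly which steps repeat and
in what order - which is the moment they turn into a script.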
Here are some lessons I learned while being involved with backing up:
* Different kinds of data need different treatment, according to their
turnaround and usefulness (a.k.a. the backup cycle interval). I
distinguish 3 types today: critical (the OS; backed up when needed,
on occasion several times a day), regular (almost everything else;
currently backed up every 2 weeks), and don't-care/throwaway data -
which doesn't stop me from backing it up, but in a fire-and-forget
fashion.
* Always log the reason/state of each backup, similar to a git commit
message; that logfile turns into a valuable resource, usable even
after years.
* Where to back up to? Initially, I used the machine being backed up
to hold its own backups, which is a really bad idea. Today, I am
using a pluggable external device with several drives configured in
a RAID setup.
* In order to save space, a COW (copy-on-write) filesystem turns out
to be hugely useful. My recommendation is ZFS (at least on the
backup devices), as it allows keeping several backups/restore points
in one place using incremental snapshots.
Example: the OS (using 4.5 GB) has 20 incremental backups, taking up
around 16.5 GB in total = 4.5 GB + 12 GB for the snapshots.
* For some strange reason, I switched to imaging lately (zerofree +
compression for the backup, loop mount + rsync for file-level
access), and the gain in restore time is mind-blowing, as a simple
dd was enough to restore an entire OS snapshot. BTW: in the
meantime, I have switched to ZFS for the real data mountpoints as
well.
* The bash scripts used for backup/restore have grown to 9K + 6K
(backup + restore), due to my habit of making lots of asserts/checks
as a safety measure before proceeding.
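To illustrate the ZFS point above: one incremental cycle could look
roughly like this. The pool/dataset names (tank/os, backuppool/os)
and the snapshot dates are made up, and the actual zfs commands are
commented out so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of one incremental ZFS backup cycle (hypothetical names).
DATASET="tank/os"
PREV="2017-02-25"                    # last existing snapshot (assumed)
TODAY=$(date +%Y-%m-%d)
SNAP_OLD="${DATASET}@${PREV}"
SNAP_NEW="${DATASET}@${TODAY}"

# On a machine that actually has the pool, the cycle would be:
# zfs snapshot "$SNAP_NEW"
# zfs send -i "$SNAP_OLD" "$SNAP_NEW" | zfs receive backuppool/os

echo "incremental delta: ${SNAP_OLD} -> ${SNAP_NEW}"
```

Because each send transfers only the delta since the previous
snapshot, 20 restore points can share most of their blocks - which is
how 4.5 GB of OS fits 20 backups in ~16.5 GB.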
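And the imaging round trip from the bullet above, shrunk to toy
scale: on real hardware the input would be an unmounted partition
(whatever device name applies) prepared with zerofree first; here a
plain file stands in, so only the dd | gzip pipeline itself is
exercised:

```shell
#!/bin/sh
# Toy-scale demo of the image-and-compress round trip.
set -e
SRC=demo.src
printf 'pretend this is a filesystem\n' > "$SRC"

# backup: dd | gzip (on a real partition: run zerofree on it first,
# so the unused blocks compress to almost nothing)
dd if="$SRC" bs=1M 2>/dev/null | gzip > demo.img.gz

# restore: gunzip | dd - this is the whole restore path
gunzip -c demo.img.gz | dd of=demo.restored bs=1M 2>/dev/null

cmp "$SRC" demo.restored && echo "restore matches original"
```

For single-file access without a full restore, the uncompressed image
can be loop-mounted read-only and picked apart with rsync or cp.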
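The assert/check habit from the last bullet boils down to a pattern
like this (the function names are invented for illustration; the real
scripts carry many more such checks):

```shell
#!/bin/sh
# Fail fast on any unmet precondition before touching real data.
die() { echo "FATAL: $*" >&2; exit 1; }

require_dir() { [ -d "$1" ] || die "missing directory: $1"; }

require_space() {                 # args: mountpoint, minimum free KiB
    avail=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    [ "$avail" -ge "$2" ] || die "not enough free space on $1"
}

require_dir /tmp
require_space /tmp 1024           # demand at least 1 MiB free
echo "all preconditions met, proceeding"
```

Aborting loudly before the first destructive command is what lets a
backup script grow large without growing dangerous.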
That kind of approach may not be useful to everybody, as it is an
investment into one's own know-how. But you gain flexibility and
creative handling options, like restoring exactly the file(s) you need
or even just comparing the differences. And after having gone through
different real-world scenarios (hardware failures of various kinds), I
know how safe I am!
But for those only interested in a ready-made solution: sooner or
later, you are going to run into a major difficulty that hasn't been
foreseen. Hopefully, you'll have 3 versions ready (original, backup,
next-to-last backup) and find some admin worth trusting and able to
handle your situation manually. :-)
Those are my 2 cents.
DdB (reading only the list's digest, thus not replying directly)