
backup strategy (was Re: reconstruct the system)



gustavo halperin wrote:
> Roberto C. Sanchez wrote:
>>> What can I do ??
>> Restore from the regular weekly/daily backups you keep.
>
> I haven't, but what is the best way to back up the system?  Is there
> any documentation out there?

There is no "best way."  Or maybe you could say, the "best way" is the way
that you will use reliably so that the backup is there if and when you need it.
For this reason, I tend to favor simple methods: first a set of simple scripts
that take a few seconds to start, for doing quick snapshots before undertaking
any major system "surgery," and supplementing that, a backup server that
automatically works in the background (and emails me if anything goes wrong).

First I will explain my own system.  I use a custom script which comes in
two flavors, one for backing up locally, and one for backup of another system
on the same LAN.  Both backup scripts are listed below.  The first backs up
locally and is called "root-backup."  It assumes the backup directory is
either /mnt/backup/ or /mnt/$2/backup/ where $2 is supplied as the second
argument.  For the first argument, $1, I always use the hostname of the system.
(I make other obvious assumptions about file paths that may need to be changed
to match your own requirements.)

Contents of script "root-backup":
-------------------------------------
#!/bin/bash
W=-backup-in-progress
if [ "$1" == "" ]
then

	echo "***** ERROR (backup-root): missing argument  ******"
	exit 1	
fi

if [ ! -d /mnt/$2/backup/$1 ]
then
	if [ ! -d /mnt/$2/backup/$1$W ]
	then
		echo "***** ERROR (backup-root): /mnt/$2/backup/$1 does not exis
t  ******"
	else
		echo "***** ERROR (backup-root): /mnt/$2/backup/$1 is being upda
ted *****"
	fi
	exit 1
fi
echo $1 backup started `date '+%a %m/%d/%y %T'`
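# rename the backup tree while it is being refreshed, so an interrupted run
# leaves an obvious "-backup-in-progress" directory for the next run to catch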
mv /mnt/$2/backup/$1 /mnt/$2/backup/$1$W
rsync -vxaHD --delete / /mnt/$2/backup/$1$W/ >/tmp/rsynclog 2>&1
N=`date +%a_%m-%d-%y_%T`
echo timestamp is $N
mv /tmp/rsynclog /mnt/$2/backup/$1-rsync.$N.log
/mnt/install/test/make-md5sums /mnt/$2/backup/$1$W
touch /mnt/$2/backup/$1$W
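# hard-link the refreshed tree into a dated snapshot; unchanged files take
# no extra disk space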
cp -al /mnt/$2/backup/$1$W/ /mnt/$2/backup/$1.$N/
cp /mnt/$2/backup/$1$W/md5sums.gz /
date '+%a %m/%d/%y %T' >/mnt/$2/backup/$1-md5chk.$N.log
/mnt/install/test/check-backup / >>/mnt/$2/backup/$1-md5chk.$N.log  2>&1
date '+%a %m/%d/%y %T' >>/mnt/$2/backup/$1-md5chk.$N.log
mv  /mnt/$2/backup/$1$W /mnt/$2/backup/$1
---------------------------------------------------------------------------
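
Restoring is essentially rsync pointed the other way.  A rough sketch (untested,
and the devices and mount points are only examples): boot from a rescue disk or a
spare root, mount the damaged root partition and the backup drive, then copy the
most recent snapshot back:

  mount /dev/hda1 /mnt/target           # the root filesystem being restored
  mount /dev/hdb1 /mnt/backup           # wherever the backups live
  rsync -vaHD --delete /mnt/backup/myhost/ /mnt/target/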

The second script backs up another system on the local LAN.  It's called
"remote-root-backup" and works much like the local version, but requires ssh
to be installed on both systems.
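
One more prerequisite for the remote flavor: the backup host needs to ssh into the
client as root without a password prompt, or rsync will just sit there waiting.
Roughly (whether you allow root logins over ssh at all is a judgment call for your
own network):

  ssh-keygen -t rsa        # on the backup host, with an empty passphrase
  ssh-copy-id root@client  # or append ~/.ssh/id_rsa.pub to the client's
                           # /root/.ssh/authorized_keys by hand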

Contents of script "remote-root-backup":
-------------------------------------
#!/bin/bash
W=-backup-in-progress
if [ "$1" == "" ]
then

	echo "***** ERROR (backup-root): missing argument  ******"
	exit 1	
fi

if [ ! -d /mnt/$2/backup/$1 ]
then
	if [ ! -d /mnt/$2/backup/$1$W ]
	then
		echo "***** ERROR (backup-root): /mnt/$2/backup/$1 does not exist  ******"
	else
		echo "***** ERROR (backup-root): /mnt/$2/backup/$1 is being updated *****"
	fi
	exit 1
fi
echo $1 backup started `date '+%a %m/%d/%y %T'`
mv /mnt/$2/backup/$1 /mnt/$2/backup/$1$W
rsync -vxaHD --rsh=ssh --numeric-ids  --delete root@$1:/ /mnt/$2/backup/$1$W/ >/tmp/rsynclog 2>&1
N=`date +%a_%m-%d-%y_%T`
echo timestamp is $N
mv /tmp/rsynclog /mnt/$2/backup/$1-rsync.$N.log
/mnt/install/test/make-md5sums /mnt/$2/backup/$1$W
touch /mnt/$2/backup/$1$W
cp -al /mnt/$2/backup/$1$W/ /mnt/$2/backup/$1.$N/
scp /mnt/$2/backup/$1$W/md5sums.gz root@$1:/
ssh $1 "date '+%a %m/%d/%y %T' >/tmp/$1-md5chk.$N.log"
ssh $1 "/mnt/install/test/check-backup / >>/tmp/$1-md5chk.$N.log  2>&1"
scp -p root@$1:/tmp/$1-md5chk.$N.log /mnt/$2/backup
date '+%a %m/%d/%y %T' >>/mnt/$2/backup/$1-md5chk.$N.log
mv  /mnt/$2/backup/$1$W /mnt/$2/backup/$1
--------------------------------------------------------

Some additional notes follow: these scripts use rsync and hardlinks in a way
that automatically "compresses" successive backups, by only updating changed files,
and creating hardlinks for the rest.  After running rsync the scripts create a list
of md5sums for the backup and then check that list against the backed up root file
system.  (The scripts "make-md5sums," "check-backup" and a few other related scripts
were posted in my message to debian-user called "My local debian archive maintenance
scripts," posted on May 12 and accessable in the archives.  Alternatively you could
just delete the lines calling those scripts, which are just double-checks for backup
integrity.  Part of my backup philosophy is "what good are backups if you can't validate
their file integrity?")  Finally, these scripts assume a single root partition, apart from
whatever is mounted under the /mnt directory (which I suppose violates the FHS, oh well...
If you use multiple partitions then you will probably have to modify the rsync
parameters, by removing the -x and adding some --exclude parameters.)
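
For instance, if /home and /var live on their own partitions, dropping -x and
excluding the pseudo-filesystems and the backup mount itself might look something
like this (the exclude list is only a starting point, adjust it for your layout):

  rsync -vaHD --delete --exclude=/proc/ --exclude=/sys/ --exclude=/mnt/ \
      --exclude=/tmp/ / /mnt/$2/backup/$1$W/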

On systems which I consider (by my standards) "mission critical" I keep a "live" mirror
of my root filesystem on an old, small, obsolete hard drive (one that otherwise
would have been retired years ago).  Some of these I've rescued from the dumpster at
companies where I've worked.  I periodically update these mirror drives in a few minutes
with a simple one-command rsync script.  Then I manually adjust fstab and lilo.conf to
make them bootable, so they can serve as spare root filesystems.  I can't say how many
times such a spare mirror drive has "saved my bacon."
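
The "one-command rsync script" for those mirror drives is nothing fancy; assuming
the spare drive's root partition is mounted on /mnt/mirror, it boils down to
something like:

  rsync -vxaHD --delete / /mnt/mirror/

followed by adjusting fstab and lilo.conf on the mirror and re-running lilo so the
drive can boot on its own, as described above.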

If my backup scripts don't quite do it for you, here is an excellent web page with
a general discussion of rsync as a backup tool, a more elaborate backup script,
and links to other backup scripts: http://www.mikerubel.org/computers/rsync_snapshots/

In addition, Debian supplies a number of backup solutions, from single-user to enterprise
grade.  Use a package search tool like xara or search the package archives on debian.org
to get an idea of the selection.
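
For example, a quick look from the command line turns up a long list:

  apt-cache search backup | sort | less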

Finally, as I mentioned, another thing I do for backup is run an old PC as a backup
server using backuppc, which is a Debian-supplied backup application.  Backuppc
automatically does daily backups of all the Windows and Linux PCs on a LAN, has a nice
web interface, uses hard drive storage, backs up securely over an ssh connection, has
an active user mailing list, and has always worked very well for me on my home LAN.
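
If you want to try it, backuppc is a single package install:

  apt-get install backuppc

with the rest of the configuration done through its config files and web interface.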

Last, my 2c on this topic.  A lot of people switch to Debian from the 'doze world
where backup is not only difficult, but possibly expensive, and they may carry
over some bad habits and "live dangerously" when it comes to backing up their
data.  I think it's worth taking the time to develop a good long-term backup
strategy, and teaching others how to do it.  Fortunately I never had to learn the
hard way, but if and when you do have a disaster, let it serve to reinforce good
backup habits, as a way of redeeming the bad experience if nothing else.


