On Fri, May 18, 2007 at 02:21:50PM -1000, Kar-Hai Chu wrote:
>
> Due to a tight budget, we do not have a live redundant backup to our
> production server (other than its RAID 1). One thing we *do* have is
> hard drive space - so we've been trying to setup a process where live
> production data (mysql, apache) is backed up nightly onto the
> development machine, so if the production server goes down, we can
> "flip" the development server into production mode (move development
> data aside, and symlink to all the backed up production data).
>
I think you want the fake package for the IP switch:

  Fake is a utility that enables the IP address to be taken over by
  bringing up a second interface on the host machine and using
  gratuitous ARP. Designed to switch in backup servers on a LAN.

As far as the other stuff goes, you should probably write a script that
does everything, since there is lots to do. Doing it manually will very
likely be error-prone.

Also, you should definitely make sure that any databases are being
dumped properly on the production machine and then restored on the
testing machine. That is, simply using something like rsync or scp to
transfer the on-disk files that contain the database cluster is *not* a
valid backup strategy.

That said, if you have lots of space on the backup server, you might
want to look into systemimager to create a snapshot of the entire
production server's filesystem. Just remember about the proper way to
get the databases backed up.

Here is a script I use to back up my PostgreSQL cluster:

---------8<--------->8---------
#!/bin/sh
#
# pg_backup.sh - performs periodic backups of a PostgreSQL database cluster
#

if [ ! -d /var/local/backup/postgres ]; then
    echo "ERROR: Backup directory, /var/local/backup/postgres, not found"
    exit 1
fi

cd /var/local/backup/postgres

# Remove the oldest backup file
rm -f postgres-$(hostname).9.sql.gz

# Rotate the rest down
for i in $(seq 8 -1 0); do
    if [ -f postgres-$(hostname).$i.sql.gz ]; then
        mv postgres-$(hostname).$i.sql.gz postgres-$(hostname).$(($i+1)).sql.gz
    fi
done

# Create the new one
su -c 'pg_dumpall --clean >/var/local/backup/postgres/postgres-$(hostname).0.sql' \
    -s /bin/sh - postgres
gzip -9 postgres-$(hostname).0.sql
---------8<--------->8---------

It keeps the ten most recent dumps. Since my cluster is small, it works
nicely for me. I just put the script somewhere and then add a cron entry
that runs it hourly (you can change the frequency to suit your own
needs). Then, if I need to restore the database, I can just run:

su -c 'gzip -cd /var/local/backup/postgres/postgres-foo.0.sql.gz |psql -d template1 -f -' \
    -s /bin/sh - postgres

Regards,

-Roberto

-- 
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com
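Also, since your production box runs MySQL rather than PostgreSQL, the
equivalent of pg_dumpall there is mysqldump. A hypothetical cron
fragment along the lines of the advice above (the schedule, paths, and
credentials handling via /root/.my.cnf are all assumptions to adapt):

```
# Hypothetical /etc/cron.d fragment: nightly logical dump of all MySQL
# databases. Paths and schedule are assumptions; credentials are read
# from /root/.my.cnf rather than appearing on the command line.
30 2 * * * root mysqldump --all-databases --single-transaction | gzip -9 > /var/local/backup/mysql/mysql-all.sql.gz
```

(--single-transaction gives a consistent dump for InnoDB tables without
locking; drop it if you rely on MyISAM.)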
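A footnote on the fake suggestion above: the takeover it automates boils
down to bringing up an alias interface with the production server's
address and announcing it with gratuitous ARP. A dry-run sketch of that
idea (TAKEOVER_IP and IFACE are made-up values; the commands are only
echoed, not executed, so the real work is still best left to fake):

```shell
#!/bin/sh
# Dry-run sketch of the manual IP takeover that fake automates.
# TAKEOVER_IP and IFACE are hypothetical - substitute your own values
# before attempting this for real.
TAKEOVER_IP=192.168.1.10
IFACE=eth0

run() {
    # Echo instead of executing, so the sketch is safe to run anywhere.
    echo "would run: $*"
}

# Bring up an alias interface with the production server's address ...
run ifconfig "$IFACE:0" "$TAKEOVER_IP" netmask 255.255.255.0 up
# ... then send unsolicited (gratuitous) ARP so the LAN learns the move.
run arping -q -c 3 -U -I "$IFACE" "$TAKEOVER_IP"
```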
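If you want to convince yourself of the rotate-then-dump numbering in
the script above before trusting it with real dumps, the rotation step
can be exercised on its own with throwaway files (the backup.N.sql.gz
names here are placeholders):

```shell
#!/bin/sh
# Exercise the rotation pattern from pg_backup.sh on empty placeholder
# files in a throwaway directory.
dir=$(mktemp -d)
cd "$dir" || exit 1

# Pretend three previous dumps exist.
touch backup.0.sql.gz backup.1.sql.gz backup.2.sql.gz

# Same rotation as in the script: drop the oldest, shift the rest down.
rm -f backup.9.sql.gz
for i in $(seq 8 -1 0); do
    if [ -f backup.$i.sql.gz ]; then
        mv backup.$i.sql.gz backup.$(($i+1)).sql.gz
    fi
done

ls backup.*.sql.gz   # lists backup.1, backup.2, backup.3
```

Slot 0 is now free for the fresh dump, and nothing older than slot 9
ever survives a run.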