
Re: best way to keep web servers in sync



Hello Craig Sanders <cas@taz.net.au>,

Thanks for sharing your experience. Here is my situation.
I have a staging environment where web files are tested before they go
to the production servers. If I run rsync with the --delete option on a
directory, it transfers file by file onto the production server. If the
process terminates partway through, the web content is left broken and
inconsistent, which must not happen. But if I rsync to a temporary
directory first and "mv" it into place after rsync finishes, the risk
window becomes very small, because mv within one filesystem is an
atomic rename. In that case I would need a shell script to do the
rsync+mv and avoid a broken tree. Is there a good way to implement this?
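
Here is a minimal sketch of the kind of script I have in mind (all the
paths and the host name are placeholders, it assumes GNU cp for the -l
hard-link trick, and it assumes $TMP sits on the same filesystem as
$DEST so mv is a cheap rename; there is still a sub-second window
between the two renames):

    #! /bin/bash
    # sync the staging tree into a temp dir on the production host,
    # then swap it into place with two renames, so visitors never
    # see a half-copied tree.

    SRC="/stage/www"        # staging tree (placeholder path)
    DEST="/var/www"         # live docroot on the production host
    TMP="/var/www.new"      # must be on the same filesystem as $DEST
    HOST="www1"             # production server (placeholder name)

    # hard-link a copy of the live tree so rsync only transfers the
    # differences, then bring it up to date from staging; rsync
    # recreates changed files rather than writing in place, so the
    # live tree is never touched until the final swap
    ssh $HOST "rm -rf $TMP $DEST.old && cp -al $DEST $TMP" && \
    rsync -e ssh -ar --delete $SRC/ $HOST:$TMP/ && \
    ssh $HOST "mv $DEST $DEST.old && mv $TMP $DEST && rm -rf $DEST.old"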


After rsyncing from the staging environment to production, I need to
invoke rsync among the production web servers themselves. Again, I
would still need rsync(+ssh?)+mv to eliminate the risk of a partial
update if the transfer terminates. So I may end up writing an rsync
wrapper which uses rsync(+ssh)+mv to make a highly available remote
synchronization of files or directories. Say,

hirsync <parameter> <local_file_or_dir> host:<remote_file_or_dir>

Does it make sense?
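
Something along these lines is what I picture (a sketch only; the .tmp
and .old suffixes are arbitrary, and it assumes the temp dir lands on
the same filesystem as the target so mv is a rename):

    #! /bin/bash
    # hirsync: rsync to a temporary dir on the remote host, then mv
    # it into place, so an interrupted transfer never leaves a
    # half-updated target.
    # usage: hirsync '<rsync options>' <local_dir> <host>:<remote_dir>

    OPTS="$1"               # e.g. '-ar --delete', quoted as one arg
    SRC="$2"
    HOST="${3%%:*}"         # part before the colon
    DIR="${3#*:}"           # part after the colon
    TMP="$DIR.tmp"          # same filesystem as $DIR

    rsync -e ssh $OPTS "$SRC/" "$HOST:$TMP/" && \
    ssh "$HOST" \
      "rm -rf $DIR.old && mv $DIR $DIR.old && mv $TMP $DIR && rm -rf $DIR.old"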


Last but not least, would it be better to run rsyncd among the web
servers instead of rsync over passwordless ssh?
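
If rsyncd is the way to go, I imagine each production server would run
rsync --daemon with a module along these lines (the module name,
network, and user are only examples I made up):

    # /etc/rsyncd.conf -- minimal module exporting the docroot
    [www]
        path = /var/www
        read only = false
        uid = root                     # or whoever owns the docroot
        hosts allow = 10.0.0.0/24      # only the other web servers
        auth users = syncuser
        secrets file = /etc/rsyncd.secrets

and the master would then push with something like:

    rsync -ar --delete /var/www/ syncuser@www2::www/

One thing I am aware of is that the rsync protocol itself is not
encrypted, unlike ssh.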







On Mon, 4 Mar 2002 08:38:13 +1100
Craig Sanders <cas@taz.net.au> wrote:

> On Sun, Mar 03, 2002 at 11:20:26PM +0800, Patrick Hsieh wrote:
> > I have some apache web servers behind a server load-balancer.  To
> > keep the static html and php files in sync among them, I currently
> > use NFS.  My problem is that NFS is not very stable and tends to
> > become a network bottleneck, so I have to find another method to
> > keep the files in sync.
> > 
> > I am considering using rsync, but I would have to write scripts to
> > automate the rsync runs.
> 
> rsync would be a good way of doing it.  the scripting involved is
> probably very simple unless you have a very weird setup.
> 
> it could be as simple as something like:
> 
>     #! /bin/bash
> 
>     SERVERS="server1 server2 server3 server4"
>     DIRECTORIES="/var/www /usr/lib/cgi-bin"
> 
>     LOCKFILE="/var/run/rsync-in-progress"
> 
>     # exit if the script is already running
>     [ -e $LOCKFILE ] && exit 0
> 
>     touch $LOCKFILE
> 
>     RSYNC_ARGS="-e ssh -ar --blocking-io --delete"
> 
>     for host in $SERVERS; do
>         for dir in $DIRECTORIES; do
>             rsync $RSYNC_ARGS $dir/ $host:$dir
>         done
>     done
> 
>     rm -f $LOCKFILE
> 
> it is important to make sure that only one instance of the rsync script
> is running at a time.  this script uses a very simple lockfile method.
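> 
> a slightly more robust sketch, if stale locks are a concern: mkdir is
> atomic, and the trap removes the lock even when the script dies
> partway through (short of a kill -9):
> 
>     LOCKDIR="/var/run/rsync-in-progress.lock"
>     mkdir "$LOCKDIR" 2>/dev/null || exit 0   # another copy is running
>     trap 'rmdir "$LOCKDIR"' EXIT             # clean up on any exit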
> 
> this script assumes that ssh is already set up to allow passwordless
> access from the "master" server to the "slaves".
> 
> depending on the amount of data to be rsynced, the script above probably
> won't scale beyond 5 or 10 servers...it would take too long to iterate
> through all servers.
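> 
> one way to stretch that a bit further is to push to all of the slaves
> in parallel instead of one at a time (assuming the master's bandwidth
> can take it), e.g. replacing the loop above with something like:
> 
>     for host in $SERVERS; do
>         (
>             for dir in $DIRECTORIES; do
>                 rsync $RSYNC_ARGS $dir/ $host:$dir
>             done
>         ) &                   # one background job per server
>     done
>     wait    # keep the lockfile until every push has finished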
> 
> 
> > Another criterion is that the files could be updated on any one of
> > the servers; after that, the updated server should rsync to the
> > others to bring EVERYTHING back in sync.
> 
> the major problem with using rsync for this job is, as you pointed out,
> that the files might be updated on any server.  the best way to solve
> that is to designate one of the machines as the "master" and require
> all uploads to go to that machine...by simplifying the problem from a
> many-to-many situation to a one-to-many situation, you avoid all of the
> harder synchronisation issues.
> 
> 
> the second major problem is that rsyncing hundreds of megabytes to
> multiple servers is a heavy load on a machine, so you can't run it
> out of cron every 5 minutes, which means the update is not
> "real-time".  there are several ways to minimise (but not eliminate)
> this problem...here are a few ideas:
> 
> 1. monitor /var/log/auth.log and trigger the rsync script when an ftp
> session disconnects (a sketch follows this list).
> 
> 2. have a cgi script allowing users to click on a button to trigger the
> rsync script.
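> 
> a rough sketch of idea 1 (the "FTP session closed" string is only a
> guess -- check what your ftp daemon actually writes to auth.log and
> adjust the pattern; /usr/local/sbin/rsync-push stands in for the
> rsync script above):
> 
>     #! /bin/bash
>     # trigger a content push whenever an ftp session ends
>     tail -n 0 -F /var/log/auth.log | while read line; do
>         case "$line" in
>             *"FTP session closed"*) /usr/local/sbin/rsync-push ;;
>         esac
>     done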
> 
> 
> also run the rsync script from cron every few hours.
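> 
> e.g. an /etc/crontab entry on the master (rsync-push again standing
> in for whatever you call the script):
> 
>     # full push every 4 hours, as a safety net
>     0 */4 * * *   root    /usr/local/sbin/rsync-push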
> 
> 
> 
> > Any good suggestion? 
> 
> read the rsync pages at http://rsync.samba.org/ (especially the
> FAQ-o-Matic) and the rsync tutorial at http://everythinglinux.org/rsync/
> 
> 
> another possibility is to use a different network file system.  i
> haven't used it myself, but CODA is supposed to be pretty good, and IIRC
> supports some kind of off-line operation (in case the file server is
> unavailable) via a cache on each client.
> 
> 
> craig
> 
> -- 
> craig sanders <cas@taz.net.au>
> 
> Fabricati Diem, PVNC.
>  -- motto of the Ankh-Morpork City Watch
> 
> 

-- 
Patrick Hsieh <pahud@pahud.net>

GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg


