Re: Debian in Server Farm
On Tue, 30 Mar 2004, Michael Bellears wrote:
> We are in the process of migrating an overburdened Debian
> 3.0/Apache/qmail box into a webfarm setup.
> Looking at using a ServerIronXL for loadbalancing.
> Would appreciate anyone's experiences/recommendations on the following
> 1. What is the recommended method to synch config files on all "real"
> servers (Eg. Httpd.conf, horde/imp config files etc?) - Have only one
> server that admins connect to for mods, then rsync any changes to the
> other servers?
I asked a similar question a few months ago and someone suggested
'cfengine'. I started using it and, after a bit of a learning curve, I have
probably 30 machines (Debian woody) being managed automatically by it. It
works great. I think the version in woody is old, so I got it from the
upstream site. Basically you can store configuration files and other
"actions" on a master server. Then you can cause (through cron, for
example) each client machine to be updated with current config files and
other "actions". These files can be scripts, so essentially you can do
pretty much whatever you want to do.
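As a rough sketch, a cfagent.conf stanza for that copy-from-master setup might look something like this (cfengine 2 syntax from memory; the hostname, file paths, and class name are all invented):

```
# cfagent.conf sketch (cfengine 2; hostname and paths are made up)
control:
   actionsequence = ( copy shellcommands )

copy:
   /masterfiles/etc/apache/httpd.conf
      dest=/etc/apache/httpd.conf
      server=master.example.com
      mode=644
      type=checksum               # copy only when contents actually differ
      define=apache_conf_changed  # raise a class when the copy happens

shellcommands:
   apache_conf_changed::
      "/usr/sbin/apachectl graceful"  # reload apache only after a change
```

The type=checksum / define= pair is what lets you chain a service reload
onto a config change instead of reloading on every run.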
For example, I have a list of the Debian packages that should be present
as one of the config files that gets transferred to each machine when
cfengine runs on the master. There is another script that runs on each
machine (also controlled by cfengine) that sets this new list of packages
(dpkg --set-selections) and then runs apt-get update/upgrade, etc. So to
add a package to my machines, I just edit the one package file on the
master; the clients then get updated either when cfengine runs through
cron (once a day for me), or you can run it manually if you need the
update sooner. It works really well.
> 2. What about logfiles - We would have all users mail etc on an NFS
> share - Can you do the same for logfiles?(Or do you get locking issues?)
> - From a statistical aspect, it would be a pain to have to collaborate
> each "real" servers logfiles, then run analysis. Also from a support
> perspective - How are support personnel supposed to know which "real"
> server a client would actually be connecting to in order to see if they
> are entering a wrong username/pass etc?
I don't have a lot of experience with this, but I would configure syslogd
to send its logging to a central "log server". In that configuration it is
clear which host each log entry came from.
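With the stock sysklogd that is one line on each client plus one option on
the server (the loghost name here is invented):

```
# On each web server, in /etc/syslog.conf:
*.*     @loghost.example.com

# On loghost, syslogd must run with -r to accept remote messages,
# e.g. SYSLOGD="-r" in its init-script configuration.
```

Since syslog records the sending hostname in each entry, support staff can
grep the central log rather than guessing which "real" server a client hit.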
> 3. Imaging of Servers - I have looked at SystemImager
> http://www.systemimager.org/, and it looks to do exactly what I want
> (i.e. be able to create a bootable CD from our SOE for deployment of new
> serverfarm boxes, or quick recovery from failure) - Can anyone provide
> feedback as to it's effectiveness?
I am still struggling with systemimager. The machines I want to image have
gigabit Ethernet devices that require a newer kernel than was available
when I first tried it (about 2 months ago). I didn't have the time to get
it working, but I don't think that was systemimager's fault: I had trouble
getting a new kernel compiled with the new Ethernet driver and simply ran
out of time.
Hopefully I can get back to it, because it does seem like exactly the
right tool for the job.