
Re: Linux Virtual Server question(s)



On Sun, Feb 17, 2002 at 10:38:56AM -0500, Keith Elder wrote:
> I am in need of adding more servers for redundancy's sake as well as
> performance.  I have been looking at using the Linux Virtual Server
> project for load balancing and redundancy of accounts.  I thought I
> would see if anyone on the list is using LVS, has any thoughts or
> would like to recommend something better.  
> 
> The main things I am looking for are:
> 
> * easy to add new nodes 
> * easy to manage user accounts 
> * new nodes being added don't have to be hardware specific 
> * load balancing and some fault tolerance

LVS does all of the above, except for managing user accounts.  that's
not its job - you would use other standard tools for that (e.g. LDAP or
NIS).  if you want the accounts to have the same home directory on all
nodes then you need some kind of file server for the home directories.

e.g. for a web server farm with many virtual hosts, with each vhost
having all related files & directories (cgi-bin/, public_html/, etc) in
a single account's home directory, the minimum you would need is:

1 x file server   (lots of RAID5 disk, memory & cpu. maybe gigabit ethernet)
1 x load balancer (a celeron with 128MB or better.  100baseT)
n x web servers   (a celeron with 256MB or better.  100baseT)
1 x 100baseT switch (perhaps with 1 Gigabit port)

the file server would provide access to /home via NFS (or perhaps via
CODA).  it might also run a postgres or other database server if that is
a requirement for any of the vhosts (alternatively, run a separate
database server).
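
e.g. a minimal /etc/exports on the file server might look something
like this (the hostnames are made-up examples, adjust to suit):

    # export /home read-write to the web servers
    /home    web1.example.com(rw,sync) web2.example.com(rw,sync)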

you would run an LDAP server (to provide account & auth info to the web
servers) on either the file server or the load balancer (the LB is going
to be mostly idle - switching packets is NOT hard work for a celeron at
all, and barely touches the disk).
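
if you use OpenLDAP, a bare-bones slapd.conf fragment might look
something like this (the suffix, rootdn, etc are example values only):

    # fragment of /etc/ldap/slapd.conf
    database   ldbm
    suffix     "dc=example,dc=com"
    rootdn     "cn=admin,dc=example,dc=com"
    rootpw     secret          # use a hashed password in real life
    directory  /var/lib/ldap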

the file server would also run an ftp daemon - this is where users would
upload their web pages to.

the load balancer would be configured to balance port 80 traffic over
the web servers.
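
e.g. with ipvsadm, using NAT and weighted least-connections scheduling
(the virtual IP and real server addresses below are made-up examples):

    # create the virtual service on port 80
    ipvsadm -A -t 10.0.0.1:80 -s wlc
    # add two real (web) servers behind it, masqueraded
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.11:80 -m
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.12:80 -m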

the web servers would be configured to get their account info from LDAP,
and mount /home from the file server.
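
concretely, that means something like this on each web server
("fileserver" is a placeholder hostname, and this assumes the nss_ldap
module is installed):

    # /etc/nsswitch.conf (fragment) -- check local files, then LDAP
    passwd:  files ldap
    group:   files ldap
    shadow:  files ldap

    # /etc/fstab (fragment) -- mount /home from the file server
    fileserver:/home   /home   nfs   rw,hard,intr   0 0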


for extra redundancy, add a second load balancer and replicate the LDAP
store to a second LDAP server (perhaps running on the 2nd LB or on one
of the web servers).
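
if you're running OpenLDAP, replication means running slurpd on the
master and adding something like this to the master's slapd.conf (the
hostname and credentials are made-up examples):

    replica     host=ldap2.example.com:389
                binddn="cn=replica,dc=example,dc=com"
                bindmethod=simple credentials=secret
    replogfile  /var/lib/ldap/replog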

the file server constitutes a single point of failure.  if it dies, the
whole web farm goes down with it.  this is the hardest (and most
expensive) thing to deal with.  you can either have a second file server
ready to swap in, or use a storage area network (SAN).  there are no
good & cheap solutions to this - you either accept the fact that a
failure here will cause some downtime (you can take steps to minimise
the downtime) or you spend a fortune on a SAN.

some ways of minimising downtime for the file server are:

 - use RAID5 with at least one hot-spare drive, and all drives in
   hot-swap cradles.
 - also have spare drives just sitting on a shelf ready to swap in as
   soon as a drive dies (raid5 can recover from one drive dying but not
   two simultaneously, so you want to replace the dead one ASAP)
 - which implies that you need to burn-in your raid array for a few
   weeks at least before going live so you can eliminate the dud drives
   (if any) from the batch you bought.
 - have the drives in an external hardware raid drive box.  if the
   motherboard or scsi card on the file server dies, you can swap in a
   replacement machine in a few minutes (or have it already connected
   up to the scsi chain and use heartbeat to have the spare FS take
   over automatically - see the sketch below)
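
for the heartbeat option, a minimal /etc/ha.d/haresources (heartbeat v1
style) might look something like this - the node name, service IP,
device and init script are all made-up examples:

    # fs1 is the preferred owner.  if it dies, the standby node takes
    # over the IP, mounts /home and starts the NFS server
    fs1  10.0.0.2  Filesystem::/dev/sda1::/home::ext2  nfs-kernel-server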


Some possible variations:

if you went the fibre-channel SAN route and used OpenGFS you could
eliminate the file server...but a) that's extremely expensive, and b)
you'll still need a database server for any database backed web sites.

alternatively, if it's just for one web site or a handful of web sites
(rather than many vhosts) you could eliminate the file server - a)
designate one of the web servers as "master" b) use rsync from a cron
job to periodically update the other web servers.  i doubt this would
scale beyond a dozen or so sites - the maintenance overhead would be
too high.
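
e.g. crontab entries on the master like these (the hostnames and paths
are made-up examples, and this assumes rsh or ssh access from the
master to the other servers):

    # push the document roots to the other web servers every 15 minutes
    */15 * * * *   rsync -a --delete /var/www/ web2:/var/www/
    */15 * * * *   rsync -a --delete /var/www/ web3:/var/www/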


> I look forward to hearing what the list recommends.  If you are
> currently using LVS, I would be very interested as to how you went
> about setting things up.

in my experience, LVS is *very* easy to set up.  the documentation is
very good.  i got it working (for a squid cache array) in a few days
about 18 months ago just by reading the docs and experimenting.  I
would recommend using LVS over any of the commercial layer-4 switches
on the market.

setting up LVS is easy.

sharing accounts and files in a load-balanced server array is not so
easy, and requires a lot of advance planning to get it right.

you have a lot of reading ahead of you.  and then a lot of work. good
luck.

craig

-- 
craig sanders <cas@taz.net.au>

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch


