
Re: Multiple servers for 1 domain name?




On Monday, February 10, 2003, at 02:24 PM, Thomas Lamy wrote:

Eric Jennings wrote:
On Mon, 2003-02-10 at 16:28, Jason Lim wrote:
Hi All,

I was wondering if you guys are aware of any solution for multiple
servers to serve 1 domain name?

That is... like those big ISPs that have "user" webhosting.

http://members.isp.com/joe/ (goes to server no. 5)
http://members.isp.com/jane/ (goes to server no. 3)
http://members.isp.com/someone/ (goes to server no. 2)
[...]
Basically the reason for doing this is that the existing single server
is overloaded, and I need to split the workload across 1 or 2 more
servers.

Some of the problem might be solved by moving the database to a
dedicated machine. If that doesn't spread the load enough, a DNS
round-robin (having members.isp.com resolve to two different machines
with exactly the same setup) might solve the problem. The web
directories etc. (especially writeable areas for CGI programs) would
need to be shared, with NFS for example (which might create some
locking problems, so you'd need to be careful...).
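
For illustration, the round-robin part could look something like this in a
BIND-style zone file (the name and addresses here are just placeholders);
the name server rotates the order of the A records between queries, so
clients end up spread across both machines:

    ; two A records for the same name -> answers rotate between queries
    members    IN  A    192.0.2.10    ; web server 1
    members    IN  A    192.0.2.11    ; web server 2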

Of course, round-robin DNS sends each request to a random server, so
PHP's session tracking etc. will have problems. If you use such things,
it could already bring relief to investigate simply upgrading the
existing server first (where exactly is the performance problem? Disk?
CPU? Memory?)

If you start implementing the server farm architecture and run into the
problem of PHP sessions, SSL sessions, etc., then you may want to
invest in a hardware web switch, like an F5 or Foundry.  Yep, they're
expensive, but super fast, and they will hold a particular user's
session to a single server for the duration of that session, fixing
the PHP/SSL issues.

Also, they have the added feature of being able to check
heartbeats/pings of each server in the cluster.  If a particular server
goes down, the switch will automatically redirect requests to the
other servers until the broken server comes back online.  DNS round
robin will continue to resolve to a broken server, making your uptime
availability = numgoodservers / totalservers.  Not a good thing if
you're running two servers and one goes bad: instant 50% availability.
Furthermore, you can assign weights to each server, so if you have
some old system that you still want in the cluster, you can add it
with a lower weight and it will get hit less than the brand new Dells
you just bought (<- that one's for you, Russell Coker. :)
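
The switch configuration itself is proprietary, but for comparison, the
same two ideas - sticky sessions and per-server weights - can be sketched
with Linux's LVS tool ipvsadm (the addresses below are just placeholders):

    # virtual HTTP service on the public IP, weighted round robin,
    # keeping each client on the same real server for 30 minutes
    ipvsadm -A -t 192.0.2.1:80 -s wrr -p 1800
    # the new, fast box gets a higher weight ...
    ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.11:80 -g -w 3
    # ... while the old box stays in the pool with a lower weight
    ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.12:80 -g -w 1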

One may also use LVS (http://linuxvirtualserver.org/) for that. It's free,
and it runs like a charm. There is also connection persistence, one may
choose between different weighting algorithms, etc. And with keepalived
(http://keepalived.sourceforge.net/) you may set up two LVS directors in a
highly available fashion, and have full control over how, and how often,
the servers' health is checked (simple things like a TCP connect only, or
MD5 hashes of different pages, or ...).
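
A rough sketch of what such a keepalived.conf could look like - the
addresses, the /alive.html check page, and the digest value are just
placeholders, and the second director runs the same file with state
BACKUP and a lower priority:

    ! failover of the virtual IP between the two directors (VRRP)
    vrrp_instance VI_WEB {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150
        virtual_ipaddress {
            192.0.2.1
        }
    }

    ! weighted round robin, direct routing, 30-minute client persistence
    virtual_server 192.0.2.1 80 {
        delay_loop 10
        lb_algo wrr
        lb_kind DR
        persistence_timeout 1800
        protocol TCP

        ! checked via the MD5 hash of a known page
        real_server 10.0.0.11 80 {
            weight 3
            HTTP_GET {
                url {
                    path /alive.html
                    digest 0123456789abcdef0123456789abcdef
                }
                connect_timeout 3
            }
        }

        ! checked via a plain TCP connect only
        real_server 10.0.0.12 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
    }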

We have taken this approach, and are very happy with it.

We solved the problem of having only one MySQL server (a SPoF) by adding a
second one, replicating from the main server, but the problem of syncing
the file systems on the web servers is yet to be resolved. I dropped the
first idea of using one central file server with NFS - mostly because of
NFS itself, and because it would be another SPoF. Our tests with Coda were
also stopped, because of (a) problems with lock-ups, and (b) the admin
involved with the tests leaving. For now we stick to one master server
(located on one of the LVS directors), regularly rsync'ing to the web
servers.
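
As a rough sketch of the rsync part (host names, paths, and the cron
interval are just placeholders, and it assumes passwordless ssh from the
master to the web servers):

    #!/bin/sh
    # sync-webroots.sh - run from cron on the master every few minutes;
    # pushes the document roots out and removes files deleted on the master
    for host in web1 web2; do
        rsync -az --delete /var/www/ "$host":/var/www/
    done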

Interesting to hear about this approach. I tried to deploy LVS around three years ago, and it seemed like a huge, unstable mess; at that time it made sense to go ahead and purchase the Foundry. It's good to hear that you have it running successfully in a production environment.

My question, though, is: how would you set up redundant LVS directors? Could you offer a simple schematic?

And how often does your rsync run to sync the web servers to the master server? It seems that, with the number of clients we have FTPing things up and down, this would be a big problem if the rsyncs were anything other than immediate. (A lot of our clients are web developers who do the whole "upload-test-debug-repeat" development cycle with PHP, and if they have to wait 5 minutes after each upload for the files to rsync to the web servers, they'll be unhappy customers. And you know what they say about unhappy customers... :)

Your thoughts?
Eric


