
Fwd: Multiple servers for 1 domain name?



On Mon, 2003-02-10 at 16:28, Jason Lim wrote:
Hi All,

I was wondering if you guys are aware of any solution for multiple servers
to serve 1 domain name?

That is... like those big ISPs that have "user" webhosting.

http://members.isp.com/joe/ (goes to server no. 5)
http://members.isp.com/jane/ (goes to server no. 3)
http://members.isp.com/someone/ (goes to server no. 2)
[...]
Basically the reason for doing this is that the existing single server
is overloaded, and we need to split the workload across 1 or 2 more servers.

Some of the problem might be solved by moving the database to a
dedicated machine. If that doesn't spread the load enough, doing a DNS
round-robin (having members.isp.com resolve to two different machines
with exactly the same setup) might solve the problem. The web
directories etc. (especially the writeable areas for CGI programs) would
need to be shared, with NFS for example (which might create some locking
problems, so you'd need to be careful...).
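
As a rough sketch of the round-robin part (the hostnames and addresses
here are made up), it is just two A records for the same name in the
isp.com zone file:

    ; zone file fragment for isp.com
    ; two A records for one name -- BIND hands them out in rotation
    members    IN  A   192.0.2.10   ; web server 1
    members    IN  A   192.0.2.11   ; web server 2

Clients then get the two addresses in alternating order, which spreads
requests roughly evenly across the two machines.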

Of course, this means requests go to a random server, so PHP's
session tracking etc. will have problems. If you use such
things, it might be worth first investigating whether simply upgrading
the existing server would already bring relief (where exactly is the
performance problem? Disk? CPU? Memory?)
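
If you do stay with plain round-robin, one possible workaround for the
PHP session problem (just a sketch; the path is made up, and the same
NFS locking caveat applies) is to point PHP's file-based session store
at a directory on the shared export, so every server sees the same
session files. In php.ini:

    ; store session files on the NFS-shared area so any server
    ; in the rotation can pick up an existing session
    session.save_handler = files
    session.save_path    = /var/nfs/phpsessions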

If you start implementing a server farm architecture and run into the problem of PHP sessions, SSL sessions, etc., then you may want to invest in a hardware web switch, like an F5 or a Foundry. Yep, they're expensive, but they're super fast and they will hold a particular user's session to a single server for the duration of that session, which fixes the PHP/SSL issues.

Also, they have the added feature of being able to do heartbeat/ping checks on each server in the cluster. If a particular server goes down, the switch will automatically redirect requests to the other servers until that broken server comes back online. DNS round robin, on the other hand, will keep resolving to a broken server, making your availability = numgoodservers / totalservers. Not a good thing if you're running two servers and one goes bad: instant 50% availability. Furthermore, you can assign weights to each server, so if you have some old systems that you still want in the cluster, you can add them with a lower weight and they'll get hit less than the brand new Dells you just bought (<- that one's for you Russell Coker. :)

I'm currently implementing this setup for our web hosting company, and so far it's working great: NFS/MySQL on a big RAID-5 (3Ware) file server, with dedicated 100baseT lines to multiple app servers, which are all behind a Foundry ServerIronXL. Works like a champ. It's even better if your applications use some sort of central directory for authentication, like LDAP or MySQL (vpopmail, proftpd, Apache auth, etc.). If I need additional throughput, I'll probably upgrade to Gigabit Ethernet between the app servers and the file server, and add more app servers. Eventually I'll also be adding a redundant file server that mirrors the main one.
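
For what it's worth, the file-server side of that boils down to something
like the following (hostnames, paths and options are only an example of
how it could be set up):

    # /etc/exports on the file server -- share the web tree with the app servers
    /export/www    appserver1(rw) appserver2(rw)

    # /etc/fstab on each app server -- mount the shared docroot over the
    # dedicated 100baseT link
    fileserver:/export/www  /var/www  nfs  rw,hard,intr,rsize=8192,wsize=8192  0 0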

Hope that helps-
Eric


