
Re: high performance, highly available web clusters



On Wed, May 19, 2004 at 11:48:31PM -0600, David Wilk wrote:
...
> The cluster is comprised of a load-balancer, several web servers
> connected to a redundant pair of NFS servers and a redundant pair of
> MySQL servers.  The current bottle-neck is, of course, the NFS servers.
> However, the entire thing needs an increase in capacity by several
> times.
...
> The expensive option would be to add a high-performance SAN, which
> would do the trick for all of the servers that require
> high-performance shared storage.  This would solve the NFS
> performance problems.
> 
> However, for a lot less money, one could simply do away with the file
> server entirely.  Since this is static content, one could keep these
> files locally on the webservers and push the content out from a
> central server via rsync.  I figure a pair of redundant internal web
> servers could be used as 'staging servers' for content updates.  Once
> tested, the update could be pushed to the production servers with a
> script using rsync and ssh (sketched just after this quote).  Each
> server would, of course, require a fast and redundant disk subsystem.
> 
> I think the lowest-cost option is to increase the number of image
> servers, beef up the NFS and MySQL servers, and add to the number of
> web servers in the cluster.  This doesn't really solve the design
> problem, though.
> 
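
For what it's worth, the rsync-and-ssh push described above boils down
to something like this minimal sketch (the host names and the docroot
path are placeholders of mine, not anything from David's setup):

    #!/bin/sh
    # Push tested content from the staging docroot to each production
    # web server over ssh; --delete keeps the mirrors exact copies.
    SRC=/var/www/htdocs/
    for host in web01 web02 web03 web04; do
        rsync -az --delete -e ssh "$SRC" "$host:/var/www/htdocs/"
    done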

Personally, I can't see the sense in replacing a set of NFS servers
with individual disks. While you might save money going with local
disks in the short run, your maintenance costs (more so the time cost
than the dollar cost) would increase accordingly. Just dealing with
lots of extra moving parts puts a shiver down my spine.

I'm not sure how your 'static content' fits in with your mention of
multiple MySQL servers; that seems dynamic to me - or at least implies
the ability to serve a fair amount of dynamic content.

If you ARE serving up a lot of static content, I might recommend a
setup similar to a project I worked on for a $FAMOUSAUTHOR, where we
put multiple web servers behind a pair of L4 switches. The switches (a
pair for redundancy) did the load balancing for us, and we ran THTTPD
on the servers. There were a few links to offsite content, where
content hosting providers (I cannot remember the first one, but they
later went with Akamai) served up the larger files people came to
download. Over the millions of hits we got, it survived quite nicely.
We ran out of bandwidth (50Mb/s) before the servers even blinked.
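
Thttpd needs almost nothing in the way of configuration for a
static-only box; a minimal thttpd.conf sketch (the paths and user name
here are generic placeholders, not from that project):

    # /etc/thttpd.conf -- static files only, chrooted, no CGI
    port=80
    dir=/var/www/htdocs
    chroot
    user=nobody
    logfile=/var/log/thttpd.log
    pidfile=/var/run/thttpd.pid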

Perhaps if it IS static, you might also consider loading your content
into a RAMdisk, which would probably provide the fastest access time.
I might well consider such a thing these days, given the dirt-cheap
price of RAM.
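
On Linux that idea is one mount away; a sketch, assuming the whole
tree fits comfortably in 512MB (sizes and paths are placeholders):

    # Create a RAM-backed docroot and fill it from local disk at boot.
    mount -t tmpfs -o size=512m tmpfs /var/www/htdocs
    rsync -a /var/www/htdocs.master/ /var/www/htdocs/

The tmpfs contents vanish on reboot, of course, so the copy step
belongs in an init script.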

I think some kind of common disk (NFS, whatever, on RAID) is your
best solution. 
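
If you stay with NFS, exporting read-only to the web tier keeps the
failure modes simple; a sketch, with placeholder names and addresses:

    # /etc/exports on the NFS server: read-only export to the web tier
    /export/www    10.0.0.0/24(ro,async,no_subtree_check)

    # /etc/fstab line on each web server: hard, read-only mount
    nfs1:/export/www  /var/www/htdocs  nfs  ro,hard,intr  0 0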

HTH

j
-- 

==================================================
+ It's simply not       | John Keimel            +
+ RFC1149 compliant!    | john@keimel.com        +
+                       | http://www.keimel.com  +
==================================================


