
Re: hardware/optimizations for a download-webserver

On Fri, Jul 16, 2004 at 08:53:21PM +0200, Henrik Heil wrote:
> Hello,
> please excuse my general questions.
> A customer asked me to setup a dedicated webserver that will offer ~30 
> files (each ~5MB) for download and is expected to receive a lot of 
> traffic. Most of the users will have cable modems and their download 
> speed should not drop below 50KB/sec.
> My questions are:
> What would be adequate hardware to handle, e.g., 50 (average)/150 (peak) 
> concurrent downloads?
> What is the typical bottleneck in this setup?
> What optimizations should I apply to a standard woody or sarge 
> installation? (anything kernelwise?)

As long as we're not talking about 486-class machines, the processor is not
going to be the bottleneck; the bandwidth is. Multiplying 150 peak users by
50kB/s gives 7.5MB/s, so your disks should be able to sustain at least
7.5MB/s. You should also have plenty of RAM (at least 512MB) so that as much
of the files as possible can be cached in memory.
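
To double-check that arithmetic, here is a back-of-the-envelope sketch (the
user counts and per-user rate are just the numbers from your mail, not
measurements):

```python
# Rough capacity math for the download server (figures from the original mail)
peak_users = 150   # peak concurrent downloads
rate_kb_s = 50     # minimum acceptable per-user rate, kB/s

total_kb_s = peak_users * rate_kb_s          # aggregate throughput needed
total_mb_s = total_kb_s / 1000.0             # what disks/NIC must sustain
total_mbit_s = total_kb_s * 8 / 1000.0       # links are rated in bits/s

print(total_mb_s)     # MB/s the machine must push at peak
print(total_mbit_s)   # Mbit/s -- well beyond a 10Mbit uplink
```

Note the last line: 60Mbit/s at peak means the uplink, not the box, is the
first thing to size correctly.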
> I have experience with less specialized servers (apache1.x/php4.x 
> hosting on debian/woody/sarge) but never really hit any limits with those.
> I thought about:
> - tuning apache (obviously) -- raising Max/MinSpareServers, AllowOverride 
> none, FollowSymLinks,...

StartServers and the Min/MaxSpareServers settings are probably the most
important options to tweak. As a starting point, start at least 20 servers
and keep the number of spare servers above five, but you'll have to
experiment with it in production to see what works best.
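
For example, a starting point in httpd.conf might look like this (these
numbers are guesses to tune under real load, not recommendations):

```apache
# Illustrative starting values only -- watch the server-status numbers
# under real traffic and adjust.
StartServers       20
MinSpareServers     5
MaxSpareServers    15
MaxClients        150
KeepAlive          On
KeepAliveTimeout    5
```

MaxClients matching your expected 150-download peak keeps Apache from
spawning more children than the box can feed.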

You might also get some performance boost by turning off all the
unnecessary modules like mod_php and mod_perl if you don't need them.

> - putting the files on a ramdisk or using mod_mmap_static (only ~600MB 
> altogether)

You could try putting everything in a RAM disk, but if it's relatively
static content and you have plenty of RAM the kernel will eventually cache
everything in RAM anyways.
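
If you do want to try it, a tmpfs mount is the simplest route (paths and
sizes here are made up for illustration; tmpfs contents vanish on reboot,
so you'd repopulate it from a boot script):

```shell
# Assumed paths -- adjust to your DocumentRoot. ~600MB of files, so 768MB tmpfs.
mount -t tmpfs -o size=768m tmpfs /var/www/downloads
cp /srv/masters/* /var/www/downloads/
```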
> - replacing apache with fnord (http://www.fefe.de/fnord/) or cthulhu 
> (http://cthulhu.fnord.at/). Can anyone share experiences with these?

This might help, but these might have their own configuration problems. If
you're more familiar with Apache, you'll probably have an easier time
tweaking it than something unfamiliar.
> - (as a last resort) using 2 loadbalancing servers with lvs 
> (http://www.linuxvirtualserver.org/).

This might help, but it'll add another layer of complexity that could fail.
I'd rather build one good machine than two less-good machines.

-- Skylar Thompson (skylar@cs.earlham.edu)
-- http://www.cs.earlham.edu/~skylar/
