
Re: hardware/optimizations for a download-webserver



On Sat, 17 Jul 2004 05:42, Skylar Thompson <skylar@cs.earlham.edu> wrote:
> As long as we're not talking about 486-class machines, the processor is not
> going to be the bottleneck; the bandwidth is. Multiplying 150 peak users by
> 50kB/s gives 7.5MB/s, so your disks should be able to spit out at least
> 5MB/s. You should also make sure you have plenty of RAM (at least 512MB) to
> make sure you can cache as much of the files in RAM as possible.

As long as we are not talking about 486-class hardware, the disks can handle 
>5MB/s.  In 1998 I bought the cheapest available Thinkpad with a 3G IDE disk 
and it could do that speed for the first gigabyte of the disk.  In 2000 I 
bought a newer Thinkpad with a 7.5G IDE disk which could do >6MB/s over the 
entire disk and >9MB/s for the first 3G.  Also in 2000 I bought some cheap 
46G IDE disks which could do >30MB/s for the first 20G and >18MB/s over the 
entire disk.

If you buy one of the cheapest IDE disks available new (i.e. not stuff that's 
been on the shelf for a few years) and connect it to an ATA-66 or ATA-100 
bus on the cheapest ATX motherboard available, then you should easily be able 
to do bulk reads at speeds in excess of 40MB/s, and probably >50MB/s for 
some parts of the disk.  I haven't had a chance to benchmark any of the 
10,000rpm S-ATA disks, but I would hope that they could sustain bulk read 
speeds of 70MB/s or more.
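
If you want numbers for your own hardware then bonnie++ (URL in my signature) 
is the proper tool, but a rough Python hack along the following lines shows 
the sort of measurement I mean: it times bulk reads at a few offsets so you 
can see the speed drop towards the end of the disk.  The device name and read 
sizes are only examples, and repeated runs over the same region may be served 
from cache.

#!/usr/bin/env python3
# Rough sketch, not a substitute for bonnie++: time bulk sequential reads
# at a few offsets on a disk (or a large file) to see how the transfer
# rate varies across the platter.  Device path and sizes are examples.
import os, sys, time

DEV = sys.argv[1] if len(sys.argv) > 1 else "/dev/hda"   # example device
CHUNK = 1024 * 1024          # read in 1MB chunks
TOTAL = 256 * 1024 * 1024    # read 256MB per sample point

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

for fraction in (0.0, 0.5, 0.9):             # start, middle, near the end
    offset = int(size * fraction) & ~(CHUNK - 1)   # align to 1MB
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.time()
    done = 0
    while done < TOTAL:
        buf = os.read(fd, CHUNK)
        if not buf:                          # hit the end of the device
            break
        done += len(buf)
    rate = done / (time.time() - start) / (1024 * 1024)
    print("offset %3d%%: %.1f MB/s" % (fraction * 100, rate))

os.close(fd)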

The next issue is seek performance.  Getting large transfer rates when reading 
large amounts of data sequentially is easy.  Getting large transfer rates 
while reading smaller amounts of data is more difficult.  Hypothetically 
speaking, if you wanted to read data in 1K blocks without any caching, and 
the reads were not in order, then you would probably find it difficult to 
sustain more than about 2MB/s even on a RAID array.  Fortunately modern hard 
disks have firmware that implements read-ahead (the last time I was purchasing 
hard disks, the model with 8M of read-ahead buffer was about $2 more than the 
one with 2M of read-ahead buffer).  When you write files to disk the OS will 
try to keep them contiguous as much as possible, so the read-ahead in the 
drive may help if the OS doesn't do decent caching.  However Linux does really 
aggressive caching of both meta-data and file data, and Apache should be 
doing reads with significantly larger block sizes than 1K.
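
To see the difference yourself, a quick hack like the following compares 
sequential and random 1K reads on a single large file.  The file name, sizes, 
and counts are only examples, and on a warm Linux page cache the random case 
will look much better than a cold disk would.

#!/usr/bin/env python3
# Toy comparison of sequential vs. random small-block reads on one large
# file, to illustrate how seeks limit throughput.  The file name and sizes
# are examples; the file should be at least a few hundred MB.
import os, random, time

PATH = "testfile"            # example: a large file on the disk under test
BLOCK = 1024                 # 1K reads, as in the example above
COUNT = 8192                 # number of reads per test

def run(offsets):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.time()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.read(fd, BLOCK)
    elapsed = time.time() - start
    os.close(fd)
    return COUNT * BLOCK / elapsed / (1024 * 1024)

size = os.path.getsize(PATH)
seq = [i * BLOCK for i in range(COUNT)]
rnd = [random.randrange(0, size - BLOCK) for _ in range(COUNT)]

print("sequential 1K reads: %.1f MB/s" % run(seq))
print("random 1K reads:     %.1f MB/s" % run(rnd))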


I expect that if you get a P3-800 class machine with a 20G IDE disk and RAM 
that's more than twice the size of the data to be served (easy when it's 
only 150M of data), then there will not be any performance problems.
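
If you want to check that the data really is being served from cache, 
something like this rough script reads the whole document root twice; the 
second pass should run at memory speed rather than disk speed.  The /var/www 
path is just an example.

#!/usr/bin/env python3
# Quick check of the "RAM is more than twice the data" idea: read the whole
# document root twice.  Once it is all in the Linux page cache the second
# pass should run at memory speed.  The path below is just an example.
import os, time

DOCROOT = "/var/www"         # example document root

def read_all(root):
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                while True:
                    buf = f.read(1024 * 1024)
                    if not buf:
                        break
                    total += len(buf)
    return total

for label in ("first pass", "cached pass"):
    start = time.time()
    total = read_all(DOCROOT)
    elapsed = max(time.time() - start, 1e-6)
    print("%s: %dMB in %.2fs (%.1f MB/s)" %
          (label, total / (1024 * 1024), elapsed, total / elapsed / (1024 * 1024)))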

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page


