
Re: Finding the Bottleneck



On Wednesday 06 June 2001 10:51, Jason Lim wrote:
> Just so you know, this server is an:
> AMD K6-2 500Mhz, 128M-133Mhz, 2 UDMA100 drives (IBM), 10M bandwidth.

How much swap is being used?  If you have any serious amount of mail being 
delivered then having a mere 128M of RAM will seriously hurt performance!  
RAM is also cheap and easy to upgrade...
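
Something like the following will tell you (a rough sketch; it assumes the
standard procps and util-linux tools are on the box):

    # "free" prints RAM and swap usage in KB; the Swap: line is the
    # one that matters here.
    free
    # swapon -s lists each swap area and how much of it is in use.
    swapon -s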

> mainly for the mailing lists. The 2 hard disks are on 2 different IDE
> channels, as putting both disks on the same cable would drastically reduce
> performance of both disks.

In my tests so far I have not been able to show a drastic performance 
difference.  I have measured about a 20% benefit from using separate 
cables...
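
If you want to measure it on your own hardware, a Bonnie++ run on each disk
along these lines should show the difference (the mount points and the -u
user are just examples):

    # Run the IO as an unprivileged user; -s is the test file size in
    # megabytes and should be at least twice RAM so the cache doesn't
    # hide the real disk speed.
    bonnie++ -d /mnt/disk1/tmp -s 512 -u nobody
    bonnie++ -d /mnt/disk2/tmp -s 512 -u nobody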

> The way it is organized is that the mail spool/queue is on the 2nd disk,
> while the OS and programs are on disk 1. Logging is also performed on disk
> 1, so that writing to the mail log won't interfere with the mail queue (as
> they commonly both occur simultaneously).

Where is swap?


Take note of the following paragraph from syslog.conf(5):
       You  may  prefix  each  entry with the minus ``-'' sign to
       omit syncing the file after every logging.  Note that  you
       might  lose information if the system crashes right behind
       a write attempt.  Nevertheless this might  give  you  back
       some  performance, especially if you run programs that use
       logging in a very verbose manner.

Do that for all logs apart from kern.log!  Then syslogd will cause hardly any 
disk access.
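
For example, the relevant entries in /etc/syslog.conf would end up looking
something like this (the exact selectors will differ on your system):

    # Buffered writes (note the leading "-"): fine for mail and
    # general logs.
    mail.*                          -/var/log/mail.log
    *.*;auth,authpriv.none          -/var/log/syslog
    # Keep kernel messages synchronous so the last lines survive a
    # crash.
    kern.*                          /var/log/kern.log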

> From MY understanding, the "load average" shows how many programs are
> running, and not really how "stressed" the CPU is. I'm not sure exactly
> sure how this works (please correct me if i'm wrong) but 1 program taking
> 80% CPU might have load average of 2, while 100 programs taking 0.5% each
> would take 50% CPU and have load average of 8. Is that correct thinking?

No.

One program taking up all CPU time will give a load average of 1.00.  One 
program blocked on disk IO (e.g. reading from a floppy disk) will also give a 
load average of 1.00.  Two programs blocked on IO to different disks plus a 
third program doing a lot of CPU work will result in a load average of 3.00, 
even though the machine is running as efficiently as it can.

Load average isn't a very good way of measuring system use!
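
To see the numbers the kernel actually reports (1, 5 and 15 minute averages
of processes that are runnable or blocked on IO):

    # Both show the same three load averages.
    uptime
    cat /proc/loadavg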

> We don't use NFS on this server. NFS on linux, as you said, is pretty
> crummy and should be avoided if possible. We simply put the mail queue on
> a seperate hard disk.

Actually if you have the latest patches then NFS should be quite solid.


Now firstly, the OS and syslog will hardly use the disk at all if you have 
enough RAM that the machine doesn't swap and has some memory spare for 
caching.  Boost the machine to 256M.  Don't bother with DDR RAM as it won't 
gain you anything; get 384M or 512M if you can afford it.
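
Once the RAM is in you can confirm that the swapping has stopped with
something like:

    # Print stats every 5 seconds; si/so should stay at 0 once there
    # is enough RAM, and the "cache" column shows how much memory is
    # being used for disk caching.
    vmstat 5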

Next, the most important thing for local mail delivery is to have the queue on 
a separate disk from the spool.  Queue and spool writes are independent and 
the data is sync'd immediately, so having them on separate disks can provide 
serious performance benefits.
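
As a sketch (the device names and mount points are hypothetical and depend on
your MTA and partitioning), the /etc/fstab entries might look like:

    # Queue on a partition of the first disk, spool on the second
    # disk (the other IDE channel).
    /dev/hda5   /var/spool/mqueue   ext2   defaults   0   2
    /dev/hdc1   /var/spool/mail     ext2   defaults   0   2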

Also, if your data is at all important to you then you should be using RAID.  
Software RAID-1 in the 2.4.x kernels (and with the patch for 2.2.x kernels) is 
very solid.  I suggest getting four drives and running two RAID-1 sets: one 
for the OS and queue, the other for the spool.  RAID-1 will also improve read 
speed, as the system can service two read requests from the same RAID-1 set 
at the same time.
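
With the raidtools that go with those kernels, the first pair would be set up
with an /etc/raidtab entry roughly like this (device names are only an
example), followed by mkraid /dev/md0 and a normal mke2fs:

    # RAID-1 set for the OS and queue; a second "raiddev /dev/md1"
    # stanza would do the same for the spool disks.
    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1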

-- 
http://www.coker.com.au/bonnie++/     Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/       Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/     My home page


