Re: Finding the Bottleneck
Here is the result of "top":
05:51:18 up 5 days, 22:38, 1 user, load average: 6.60, 7.40, 6.51
119 processes: 106 sleeping, 11 running, 2 zombie, 0 stopped
CPU states: 16.4% user, 18.3% system, 0.0% nice, 65.3% idle
Mem: 128236K total, 124348K used, 3888K free, 72392K buffers
Swap: 289160K total, 0K used, 289160K free, 9356K cached
And of "qmail-qstat":
messages in queue: 108903
messages in queue but not yet preprocessed: 19537
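The "not yet preprocessed" count is the figure worth watching over time; a minimal sketch of parsing qmail-qstat output to track it (the function name is just for illustration):

```python
import re

def parse_qstat(output):
    """Extract (total, not_yet_preprocessed) from qmail-qstat output."""
    total = int(re.search(r"messages in queue: (\d+)", output).group(1))
    pending = int(re.search(
        r"messages in queue but not yet preprocessed: (\d+)",
        output).group(1))
    return total, pending

# Sample taken from the figures above:
sample = """messages in queue: 108903
messages in queue but not yet preprocessed: 19537"""
print(parse_qstat(sample))  # (108903, 19537)
```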
Swap is on Disk 1, because mail queue/spool is on Disk 2.
I also already added the "-" in front of most entries except the emergency
and critical ones (without it, the load was way higher just from writing
the log files).
Concerning the mail queue and spool being on the same disk, the reason is
that there is virtually no email incoming; 99.999% is outgoing.
About running software RAID... I've heard that CPU usage increases
dramatically if you use any form of software RAID. Is that true?
Actually... I doubt the customer would be willing to pay us to implement
this for him at the hardware level. Good RAID cards with decent amounts of
RAM don't come cheap, last time I checked... :-/
----- Original Message -----
From: "Russell Coker" <firstname.lastname@example.org>
To: "Jason Lim" <email@example.com>; <firstname.lastname@example.org>
Sent: Wednesday, June 06, 2001 8:05 PM
Subject: Re: Finding the Bottleneck
On Wednesday 06 June 2001 10:51, Jason Lim wrote:
> Just so you know, this server is an:
> AMD K6-2 500Mhz, 128M-133Mhz, 2 UDMA100 drives (IBM), 10M bandwidth.
How much swap is being used? If you have any serious amount of mail being
delivered then having a mere 128M of RAM will seriously hurt performance!
RAM is also cheap and easy to upgrade...
> mainly for the mailing lists. The 2 hard disks are on 2 different IDE
> channels, as putting both disks on the same cable would drastically reduce
> the performance of both disks.
In my tests so far I have not been able to show a drastic performance
difference. I have shown about a 20% performance benefit for using separate
cables.
> The way it is organized is that the mail spool/queue is on the 2nd disk,
> while the OS and programs are on disk 1. Logging is also performed on disk
> 1, so that writing to the mail log won't interfere with the mail queue (as
> they commonly both occur simultaneously).
Where is swap?
Take note of the following paragraph from syslog.conf(5):
You may prefix each entry with the minus ``-'' sign to
omit syncing the file after every logging. Note that you
might lose information if the system crashes right behind
a write attempt. Nevertheless this might give you back
some performance, especially if you run programs that use
logging in a very verbose manner.
Do that for all logs apart from kern.log! Then syslogd will hardly use the
disk at all.
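For example, a syslog.conf along these lines (file paths are typical Debian defaults, shown only for illustration) keeps synchronous writes for the kernel log and buffers everything else:

```
# kern.log stays synced after every write; everything else gets the
# "-" prefix so syslogd does not fsync on each message.
kern.*                          /var/log/kern.log
mail.*                          -/var/log/mail.log
*.*;kern.none;mail.none         -/var/log/syslog
```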
> From MY understanding, the "load average" shows how many programs are
> running, and not really how "stressed" the CPU is. I'm not exactly sure
> how this works (please correct me if I'm wrong), but 1 program taking
> 80% CPU might have a load average of 2, while 100 programs taking 0.5%
> each would take 50% CPU and have a load average of 8. Is that correct
> thinking?
1 program taking up all CPU time will give a load average of 1.00. 1 program
being blocked on disk IO (EG reading from a floppy disk) will also give a
load average of 1.00. Two programs blocked on disk IO to different disks and
a third program that's doing a lot of CPU usage will result in a load average
of 3.00 while the machine is running as efficiently as it can.
Load average isn't a very good way of measuring system use!
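For what it's worth, the kernel computes those figures as exponentially-damped moving averages of the number of runnable-plus-disk-blocked processes, sampled every 5 seconds. A rough sketch of the 1-minute figure (function name is just for illustration):

```python
import math

# Damping factor for the 1-minute average, sampled every 5 seconds.
DECAY = math.exp(-5.0 / 60.0)

def update_load(load, active_tasks):
    """One 5-second tick: decay the old average toward the current
    count of runnable or disk-blocked tasks."""
    return load * DECAY + active_tasks * (1.0 - DECAY)

# One task that is always runnable (or always blocked on IO) drives
# the average toward 1.00, matching the explanation above.
load = 0.0
for _ in range(120):  # 10 simulated minutes
    load = update_load(load, 1)
print(round(load, 2))  # 1.0
```

This is why a single process stuck on a slow floppy read shows the same load as one burning 100% CPU: the average counts tasks, not CPU cycles.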
> We don't use NFS on this server. NFS on Linux, as you said, is pretty
> crummy and should be avoided if possible. We simply put the mail queue on
> a separate hard disk.
Actually if you have the latest patches then NFS should be quite solid.
Now firstly, the OS and the syslog will not use the disk much at all if you
have enough RAM that the machine doesn't swap and has some spare memory for
caching. Boost the machine to 256M. Don't bother with DDR RAM as it won't
gain you anything; get 384 or 512M if you can afford it.
Next, the most important thing for local mail delivery is to have the queue
on a separate disk to the spool. Queue and spool writes are independent and
the data is immediately sync'd. Having them on separate disks can provide
serious performance benefits.
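The "sync'd" part is the key point: the MTA forces each queue file to the platter before acknowledging the message, roughly like the sketch below (filenames and the helper are illustrative, not qmail's actual queue layout):

```python
import os
import tempfile

def write_synced(path, data):
    """Write a queue file and force it to disk before returning,
    the way an MTA must before acknowledging a message."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the data is physically on disk
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "msg.1")
write_synced(path, b"Received: ...\n")
```

Every fsync costs a seek, so interleaving synchronous queue writes with spool writes on the same spindle is exactly what makes sharing one disk expensive.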
Also, if your data is at all important to you then you should be using RAID.
Software RAID-1 in the 2.4.x kernels, and with the patch for 2.2.x kernels,
is very solid. I suggest getting 4 drives and running two RAID-1 sets, one
for the OS and queue, the other for the spool. RAID-1 will improve read
speed, as the system will be able to execute two read requests from the
RAID-1 at once.
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/ Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page