
Re: Finding the Bottleneck



BTW, I think I noticed something else as well (both before and after the
optimization described below).

sh-2.05# qmail-qstat
messages in queue: 17957
messages in queue but not yet preprocessed: 1229

With the entire queue running off one hard disk (disk 1), I never noticed
even a few messages sitting there unpreprocessed. It seems the system's
ability to preprocess messages has declined since I put the queue on disk 2.

I don't see any reason why... but anyway, facts are facts :-/
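
To see whether that backlog is a steady trend or just a momentary spike,
something like this could log the counters over time (rough sketch; the log
path is arbitrary):

    while true; do
        echo "`date '+%Y-%m-%d %H:%M'` `qmail-qstat | tr '\n' ' '`" >> /var/log/qstat.log
        sleep 60
    done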

Sincerely,
Jason

----- Original Message -----
From: "Jason Lim" <maillist@jasonlim.com>
To: <debian-isp@lists.debian.org>
Cc: "Russell Coker" <russell@coker.com.au>
Sent: Friday, June 08, 2001 10:14 PM
Subject: Re: Finding the Bottleneck


> I agree with you that splitting the mail queue to another server wouldn't
> help, especially since you've seen the top results, and know that it isn't
> very heavily loaded with other jobs in the first place. So I think you are
> very correct in saying that the hard disk is the limit here.
>
> Today I played around with hdparm to see if I could tweak some additional
> performance out of the existing drives, and it helped by about 10% (not a
> huge jump, but anything helps!).
>
> Specifically, I set /sbin/hdparm -a4 -c3 -d1 -m16 -u1 /dev/hdc:
>
>        -a     Get/set sector  count  for  filesystem  read-ahead.
>               This  is  used to improve performance in sequential
>               reads of large  files,  by  prefetching  additional
>               blocks  in anticipation of them being needed by the
>               running  task.   In  the  current  kernel   version
>               (2.0.10)  this  has  a default setting of 8 sectors
>               (4KB).  This value seems good  for  most  purposes,
>               but in a system where most file accesses are random
>               seeks, a smaller setting might provide better  per-
>               formance.   Also, many IDE drives also have a sepa-
>               rate built-in read-ahead function, which alleviates
>               the need for a filesystem read-ahead in many situa­
>               tions.
> (Since most of the emails are small and scattered randomly on the disk, I
> thought maybe a 2KB read-ahead might make more sense. Tell me if I'm wrong...
> because the performance jump may not be due to this setting)
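>
> A way to check whether the read-ahead change is really what helped: toggle
> only -a and time a crude random-read pass over the queue. Rough sketch; it
> assumes the default /var/qmail/queue location, and the second run will be
> flattered by the buffer cache unless you reboot (or read a different
> directory) in between:
>
>     /sbin/hdparm -a8 /dev/hdc    # back to the kernel default (8 sectors = 4KB)
>     time find /var/qmail/queue -type f | xargs cat > /dev/null
>     /sbin/hdparm -a4 /dev/hdc    # smaller read-ahead (4 sectors = 2KB)
>     time find /var/qmail/queue -type f | xargs cat > /dev/null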
>
>        -c     Query/enable  (E)IDE 32-bit I/O support.  A numeric
>               parameter can be used to enable/disable 32-bit  I/O
>               support:  Currently  supported  values include 0 to
>               disable 32-bit I/O support, 1 to enable 32-bit data
>               transfers,  and  3  to enable 32-bit data transfers
>               with a  special  sync  sequence  required  by  many
>               chipsets.  The value 3 works with nearly all 32-bit
>               IDE chipsets, but incurs  slightly  more  overhead.
>               Note  that "32-bit" refers to data transfers across
>               a PCI or VLB bus to the interface  card  only;  all
>               (E)IDE  drives  still have only a 16-bit connection
>               over the ribbon cable from the interface card.
>
> (Couldn't hurt to have it going 32 bit rather than 16 bit)
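>
> If anyone wants to see how much of the 10% comes from this flag alone,
> hdparm's own timing test is enough, since -c mostly affects raw transfer
> rate (sketch):
>
>     /sbin/hdparm -c0 /dev/hdc && /sbin/hdparm -t /dev/hdc    # 16-bit transfers
>     /sbin/hdparm -c3 /dev/hdc && /sbin/hdparm -t /dev/hdc    # 32-bit + sync sequence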
>
>        -d     Disable/enable the "using_dma" flag for this drive.
>               This  option  only works with a few combinations of
>               drives and interfaces which support DMA  and  which
>               are known to the IDE driver (and with all supported
>               XT interfaces).  In particular,  the  Intel  Triton
>               chipset is supported for bus-mastered DMA operation
>               with many drives (experimental).  It is also a good
>               idea to use the -X34 option in combination with -d1
>               to ensure that the drive itself is  programmed  for
>               multiword  DMA mode2.  Using DMA does not necessar-
>               ily provide any improvement in throughput or system
>               performance,  but  many  folks  swear  by it.  Your
>               mileage may vary.
> (this is a dma100 7200 drive so setting this couldn't hurt either. Didn't
> see much performance increase with this though)
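>
> Before forcing a transfer mode with -X it is worth checking what the drive
> and driver actually negotiated, since a wrong -X value can hang the bus;
> the last line is therefore left commented out (sketch, using hdparm's
> 64 + UDMA-mode-number convention):
>
>     /sbin/hdparm -i /dev/hdc | grep -i dma    # supported/selected DMA modes
>     # For an ATA/100 drive on a controller that supports UDMA mode 5:
>     # /sbin/hdparm -d1 -X69 /dev/hdc          # 64 + 5 = udma5; -X34 is only mword DMA 2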
>
>        -m     Get/set sector count for multiple sector I/O on the
>               drive.  A setting of 0 disables this feature.  Mul-
>               tiple  sector  mode (aka IDE Block Mode), is a fea-
>               ture of most modern IDE hard drives, permitting the
>               transfer  of  multiple  sectors  per I/O interrupt,
>               rather than the usual  one  sector  per  interrupt.
>               When  this feature is enabled, it typically reduces
>               operating system overhead for disk I/O  by  30-50%.
>               On  many  systems,  it also provides increased data
>               throughput  of  anywhere  from  5%  to  50%.   Some
>               drives,   however   (most  notably  the  WD  Caviar
>               series), seem to  run  slower  with  multiple  mode
>               enabled.   Your mileage may vary.  Most drives sup-
>               port the minimum settings of 2, 4, 8, or  16  (sec-
>               tors).   Larger  settings  may  also  be  possible,
>               depending on the drive.  A  setting  of  16  or  32
>               seems  optimal  on  many  systems.  Western Digital
>               recommends lower settings of 4  to  8  on  many  of
>               their  drives,  due to tiny (32kB) drive buffers and
>               non-optimized buffering algorithms.   The  -i  flag
>               can  be  used to find the maximum setting supported
>               by an installed drive (look for MaxMultSect in  the
>               output).   Some  drives  claim  to support multiple
>               mode, but lose data at some settings.   Under  rare
>               circumstances,  such failures can result in massive
>               filesystem corruption.
> (I set it to 16... do you think 32 would make more sense?)
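>
> The drive reports its own ceiling, so rather than guessing between 16 and
> 32 it is easier to ask it (sketch):
>
>     /sbin/hdparm -i /dev/hdc | grep -i multsect    # look for MaxMultSect=...
>
> If MaxMultSect comes back as 16, then 16 is already as high as -m can
> usefully go on this drive.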
>
>        -u     Get/set interrupt-unmask flag  for  the  drive.   A
>               setting  of  1  permits  the driver to unmask other
>               interrupts during processing of a  disk  interrupt,
>               which  greatly  improves Linux's responsiveness and
>               eliminates "serial port overrun" errors.  Use  this
>               feature  with caution: some drive/controller combi-
>               nations do not tolerate the increased I/O latencies
>               possible when this feature is enabled, resulting in
>               massive  filesystem  corruption.   In   particular,
>               CMD-640B  and RZ1000 (E)IDE interfaces can be unre-
>               liable (due to a hardware flaw) when this option is
>               used  with  kernel  versions  earlier  than 2.0.13.
>               Disabling the IDE prefetch feature of these  inter-
>               faces (usually a BIOS/CMOS setting) provides a safe
>               fix for the problem for use with earlier kernels.
> (this seems to have given the largest performance boost)
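>
> To confirm that all five flags stuck, and to get a repeatable before/after
> number, something like the following works. Note the settings are lost at
> power-off, so they need re-applying from a local boot script of your choice
> (sketch):
>
>     /sbin/hdparm -a -c -d -m -u /dev/hdc    # query current values
>     /sbin/hdparm -tT /dev/hdc               # -T = cache reads, -t = buffered disk reads
>     # re-apply at boot, e.g. from a local rc script:
>     # /sbin/hdparm -a4 -c3 -d1 -m16 -u1 /dev/hdc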
>
> Anyway... there it is. Maybe someone else could use these results to get a
> free 10% increase as well. I stupidly set write_cache on as well, which
> ended up trashing a bunch of stuff. Thank goodness the server was not being
> used at the time, and I immediately rebuilt the mail queue.
>
> Does anyone have any better configs than above, or some utility that could
> further boost performance?
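>
> For measuring whether any further tweak actually helps, Russell's bonnie++
> (linked in his signature below) gives more repeatable numbers than a one-off
> hdparm -t. Minimal sketch; directory, size and user are placeholders, and -s
> should comfortably exceed RAM so the buffer cache doesn't hide the disk:
>
>     bonnie++ -d /mnt/disk2/tmp -s 512 -u nobody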
>
> Sincerely,
> Jason
>
> ----- Original Message -----
> From: "Russell Coker" <russell@coker.com.au>
> To: "Jason Lim" <maillist@jasonlim.com>; "Brian May"
> <bam@snoopy.apana.org.au>
> Cc: <debian-isp@lists.debian.org>
> Sent: Friday, June 08, 2001 7:17 PM
> Subject: Re: Finding the Bottleneck
>
>
> On Friday 08 June 2001 12:25, Jason Lim wrote:
> > The network is connected via 100Mb to a switch, so server to server
> > connections would be at that limit. Even 10Mb wouldn't be a problem as
> > I don't think that much data would be crossing the cable.. would it?
>
> 10Mb shouldn't be a problem for DNS.  Of course there's the issue of what
> else is on the same cable.
>
> There will of course be a few extra milliseconds of latency, but you are
> correct that it shouldn't make a difference.
>
> > As for the "single machine" issue, that would depend. If you're talking
> > about either getting a couple of SCSI disks, putting them on a hardware
> > raid, or getting an additional small server just for the queue, then I
> > think the cost would end up approximately the same. This client doesn't
> > have the cash for a huge upgrade, but something moderate would be okay.
>
> However, getting an extra server will not make things faster; in fact, it
> will probably make things slower (maybe a lot slower).  Faster hard
> drives are what you need!
>
> > BTW, just to clarify for people who are not familiar with qmail, qmail
> > stores outgoing email in a special queue, not in Maildir. Only incoming
> > mail is stored in Maildir. The Maildirs are actually stored on Disk 1
> > (along with the operating system and everything else except the queue).
> > I know Maildir can be put on an NFS disk... BUT I've never heard of
> > anyone putting the mail queue on NFS, so I'm not sure if the file
> > locking issues you mention would pertain to that as well.
>
> For the queue, Qmail creates file names that match Inode numbers (NFS
> doesn't have Inodes).  Qmail also relies on certain link operations being
> atomic and reliable, while on NFS they aren't guaranteed to be atomic,
> and packet loss can cause big reliability problems.
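>
> (Sketch, assuming the default /var/qmail/queue path: each message file's
> name should match the inode number that "ls -i" prints next to it.)
>
>     ls -i /var/qmail/queue/mess/0 | head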
>
> Consider "ln file file2": when the NFS request is sent to the server, the
> server creates the link and returns success. If the return packet is lost
> due to packet corruption, the client re-sends the request; the server then
> notices that file2 exists and returns an error message. The result is that
> the operation succeeded, but the client thinks it failed!
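>
> A rough shell illustration of the kind of double-check NFS-safe code has to
> do instead of trusting the error code (file names are placeholders):
>
>     ln file file2 || {
>         # ln "failed" -- but did the link actually get made on the server?
>         links=`ls -l file | awk '{print $2}'`
>         [ "$links" -ge 2 ] && echo "the link succeeded despite the reported error"
>     }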
>
> There are many other issues with NFS for this type of thing.  NFS is only
> good for data that has simple access patterns (read-only files and simple
> operations like mounting a home directory and editing a file with "vi"),
> and for applications which have been carefully written to work with NFS
> (Maildir programs).
>
> --
> http://www.coker.com.au/bonnie++/     Bonnie++ hard drive benchmark
> http://www.coker.com.au/postal/       Postal SMTP/POP benchmark
> http://www.coker.com.au/projects.html Projects I am working on
> http://www.coker.com.au/~russell/     My home page
>
>


