Re: performance of AAC-RAID (ICP9087MA)
On Tue, Sep 12, 2006 at 11:50:46AM +0200, Raimund Jacob wrote:
> Erik Mouw wrote:
> Hello! And thanks for your suggestions.
> > On Fri, Sep 08, 2006 at 05:08:52PM +0200, Raimund Jacob wrote:
> >> Checking out a largish CVS module is no fun. The data is retrieved via
> >> cvs pserver from the file server and written back via NFS into my home
> >> directory. This process is sometimes pretty quick and sometimes blocks
> >> in between as if the RAID controller has to think about the requests. I
> >> know this phenomenon only from a megaraid controller, which we
> >> eventually canned for a pure linux software raid (2 disks mirror). Also,
> >> compiling in the nfs-mounted home directory is too slow - even on a
> >> 1000Mbit link.
> > Try with a different IO scheduler. You probably have the anticipatory
> > scheduler, you want to give the cfq scheduler a try.
> > echo cfq > /sys/block/[device]/queue/scheduler
> > For NFS, you also want to increase the number of daemons. Put the line
> > RPCNFSDCOUNT=32
> > in /etc/default/nfs-kernel-server .
> Thanks for these hints. In the meantime I was also reading up the
> NFS-HOWTO on the performance subject. Playing around with the
> rsize/wsize did not turn up much - seems they don't really matter in my case.
In my case it did matter: setting them to 4k (ie: the CPU pagesize) made a
noticeable difference.
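For example, something like this in the client's /etc/fstab pins them to 4k
(server name, export and mountpoint are just placeholders):
fileserver:/export/home  /home  nfs  rw,hard,intr,rsize=4096,wsize=4096  0  0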
> My largish CVS-module checks out (cvs up -dP actually) in about 1s when
> I do it locally on the server machine. It also takes about 1s when I
> check it out on a remote machine but on a local disk. On the same remote
> machine via NFS it takes about 30s. So NFS is actually the problem here,
> not the ICP.
One of the main problems with remote CVS is that it uses /tmp on the
server. Make sure that is a fast and large disk as well, or tell CVS to
use another (fast) directory as scratch space.
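If the pserver is started from inetd/xinetd, you can point it at a different
scratch directory with cvs' global -T option, roughly like this in the
xinetd service entry (paths are examples, adjust to your setup):
server      = /usr/bin/cvs
server_args = -f -T /var/scratch/cvs --allow-root=/var/lib/cvs pserver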
> Furthermore I observed this: I ran 'vmstat 1'. Checking out locally
> shows a 'bo' of about 1MB during the second it takes. During the
> checkout via NFS there is a sustained 2 to 3 MB 'bo' on the server. So
> my assumption is that lots of fs metadata get updated during that 30s
> (files don't actually change) and due to the sync nature of the mount
> everything is committed to disk pretty hard (ext3) - and that is what
> I'm waiting for.
Mounting filesystems with -o noatime,nodiratime makes quite a difference,
since it avoids the atime metadata update on every file access.
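For example (the mountpoint is a placeholder), the options can be applied on
the fly with a remount, and added to /etc/fstab to make them permanent:
mount -o remount,noatime,nodiratime /home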
If you're using ext3 with lots of files in a single directory, make
sure you're using htree directory indexing. To see if it is enabled, list
the filesystem features:
tune2fs -l /dev/whatever
Look for the "Filesystem features:" line, if it has dir_index, it is
enabled. If not, enable it with (can be done on a mounted filesystem):
tune2fs -O dir_index /dev/whatever
Now all new directories will be created with a directory index. If you
want to enable it on all directories, unmount the filesystem and run
e2fsck on it:
e2fsck -f -y -D /dev/whatever
Increasing the journal size can also make a difference, or try putting
the journal on a separate device (quite invasive, make sure you have a
backup). See tune2fs(8).
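A rough sketch of growing the journal (the filesystem has to be unmounted,
and the 128 MB size is just an example):
umount /dev/whatever
tune2fs -O ^has_journal /dev/whatever
tune2fs -J size=128 /dev/whatever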
> Here is what I will try next (when people leave the office):
> - Mount the exported fs as data=journal - the NFS-HOWTO says this might
> improve things. I hope this works with remount since reboot is not an option.
I don't think it makes a difference. I'd rather say it makes things worse,
because it forces all *data* (and not only metadata) through the journal.
> - Try an async nfs export - There is an UPS on the server anyway.
async indeed makes it faster.
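For example, an exports entry like this (path and network are placeholders),
activated with "exportfs -ra":
/home  192.168.1.0/24(rw,async,no_subtree_check)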
> - Try the cfq scheduler and even more increased RPCNFSDCOUNT thing (I
> have 12 already on an UP machine). Due to my observations I don't expect
> much here but it's worth a try.
It did make a difference over here, that's why I increased it to 32.
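A quick way to see whether the daemons are the bottleneck is the "th" line
in the server's NFS statistics:
grep ^th /proc/net/rpc/nfsd
The second number counts how often all threads were in use at once, and the
last few histogram buckets show how much time thread usage was close to
100%; if those are large, more threads should help.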
> Anyone think one of those is a bad idea? :)
I only think data=journal is a bad idea.
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands