
Re: performance of AAC-RAID (ICP9087MA)



Erik Mouw wrote:

Hello! And thanks for your suggestions.

> On Fri, Sep 08, 2006 at 05:08:52PM +0200, Raimund Jacob wrote:
>> Checking out a largish CVS module is no fun. The data is retrieved via
>> cvs pserver from the file server and written back via NFS into my home
>> directory. This process is sometimes pretty quick and sometimes blocks
>> in between as if the RAID controller has to think about the requests. I
>> know this phenomenon only from a megaraid controller, which we
>> eventually canned for a pure Linux software RAID (2-disk mirror). Also,
>> compiling in the nfs-mounted home directory is too slow - even on a
>> 1000Mbit link.

> Try with a different IO scheduler. You probably have the anticipatory
> scheduler, you want to give the cfq scheduler a try.
> 
>   echo cfq > /sys/block/[device]/queue/scheduler
> 
> For NFS, you also want to increase the number of daemons. Put the line
> 
>   RPCNFSDCOUNT=32
> 
> in /etc/default/nfs-kernel-server .

Thanks for these hints. In the meantime I have also been reading up on
the performance section of the NFS-HOWTO. Playing around with
rsize/wsize did not turn up much - it seems they don't really matter in my case.
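
For the record, this is roughly what I tried on the client (the export
path and the block sizes here are placeholders, not my real setup):

  umount /home
  mount -t nfs -o rsize=32768,wsize=32768 server:/srv/home /home
  grep nfs /proc/mounts    # check which rsize/wsize were actually negotiated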

My largish CVS module checks out (cvs up -dP, actually) in about 1s when
I do it locally on the server machine. It also takes about 1s when I
check it out on a remote machine onto a local disk. On the same remote
machine via NFS it takes about 30s. So NFS is actually the problem here,
not the ICP.
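
(For anyone who wants to compare on their own setup, the test is simply
something like

  time cvs up -dP

run in a checked-out working copy on the respective filesystem.)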

Furthermore, I observed this: I ran 'vmstat 1'. Checking out locally
shows a 'bo' of about 1 MB during the one second it takes. During the
checkout via NFS there is a sustained 'bo' of 2 to 3 MB/s on the server.
So my assumption is that a lot of fs metadata gets updated during those
30s (the files don't actually change) and, due to the sync nature of the
export, everything is committed to disk the hard way (ext3) - and that
is what I'm waiting for.
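
(The export flags can be double-checked on the server with

  exportfs -v    # lists each export with its options, e.g. sync/async, wdelay

in case the sync assumption is wrong.)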

Here is what I will try next (when people leave the office) - rough
commands are sketched after the list:

- Mount the exported fs with data=journal - the NFS-HOWTO says this
might improve things. I hope this works with remount since a reboot is
not an option.

- Try an async NFS export - there is a UPS on the server anyway.

- Try the cfq scheduler and increase RPCNFSDCOUNT even further (I
already have 12 on a UP machine). Given my observations above I don't
expect much here, but it's worth a try.
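
Roughly, with made-up paths and device names (my real export and disk
are obviously different):

  # 1) data=journal on the exported ext3 fs; if remount refuses to change
  #    the data mode, this has to wait until I can umount it
  mount -o remount,data=journal /srv/home

  # 2) async export: change sync to async for the entry in /etc/exports, then
  exportfs -ra

  # 3) cfq and more nfsd threads: set RPCNFSDCOUNT=32 in
  #    /etc/default/nfs-kernel-server, then
  echo cfq > /sys/block/sda/queue/scheduler
  /etc/init.d/nfs-kernel-server restart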

Anyone think one of those is a bad idea? :)

	Raimund

-- 
The solution for efficient customer relationship management.
Find out more: http://www.universal-messenger.de

Pinuts media+science GmbH                 http://www.pinuts.de
Dipl.-Inform. Raimund Jacob               Raimund.Jacob@pinuts.de
Krausenstr. 9-10                          voice : +49 30 59 00 90 322
10117 Berlin                              fax   : +49 30 59 00 90 390
Germany


