Re: performance of AAC-RAID (ICP9087MA) - NFS actually
Andrew Sharp wrote:
> On Tue, Sep 12, 2006 at 02:17:48PM +0200, Erik Mouw wrote:
>> On Tue, Sep 12, 2006 at 11:50:46AM +0200, Raimund Jacob wrote:
>>> My largish CVS-module checks out (cvs up -dP actually) in about 1s when
>>> I do it locally on the server machine. It also takes about 1s when I
>>> check it out on a remote machine but on a local disk. On the same remote
>>> machine via NFS it takes about 30s. So NFS is actually the problem here,
>>> not the ICP.
>> One of the main problems with remote CVS is that it uses /tmp on the
>> server. Make sure that is a fast and large disk as well, or tell CVS to
>> use another (fast) directory as scratch space.
ok. /tmp is fast enough. it's only NFS performance i have problems with.
raid -> cvs pserver -> local disk is pretty fast.
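for the record, CVS takes a global -T option (and honours $TMPDIR) to move its server-side scratch space somewhere fast. a sketch, assuming pserver is started from inetd and /var/fast-tmp is a hypothetical fast local directory:

```shell
# /etc/inetd.conf (one line) -- hypothetical paths, adjust to taste;
# -T is a global cvs option and must come before the 'pserver' command
cvspserver stream tcp nowait root /usr/bin/cvs cvs -T /var/fast-tmp --allow-root=/var/cvsroot pserver

# for ssh/local access the environment variable does the same per user:
export TMPDIR=/var/fast-tmp
```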
>>> Furthermore I observed this: I ran 'vmstat 1'. Checking out locally
>>> shows a 'bo' of about 1MB during the second it takes. During the
>>> checkout via NFS there is a sustained 2 to 3 MB 'bo' on the server. So
>>> my assumption is that lots of fs metadata get updated during that 30s
>>> (files dont actually change) and due to the sync nature of the mount
>>> everything is committed to disk pretty hard (ext3) - and that is what
>>> I'm waiting for.
>> Mounting filesystems with -o noatime,nodiratime makes quite a difference.
yeah, i guess so. but on this fs i want to keep my atimes.
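for anyone who doesn't need atimes: it's just a mount option. a minimal /etc/fstab sketch (device and mount point are placeholders):

```shell
# /etc/fstab -- noatime suppresses atime updates on reads,
# nodiratime does the same for directory lookups
/dev/sda3  /home  ext3  defaults,noatime,nodiratime  0  2

# or flip it on a live system without a reboot:
mount -o remount,noatime,nodiratime /home
```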
>> If you're using ext3 with lots of files in a single directory, make
>> sure you're using htree directory indexing. To see if it is enabled:
>> dumpe2fs /dev/whatever
>> Look for the "features" line, if it has dir_index, it is enabled. If
>> not, enable it with (can be done on a mounted filesystem):
>> tune2fs -O dir_index /dev/whatever
>> Now all new directories will be created with a directory index. If you
>> want to enable it on all directories, unmount the filesystem and run
>> e2fsck on it:
>> e2fsck -f -y -D /dev/whatever
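a compact way to run the check Erik describes (device name is a placeholder):

```shell
# does the fs already have htree indexing? look for 'dir_index' here:
dumpe2fs -h /dev/sda3 2>/dev/null | grep -i features

# enable it for newly created directories (safe on a mounted fs):
tune2fs -O dir_index /dev/sda3

# rebuild indexes on all existing directories (fs must be unmounted):
e2fsck -f -y -D /dev/sda3
```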
that's a nice hint. i'll do that next time i reboot (not anytime soon :).
>> Increasing the journal size can also make a difference, or try putting
>> the journal on a separate device (quite invasive, make sure you have a
>> backup). See tune2fs(8).
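a hedged sketch of the tune2fs incantations for the journal tweaks, assuming placeholder devices and an unmounted, clean filesystem (and that backup):

```shell
# grow the internal journal to 128 MB: drop the old one, recreate larger
tune2fs -O ^has_journal /dev/sda3
tune2fs -J size=128 /dev/sda3

# external journal variant: dedicate a small separate device to it
mke2fs -O journal_dev /dev/sdb1
tune2fs -O ^has_journal /dev/sda3
tune2fs -J device=/dev/sdb1 /dev/sda3
```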
> These are all good suggestions for speedups, especially this last one,
> but I would think that none of this should really be necessary unless your
> load is remarkably high, not just one user doing a cvs check out. I would
> strace the cvs checkout with timestamps and see where it is waiting.
> It seems to me like this has more to do with some configuration snafu
> than any of this stuff.
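Andrew's suggestion in command form: -tt gives microsecond timestamps, -T appends the time spent inside each syscall, which shows exactly where the checkout stalls:

```shell
# trace the checkout with timestamps and per-syscall durations
strace -f -tt -T -o /tmp/cvs.trace cvs up -dP

# then pull out the slowest calls (durations look like <0.123456>)
sort -t'<' -k2 -rn /tmp/cvs.trace | head
```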
as i described, it's all NFS's fault.
> Why are you trying to configure it this way anyway? Just use the
> standard client/server configuration. You'll probably be glad you did.
> And it seems to work a lot faster that way anyway ~:^)
well, shared /home among multiple unix workstations is not that uncommon.
>>> Here is what I will try next (when people leave the office):
>>> - Mount the exported fs as data=journal - the NFS-HOWTO says this might
>>> improve things. I hope this works with remount since reboot is not an
>>> option.
> I personally would NOT do this. There is a good reason why none of the
> top performing journaling file systems journal data by default.
>> I don't think it makes a difference, I'd rather say it makes things
>> worse because it forces all *data* (and not only metadata) through the
>> journal.
ok, i see. i found that in the NFS-HOWTO and probably got it wrong. the
manpage wasn't really enlightening either. your comments make sense, so i
won't even try.
so, after having another look at our UPS and at the other folks i
decided to just async-export the fs and - of course - that solved all
problems. the very same cvs checkout takes 2 to 4 seconds now (not 30).
i twiddled the rsize/wsize but the effect is below the noise; seems our
LAN is as good as it gets.
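for completeness, the relevant bits of configuration (paths and addresses are placeholders; note that async trades crash safety for speed, which is why the UPS got a second look first):

```shell
# /etc/exports -- async lets the server reply before data hits disk
/home  192.168.1.0/24(rw,async,no_subtree_check)

# reload the export table without restarting nfsd:
exportfs -ra

# client side: the rsize/wsize options i tried (left at defaults in the end)
mount -t nfs -o rsize=8192,wsize=8192 server:/home /home
```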
bottom line: the ICP behaves as it should, as far as one can tell.
performance problems were due to NFS and were solved by exporting async.
local fs optimizations on the server and NFS client options are still to
be tuned, but everything runs at acceptable speed now.
thanks for all the suggestions, i learned something in this thread.
Pinuts media+science GmbH http://www.pinuts.de
Dipl.-Inform. Raimund Jacob Raimund.Jacob@pinuts.de
Krausenstr. 9-10 voice : +49 30 59 00 90 322
10117 Berlin fax : +49 30 59 00 90 390