
Re: performance of AAC-RAID (ICP9087MA)



On Tue, Sep 12, 2006 at 02:17:48PM +0200, Erik Mouw wrote:
> On Tue, Sep 12, 2006 at 11:50:46AM +0200, Raimund Jacob wrote:
> > Erik Mouw wrote:
> > 
> > Hello! And thanks for your suggestions.
> > 
> > > On Fri, Sep 08, 2006 at 05:08:52PM +0200, Raimund Jacob wrote:
> > >> Checking out a largish CVS module is no fun. The data is retrieved via
> > >> cvs pserver from the file server and written back via NFS into my home
> > >> directory. This process is sometimes pretty quick and sometimes blocks
> > >> in between as if the RAID controller has to think about the requests. I
> > >> know this phenomenon only from a megaraid controller, which we
> > >> eventually canned for a pure Linux software RAID (a two-disk mirror). Also,
> > >> compiling in the nfs-mounted home directory is too slow - even on a
> > >> 1000Mbit link.
> > 
> > > Try with a different IO scheduler. You probably have the anticipatory
> > > scheduler, you want to give the cfq scheduler a try.
> > > 
> > >   echo cfq > /sys/block/[device]/queue/scheduler
> > > 
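(FWIW, to see which scheduler is currently active - sda is only an
example device here:

  cat /sys/block/sda/queue/scheduler

The one shown in square brackets is the scheduler in use.)
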
> > > For NFS, you also want to increase the number of daemons. Put the line
> > > 
> > >   RPCNFSDCOUNT=32
> > > 
> > > in /etc/default/nfs-kernel-server .
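
(You can check whether the nfsd threads are actually the bottleneck by
looking at the "th" line on the server:

  grep th /proc/net/rpc/nfsd

The first number is the thread count; if the numbers at the end of that
line keep growing, all threads were busy and more of them should help.)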
> > 
> > Thanks for these hints. In the meantime I was also reading up the
> > NFS-HOWTO on the performance subject. Playing around with the
> > rsize/wsize did not turn up much - it seems they don't really matter in my case.
> 
> In my case it did matter: setting them to 4k (i.e. the CPU page size)
> increased throughput.
> 
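(If someone wants to try that: something along these lines, with the
server name and paths made up:

  mount -t nfs -o rsize=4096,wsize=4096 server:/home /mnt/home

or the equivalent rsize/wsize options in the fstab entry.)
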
> > My largish CVS-module checks out (cvs up -dP actually) in about 1s when
> > I do it locally on the server machine. It also takes about 1s when I
> > check it out on a remote machine but on a local disk. On the same remote
> > machine via NFS it takes about 30s. So NFS is actually the problem here,
> > not the ICP.
> 
> One of the main problems with remote CVS is that it uses /tmp on the
> server. Make sure that is a fast and large disk as well, or tell CVS to
> use another (fast) directory as scratch space.
> 
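(The CVS server honours TMPDIR, and there is also a global -T option,
so in inetd.conf something like this should do it - paths are made up,
untested:

  cvspserver stream tcp nowait root /usr/bin/cvs cvs -f -T /scratch/cvstmp --allow-root=/var/cvs pserver

See cvs(1) for the details.)
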
> > Furthermore I observed this: I ran 'vmstat 1'. Checking out locally
> > shows a 'bo' of about 1MB during the second it takes. During the
> > checkout via NFS there is a sustained 2 to 3 MB 'bo' on the server. So
> > my assumption is that lots of fs metadata gets updated during that 30s
> > (files don't actually change) and due to the sync nature of the mount
> > everything is committed to disk pretty hard (ext3) - and that is what
> > I'm waiting for.
> 
> Mounting filesystems with -o noatime,nodiratime makes quite a
> difference.
> 
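(Both can be flipped on the fly, e.g.:

  mount -o remount,noatime,nodiratime /export/home

with /export/home being wherever the exported filesystem is mounted.)
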
> If you're using ext3 with lots of files in a single directory, make
> sure you're using htree directory indexing. To see if it is enabled:
> 
>   dumpe2fs /dev/whatever
> 
> Look for the "features" line; if it lists dir_index, it is enabled. If
> not, enable it with (this can be done on a mounted filesystem):
> 
>   tune2fs -O dir_index /dev/whatever
> 
> Now all new directories will be created with a directory index. If you
> want to enable it on all directories, unmount the filesystem and run
> e2fsck on it:
> 
>   e2fsck -f -y -D /dev/whatever
> 
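(A quick way to check, with the device name being an example:

  dumpe2fs -h /dev/sda1 2>/dev/null | grep -i features

-h dumps just the superblock, and the 2>/dev/null hides the version
banner dumpe2fs prints to stderr.)
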
> Increasing the journal size can also make a difference, or try putting
> the journal on a separate device (quite invasive, make sure you have a
> backup). See tune2fs(8).
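
(Note that tune2fs can only set the journal size when it creates a
journal, so on an unmounted filesystem that would be something like

  tune2fs -O ^has_journal /dev/whatever
  tune2fs -J size=128 /dev/whatever

for a 128 MB journal, or -J device=... for an external one. Untested
here, so definitely have that backup first.)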

These are all good suggestions for speedups, especially this last one,
but I would think that none of this should really be necessary unless your
load is remarkably high, not just one user doing a cvs checkout.  I would
strace the cvs checkout with timestamps and see where it is waiting.
It seems to me like this has more to do with some configuration snafu
than any of this stuff.
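
Something like this should show where the time goes:

  strace -f -tt -T -o /tmp/cvs.trace cvs up -dP

-tt gives wall-clock timestamps and -T the time spent in each syscall,
so the long waits stick out immediately in the trace file.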

Why are you trying to configure it this way anyway?  Just use the
standard client/server configuration.  You'll probably be glad you did.
And it seems to work a lot faster that way anyway ~:^)
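
I.e. check out onto a local disk, something like this (server name and
paths made up):

  export CVSROOT=:pserver:you@server:/var/cvs
  cvs login
  cd /local/scratch
  cvs checkout module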

> > Here is what I will try next (when people leave the office):
> > 
> > - Mount the exported fs as data=journal - the NFS-HOWTO says this might
> > improve things. I hope this works with remount since reboot is not an
> > option.

I personally would NOT do this.  There is a good reason why none of the
top performing journaling file systems journal data by default.
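
(Also note that ext3 generally refuses to change the data= mode on a
plain remount, so this would likely need an fstab change plus a reboot,
or setting it as a default mount option with tune2fs -o journal_data.)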

> I don't think it makes a difference, I'd rather say it makes things
> worse because it forces all *data* (and not only metadata) through the
> journal.

Eggxacly.

a


