
Re: [Nbd] [PATCH 1/1] NBD: make nbd default to deadline I/O scheduler



On Feb 8, 2008 1:41 PM, Paul Clements <paul.clements@...124...> wrote:
> Andrew Morton wrote:
> > On Fri, 8 Feb 2008 09:33:41 -0800 Randy Dunlap <randy.dunlap@...57...> wrote:
> >
> >> On Fri, 08 Feb 2008 11:47:42 -0500 Paul Clements wrote:
> >>
> >>> There have been numerous reports of problems with nbd and cfq. Deadline
> >>> gives better performance for nbd, anyway, so let's use it by default.
> >
> > Please define "problems".  If it's just "slowness" then we can live with
> > that, but I'd hope that Jens is aware and that it's understood.
> >
> > If it's "hangs" or "oopses" then we panic.
>
> The two problems I have experienced (which may already be fixed):
>
> 1) nbd hangs with cfq on RHEL 5 (2.6.18) -- this may well have been fixed

It has been fixed in RHEL5 (but not yet released AFAIK):
https://bugzilla.redhat.com/show_bug.cgi?id=241540#c43
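
For completeness, the scheduler can also be switched per-device at
runtime through sysfs, without waiting on a change to the in-kernel
default. A minimal user-space sketch (assuming the device is nbd0;
error handling trimmed to the basics):

/*
 * Sketch, not part of the patch: select the deadline I/O scheduler
 * for one nbd device at runtime via sysfs. Assumes the device is
 * nbd0; adjust the path for other devices.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/nbd0/queue/scheduler";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	/* Writing a scheduler's name selects it for this queue. */
	if (fputs("deadline\n", f) == EOF) {
		perror("fputs");
		fclose(f);
		return 1;
	}
	return fclose(f) ? 1 : 0;
}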

On a slightly related performance note: I'm seeing that for NBD
devices both max_hw_sectors_kb and max_sectors_kb are 127. If I raise
the limit to 256 KB with the following patch, I see a 25% improvement
in overall throughput:

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 5def9c5..ed63e2f 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -764,6 +764,7 @@ static int __init nbd_init(void)
                        put_disk(disk);
                        goto out;
                }
+               blk_queue_max_sectors(disk->queue, 512);
        }

        if (register_blkdev(NBD_MAJOR, "nbd")) {
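
For reference on the units (my reading, worth double-checking):
blk_queue_max_sectors() takes 512-byte sectors, so 512 sectors is the
256 KB figure above, versus the 127 KB default. A quick user-space
sketch to read the limits back after applying the patch (assumes the
device is nbd0):

/*
 * Sanity check: read back the queue limits the patch raises.
 * Assumes the device is nbd0; both attributes are reported in KB.
 */
#include <stdio.h>

static void show(const char *attr)
{
	char path[128];
	char buf[32];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/nbd0/queue/%s", attr);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%s = %s", attr, buf);
	fclose(f);
}

int main(void)
{
	show("max_hw_sectors_kb");
	show("max_sectors_kb");
	return 0;
}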

Any chance we can take steps to keep NBD from being artificially slow?
Any other recommendations for improving NBD throughput?

regards,
Mike


