
Re: [Nbd] Performance numbers?



Hi!

> > Does anyone have any performance numbers on nbd vs things like nfs,  
> > san vendor offerings, etc?
> 
> Linux magazine did an article about netbooting diskless clients a few
> years ago, and that has some comparative numbers between NFS, NBD, and a
> bunch of other variants of NBD. I'm offline right now and don't have the
> URL with me, but there's a link on <http://nbd.sf.net/>. Other than
> that, I'm not aware of any performance comparisons; but my gut feeling
> is that NBD is probably faster, since the protocol is so extremely
> simple and low overhead.
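If you want numbers for your own hardware, a crude baseline is easy to get with stock nbd-server/nbd-client; the port, paths and device names below are only examples, not from any setup discussed here:

```shell
# Export a file as a block device (classic nbd-server invocation):
nbd-server 2000 /srv/export.img

# Attach it on the client side:
nbd-client someserver 2000 /dev/nbd0

# Crude sequential-read benchmark through the NBD device:
dd if=/dev/nbd0 of=/dev/null bs=1M count=1024
```

That only measures streaming reads, of course; for boot-like workloads you would want something with more random I/O.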

I was the one who did some tests for that article :) But the setup was quite
special: running NFS (read-only) vs. NBD/SquashFS (read-only because of
SquashFS). The NBD/SquashFS setup produced about half to two thirds of the
traffic of NFS (simple machine bootups) and was a few seconds faster at
startup. Later on, another student wrote DNBD2 (a special read-only block
device using UDP) which offers failover and load balancing and was faster on
Gigabit links. For some reason we were not able to saturate a gigabit
Ethernet link with NBD (we never investigated that too deeply). For some
tests you might want to have a glance at:

  http://www.ks.uni-freiburg.de/download/bachelorarbeit/SS07/06-07-dnbd2-dileo

If you would like to try out DNBD2, I can find the link to a patched version
for newer kernels.
 
> > How about information on failover of an nbd 'server' from one machine  
> > to another?
> 
> Why the quotes?
> 
> I'm not entirely sure on what you mean with failover. If you mean having
> an NBD client connected to one machine automatically reconnecting to
> another when its server goes down, then there currently is no support
> for that. A while back there was talk about having the kernel module
> block access to the block device until the client exits; I don't know
> what happened to that, but if it got merged, then it should be
> reasonably straightforward to implement failover as part of nbd-client.

We have that here for the read-only case:
  (DNBD2 -> http://openslx.org/trac/de/openslx/browser/contrib/dnbd2)

> Having said that, it is perfectly possible to create a failover cluster
> with RAID-over-NBD and heartbeat. To do that, you'd run a software RAID1
> over a local block device on the one hand and an NBD device on the
> other; the other machine would run nbd-server, and you'd switch places
> when heartbeat tells you that the machine running the RAID1 is dead.
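For reference, that RAID-over-NBD idea can be sketched roughly like this; the
device names and the host "standby" are made up for the example:

```shell
# On the primary: import the disk exported by the standby machine ...
nbd-client standby 2000 /dev/nbd0

# ... and mirror the local partition with it in a software RAID1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/nbd0

# On failover, heartbeat's resource script would assemble and run the
# degraded array on the standby from its local copy:
mdadm --assemble --run /dev/md0 /dev/sdb2
```

The writes always hit both halves of the mirror, so the standby's copy stays current as long as the NBD link is up.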

DNBD2 is able to monitor a number of different servers and pick the fastest
one for service. Quite nice in larger setups.

Good night! Dirk


