
Re: [Nbd] nbd for large scale diskless linux



On Fri, Oct 07, 2005 at 11:31:12PM +0200, dsuchod wrote:
> 
> Hi!! I just set up an nbd server and booted a diskless Linux client from
> it. I was quite impressed by the performance! It ran very stably
> (I tried four or five years ago and switched back to nfs), even under huge
> load (cat /dev/nb0 > /dev/null on the client): I got a saturated 100Mbit
> Ethernet link (99Mbit/s) and a well-utilized gigabit link (300Mbit/s). The
> load on the server (dual AMD64 2GHz, 2GB RAM) was moderate (15%), and the
> load on the client (IBM X41 Centrino laptop with 512MB) was no problem at all ...

Great! That would mean the kernel-space lockups have finally been
resolved -- there used to be issues under such high loads.

What kernel were you using on the client?

> In my standard environment I'm using NFSv3 over TCP, but it is rather slow
> with small files. The RPC overhead costs time at the very least and
> generates a lot of traffic (even on files I have just loaded; kernel
> caching does not seem to work optimally in conjunction with NFS).
> 
> Now the question: how would nbd-server scale with 20 up to 100
> clients on one server (no big deal for the kernel nfsd)?

I haven't tried, but...

nbd-server currently handles multiple clients with a fork-per-client
scheme. While this works, it's not the best option performance-wise.
There shouldn't be much of an issue if you're using "only" 50 clients
(they all keep their TCP connection open at all times, so there's no
danger of a "thundering herd" problem), but it might not scale all that
well.

I'm in the process of improving nbd-server to better cope with
scalability issues, but I have to say I was a bit put off by the
fact that the kernel couldn't handle sustained throughput until
(apparently) recently...

> I had a lot of trouble with the late user-space nfsd consuming nearly
> 100% of the CPU under heavy load, so I might have the same trouble serving
> 50+ clients with nbd-server!?!?

Not likely. An nfsd gets one request per file; every one of those would
seem to require about the same amount of processing from nfsd as is the
case for opening an NBD connection.

And once your NBD connection is open, all the server needs to do to
satisfy a request from a client is to
* Read in a TCP packet
* Check whether this is a read or a write request
* Copy the data from the packet to disk, or from disk to a (new)
  network packet
* Send a packet back with confirmation that the write succeeded, or
  with the data in the case of a read

Plus, of course, some error checking.
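
To make those steps concrete, here's a rough sketch of one such
request/reply round trip, using the classic NBD wire format (request:
magic, type, handle, offset, length; reply: magic, error, handle).
Treat it as an illustration rather than the actual nbd-server code;
read_all() is my own helper, and the error handling really is cut to
the bone:

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <endian.h>             /* be64toh(); glibc */
    #include <arpa/inet.h>          /* ntohl(), htonl() */

    #define NBD_REQUEST_MAGIC       0x25609513
    #define NBD_REPLY_MAGIC         0x67446698
    #define NBD_CMD_READ            0
    #define NBD_CMD_WRITE           1

    struct nbd_request {            /* wire format, big-endian */
            uint32_t magic;
            uint32_t type;
            char     handle[8];
            uint64_t from;
            uint32_t len;
    } __attribute__((packed));

    struct nbd_reply {
            uint32_t magic;
            uint32_t error;
            char     handle[8];
    } __attribute__((packed));

    /* TCP is a byte stream, not packets, so loop until the whole
     * structure (or payload) has arrived. */
    static int read_all(int sock, void *p, size_t n)
    {
            char *c = p;
            while (n > 0) {
                    ssize_t r = read(sock, c, n);
                    if (r <= 0)
                            return -1;
                    c += r;
                    n -= (size_t)r;
            }
            return 0;
    }

    /* One round trip on an already-negotiated session; fd is the
     * exported file or device, sock the client connection. */
    static int handle_one_request(int sock, int fd)
    {
            struct nbd_request req;
            struct nbd_reply reply;
            static char buf[65536];
            uint32_t len;
            uint64_t from;

            /* Step 1: read in the request */
            if (read_all(sock, &req, sizeof(req)) < 0)
                    return -1;      /* client went away */
            if (ntohl(req.magic) != NBD_REQUEST_MAGIC)
                    return -1;      /* protocol error */
            len = ntohl(req.len);
            from = be64toh(req.from);
            if (len > sizeof(buf))
                    return -1;      /* keep the sketch simple */

            reply.magic = htonl(NBD_REPLY_MAGIC);
            reply.error = 0;
            memcpy(reply.handle, req.handle, sizeof(reply.handle));

            /* Step 2: check whether it's a read or a write */
            switch (ntohl(req.type)) {
            case NBD_CMD_WRITE:     /* step 3a: packet -> disk */
                    if (read_all(sock, buf, len) < 0)
                            return -1;
                    if (pwrite(fd, buf, len, (off_t)from) != (ssize_t)len)
                            reply.error = htonl(EIO);
                    /* step 4: confirm the write */
                    return write(sock, &reply, sizeof(reply)) < 0 ? -1 : 0;
            case NBD_CMD_READ:      /* step 3b: disk -> new packet */
                    if (pread(fd, buf, len, (off_t)from) != (ssize_t)len)
                            reply.error = htonl(EIO);
                    /* step 4: send the reply header, then the data */
                    if (write(sock, &reply, sizeof(reply)) < 0)
                            return -1;
                    if (!reply.error && write(sock, buf, len) < 0)
                            return -1;
                    return 0;
            default:
                    reply.error = htonl(EINVAL);
                    return write(sock, &reply, sizeof(reply)) < 0 ? -1 : 0;
            }
    }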

> Has anyone on this list tried to export a blockdev read-only to a
> larger number of clients successfully?

If your block device is small enough and your memory large enough, this
should not be problematic at all -- everything will simply remain in the
page cache, and the nbd-server processes will serve it out rather fast.

Note, however, that nbd-server will not enforce read-only policies --
you need to do that yourself, somehow.
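
If you go the do-it-yourself route, two obvious layers of defence come
to mind (hypothetical, plugging into the dispatch sketch earlier in
this mail -- reject_write() is my own invention): open the export
O_RDONLY so no write can succeed at the file level, and refuse
NBD_CMD_WRITE before it ever touches the disk:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>

    /* First line of defence: even if a write request slips through,
     * pwrite() on a descriptor opened O_RDONLY will fail. */
    static int open_export_readonly(const char *path)
    {
            return open(path, O_RDONLY);
    }

    /* Second line: answer NBD_CMD_WRITE with EPERM instead of
     * touching the disk. The payload still has to be drained from
     * the socket so the byte stream stays in sync for the next
     * request. Reuses struct nbd_reply and read_all() from the
     * earlier sketch. */
    static int reject_write(int sock, struct nbd_reply *reply,
                            uint32_t len, char *buf)
    {
            reply->error = htonl(EPERM);
            if (read_all(sock, buf, len) < 0)
                    return -1;
            return write(sock, reply, sizeof(*reply)) < 0 ? -1 : 0;
    }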

> Next question of interest to me -- poor man's high availability: would
> it be possible to put two identical servers, exporting the same partition
> with the same content (dd image over the network), into a raid1 group on
> the clients, so that if one of the servers crashes the client is not
> doomed to die too? The task is not to sync writes (the clients will see
> their filesystems read-only), but just to get redundancy in case one
> crucial device fails ...

There've even been people using RAID1 root devices over NBD. There's a
link to that on the NBD home page.

> There is another project mentioned on the nbd homepage -- enbd. But
> unfortunately this device is not part of the standard kernel. Would it
> serve my requirements regarding raid1 failover better? (I read some
> remarks about lower reliability on the net ...)

enbd is something more complex and somewhat different from NBD.

-- 
The amount of time between slipping on the peel and landing on the
pavement is precisely one bananosecond


