
[Nbd] nbd for large scale diskless linux



Hi!! I just set up an nbd server and booted a diskless Linux client from
it. I was quite impressed by the performance! It ran very stably
(I tried this four or five years ago and switched back to nfs) even under
heavy load (cat /dev/nb0 > /dev/null on the client) - I got a saturated
100Mbit Ethernet link (99Mbit/s) and a well-utilized gigabit link
(300Mbit/s). The load on the server (dual AMD64 2GHz, 2GByte RAM) was
moderate (15%), and the load on the client (IBM X41 Centrino laptop with
512MB) was no problem at all ...
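For reference, the setup was roughly along these lines (image path, hostname
and port are only placeholders, and this uses the old port-based nbd-server
syntax; newer versions use a config file with named exports):

	# on the server: export a filesystem image read-only on port 2000
	# (path and port are just examples)
	nbd-server 2000 /srv/images/root.img -r

	# on the client: attach the export and run the throughput test
	nbd-client fileserver 2000 /dev/nb0
	cat /dev/nb0 > /dev/null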

In my standard environment I am using NFSv3 over TCP, but it is rather slow
with small files. The RPC overhead costs time at the very least and
generates a lot of traffic (even for files I have just loaded - kernel
caching does not seem to work optimally in conjunction with NFS).

Now the question: how would nbd-server scale with 20 to 100 clients on
one server (no big deal for the kernel nfsd)? I had a lot of trouble
with the old user-space nfsd consuming nearly 100% of the CPU under heavy
load, so I might run into the same problem serving 50+ clients with
nbd-server!? Has anyone here successfully exported a block device
read-only to a larger number of clients?

Next question of interest to me - poor man's high availability: would it
be possible to have two identical servers export the same partition with
the same content (a dd image copied over the network) and put them into a
raid1 group on the clients, so that if one of the servers crashes the
client is not doomed to die too? The task is not to sync writes (the
clients will see their filesystems read-only), but just to get redundancy
in case one crucial device fails ...
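
On the client side I imagine something like this (an untested sketch -
server names, the md device and the mount point are placeholders, and I do
not know yet how gracefully md handles read-only members):

	# attach the identical read-only exports from two independent servers
	nbd-client server1 2000 /dev/nb0
	nbd-client server2 2000 /dev/nb1

	# build a superblock-less raid1 over the two network devices
	# (--build avoids writing md superblocks, which a read-only export
	#  would not allow anyway)
	mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/nb0 /dev/nb1

	# mount read-only; md should kick out a failed mirror and carry on
	mount -o ro /dev/md0 /mnt/nbdroot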

There is another project mentioned on the nbd homepage - enbd. But
unfortunately that driver is not part of the standard kernel. Would it
serve my needs regarding raid1 failover better? (I have read some remarks
on the net about it being less reliable ...)

I like the idea of networked low-level block devices very much!

Ciao,
	Dirk
