
Re: One new nbd-runner project to support the Gluster/Ceph/Azure, etc



On 2019/4/12 16:00, Wouter Verhelst wrote:
On Fri, Apr 12, 2019 at 01:54:16PM +0800, Xiubo Li wrote:
On 2019/4/12 3:50, Wouter Verhelst wrote:
Hi,

Sorry about the late reply; I recently moved continents, and am still
getting used to my new life...
Hi Wouter

Interesting and no worry :-)

Thanks very much for your feedback and great ideas.


On Mon, Mar 25, 2019 at 09:01:30PM +0800, Xiubo Li wrote:
Hi ALL,

The NBD is one great project :-)

Currently there are many distributed storage projects, such as Gluster and
Ceph, that are widely used, but they can't support NBD directly. For example, with
Gluster, if we want to export a volume file as an NBD block device we
must mount the volume first and then use the nbd utils to export it again,
which is a little ugly.

To make this more efficient, we also need a common and generic utility, something
like [1], to support them directly. The good news is that I have working code
with most of the basic things in the nbd-runner project [2]. It is hosted under the gluster/
organization currently and I'd like to request a repository under [3] to host it.
I'm not necessarily opposed to this, but it would be nice to understand
how your nbd-runner works, and is different from, say, nbdkit? It seems
to me that they have similar goals.
Yeah, they have similar goals, but there are also many differences, and
there are several limitations for us when using the great nbdkit in some cases.

For nbd-runner we just want it to work like a client/server utility: there will be
one systemd nbd-runner service to handle the backstore stuff, and the
nbd-cli tool will support the create/delete/map/unmap/list commands. Then there
is no need for users to care about the backstore storage; once
nbd-runner is started, all they need to focus on is the client side, and
nbd-cli will parse all the options.
Cool.

What protocols do the two talk?

Currently nbd-runner is using the RPC protocol between client and server, while nbdkit is using the NBD protocol.

Do they use the NBD negotiation as
defined in the proto.md document, or do they speak some custom thing?

If a client that isn't nbd-cli (e.g., qemu) tried to connect to
nbd-runner, would it be able to, or would that not be possible?

For now the nbd-runner project only supports nbd-cli; in the near future we can add or switch to the NBD protocol. Then we can support all the existing nbd tools.


The 'create/delete' commands will help to create/delete the devices/files in
the backstore pools/volumes. The 'map/unmap' commands will do the
backstore devices/files <--> /dev/nbdX mapping/unmapping. And the
'list' command works much like the Linux 'ls' command.
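Just to illustrate the intended workflow, a session might look roughly like the following; the exact subcommand syntax, flags, and names here are my assumptions, not taken from the nbd-runner sources:

```shell
# Hypothetical nbd-cli session (command names and flags are assumptions):
# create a 10G file-backed block in a Gluster volume managed by nbd-runner
nbd-cli create gluster --volume myvol --path /block0.img --size 10G

# map it to a local NBD device node
nbd-cli map gluster --volume myvol --path /block0.img

# list the exported devices and their /dev/nbdX mappings
nbd-cli list

# unmap and delete when done
nbd-cli unmap /dev/nbd0
nbd-cli delete gluster --volume myvol --path /block0.img
```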

For the Gluster/Ceph/Azure handlers we need to create/map hundreds of block
devices in several storage pools/volumes at any time when needed, such as in
OpenShift when new pods are being created.

We also need an easy utility, like the iscsi utils, to help us remap the
/dev/nbdXX <---> backstore device/file when the nbd-runner daemon or the
node running nbd-runner is restarted.
nbd-client actually has that ability:

- nbdtab allows you to predefine device nodes
- the -persist option is meant to reconnect a device when the connection
   drops (however, this ability does not work very well, because the
   kernel API is not well defined).
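For reference, an nbdtab entry pairs a device name with a server and an export name, plus optional options; roughly like this (the hostname and export name below are placeholders):

```
# /etc/nbdtab: <device> <host> <export-name> [options]
nbd0 nbdserver.example.com myexport persist
```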

Yeah, it is. I have gone through the framework and some core code of the nbd utils and nbd.ko.

I have also found some new bugs in nbd.ko, but haven't had time to debug and fix them yet; I must finish this project before I can engage in that.

For now, as a workaround, we can emulate it by using a state check plus unmap/map for the remap stuff.


The same is currently not true for the server. It is one of my long-term
goals to be able to change server configuration at run time, without a
restart. Currently you can only add devices, not remove them -- it's
something I want to work on, but I'm not paid to do this, so don't have
time to do the refactoring that nbd-server urgently needs...

Later we can make the nbd-runner and the nbd-client projects work together if possible.

Thanks,
BRs
Xiubo
That means we need to save the backstore device info into a config
file and then reload it when restarting. This is very similar to what
LIO/targetcli does. So we need one client daemon to do the ping and
remap stuff, just like iscsid; it is very easy to support this in the
nbd-runner project, and this is to be done.
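As a rough sketch of that save/reload step, in Python rather than the project's C, and with field names that are my assumptions (this is not nbd-runner's actual on-disk format):

```python
import json
import os


def save_mappings(path, mappings):
    """Persist the backstore <-> /dev/nbdX mappings to a config file
    so they can be restored after a daemon or node restart
    (targetcli-style saveconfig)."""
    with open(path, "w") as f:
        json.dump(mappings, f, indent=2)


def load_mappings(path):
    """Reload the saved mappings at startup; an empty dict means
    there is nothing to remap."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)


# Example: one Gluster-backed block mapped to /dev/nbd0 (illustrative values)
mappings = {
    "/dev/nbd0": {
        "handler": "gluster",
        "volume": "myvol",
        "path": "/block0.img",
    }
}
```

On restart, the daemon would call load_mappings() and redo the map step for each saved entry.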

Above all, we want the great NBD to work as easily and simply as the current
iscsi utils do in OpenShift.
Sure. iSCSI is kind of complicated when compared to NBD though ;-)


