
Re: One new nbd-runner project to support the Gluster/Ceph/Azure, etc



On 2019/4/12 16:03, Richard W.M. Jones wrote:
On Fri, Apr 12, 2019 at 01:54:16PM +0800, Xiubo Li wrote:
On 2019/4/12 3:50, Wouter Verhelst wrote:
Hi,

Sorry about the late reply; I recently moved continents, and am still
getting used to my new life...
Hi Wouter

Interesting, and no worries :-)

Thanks very much for your feedback and great ideas.


On Mon, Mar 25, 2019 at 09:01:30PM +0800, Xiubo Li wrote:
Hi ALL,

The NBD is one great project :-)

Currently there are many distributed storage projects, such as Gluster and
Ceph, that are widely used but don't support NBD directly. For example, with
Gluster, if we want to export a volume file as an NBD block device we must
mount the volume first and then use the nbd utils to export it again, which
is a little ugly.

To make this more efficient, we also need a common and generic utility,
something like [1], to support them directly. The good news is that I have
working code with most of the basic pieces in the nbd-runner project [2]. It
is currently hosted under gluster/ and I'd like to request a repository
under [3] to host it.
I'm not necessarily opposed to this, but it would be nice to understand
how your nbd-runner works and how it differs from, say, nbdkit. It seems
to me that they have similar goals.
Yeah, they have similar goals, but there are also many differences, and
there are several limitations that make it hard for us to use the great
nbdkit in some cases.

For nbd-runner we just want it to work as a client/server utility: there
will be one systemd nbd-runner service to handle the backstore stuff, and
the nbd-cli will support the create/delete/map/unmap/list commands. Then
there is no need for users to care about the backstore storage details;
once nbd-runner is started, all they need to focus on is the client side,
and nbd-cli will parse all the options.

The 'create/delete' commands will create/delete the devices/files in the
backstore pools/volumes. The 'map/unmap' commands will handle the
backstore devices/files <--> /dev/nbdX mapping/unmapping. And the 'list'
command will work much like the Linux 'ls' command.
It sounds to me like you'd be better off writing a front end which
manages nbdkit instances.

Nbdkit supports nearly the full NBD protocol, has dozens of plugins
and filters already, and is battle-tested and interoperable with all
the other NBD implementations out there.  Whereas what you've got is a
server which supports a tiny subset of NBD and only Gluster.  What you
do have however is the RPC / front end stuff, so concentrating your
efforts on that and managing systemd/nbdkit seems better to me ...

Nbdkit doesn't have a gluster plugin right now, but that's because we
have tried to avoid duplicating functionality which is already present
in qemu-nbd; it's possible -- indeed easy -- to write a gluster plugin.
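
For illustration only, a gfapi-backed nbdkit plugin could look roughly like
the sketch below. The volume name, server and export path are placeholders,
the error handling is minimal, and a real plugin must loop on short
reads/writes, so treat this as a sketch of the idea rather than a finished
plugin.

/* Sketch of a minimal nbdkit plugin backed by Gluster's libgfapi.
 * "vol", "gluster.example.com" and "/disk.img" are placeholders. */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#include <glusterfs/api/glfs.h>
#include <nbdkit-plugin.h>

#define THREAD_MODEL NBDKIT_THREAD_MODEL_SERIALIZE_ALL_REQUESTS

static glfs_t *fs;
static glfs_fd_t *fd;

/* Connect to the Gluster volume and open the backing file. */
static void *
gfapi_open (int readonly)
{
  fs = glfs_new ("vol");
  if (!fs ||
      glfs_set_volfile_server (fs, "tcp", "gluster.example.com", 24007) < 0 ||
      glfs_init (fs) < 0) {
    nbdkit_error ("cannot connect to the gluster volume");
    return NULL;
  }
  fd = glfs_open (fs, "/disk.img", readonly ? O_RDONLY : O_RDWR);
  if (!fd) {
    nbdkit_error ("glfs_open failed");
    return NULL;
  }
  return fd;
}

static void
gfapi_close (void *handle)
{
  glfs_close (fd);
  glfs_fini (fs);
}

/* The exported device size is the size of the backing file. */
static int64_t
gfapi_get_size (void *handle)
{
  struct stat st;
  if (glfs_fstat (fd, &st) < 0)
    return -1;
  return st.st_size;
}

/* Requests are serialized by the thread model, so seek-then-read/write is
 * safe here; a real plugin must also loop on short transfers. */
static int
gfapi_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
{
  if (glfs_lseek (fd, offset, SEEK_SET) < 0 ||
      glfs_read (fd, buf, count, 0) < 0)
    return -1;
  return 0;
}

static int
gfapi_pwrite (void *handle, const void *buf, uint32_t count, uint64_t offset)
{
  if (glfs_lseek (fd, offset, SEEK_SET) < 0 ||
      glfs_write (fd, buf, count, 0) < 0)
    return -1;
  return 0;
}

static struct nbdkit_plugin plugin = {
  .name     = "gfapi-sketch",
  .open     = gfapi_open,
  .close    = gfapi_close,
  .get_size = gfapi_get_size,
  .pread    = gfapi_pread,
  .pwrite   = gfapi_pwrite,
};

NBDKIT_REGISTER_PLUGIN (plugin)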

I went through the great nbdkit project today; yeah, it is very easy to add new handlers. The nbd-runner project is meant to keep the framework the same as the iSCSI stuff, and we can also very easily add new handlers to it, and we will only focus on supporting the distributed storages, that's all. Then we can make it as simple as possible to integrate into the current OpenShift project.

Not only Gluster needs this; Ceph/Azure are also eager to support NBD. As discussed with them, they also really need the simple create/delete/map/unmap/list, etc. subcommands to handle the related stuff. With this we can also do a client-side daemon to emulate the iscsid utils.

The most important thing is that there is already a management project, gluster-block, which manages tcmu-runner, an iSCSI backend. One of the goals for the nbd-runner project is that gluster-block won't have to change too much to support nbd-runner and the NBD protocol.

So if we add another management layer between nbdkit and gluster-block, it will be a little complicated, because OCS will integrate gluster-block again.

That would look like: OCS --> gluster-block --> some new management utils --> nbdkit, or something like this.


Thanks Rich.
BRs
Xiubo


Rich.

For the Gluster/Ceph/Azure handlers we need to create/map hundreds of
block devices in several storage pools/volumes at any time when needed,
such as in OpenShift when new pods are being created.

We also need an easy utility, like the iscsi utils, to help us remap
the /dev/nbdXX <---> backstore device/file when the nbd-runner
daemon or the node running it is restarted. That means we need to save
the backstore device info into a config file and then reload it when
restarting. This is very similar to what LIO/targetcli does. So we need
a client daemon to do the ping and remap stuff, just like iscsid; it is
very easy to support this in the nbd-runner project, and this is still
to be done.
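
As a purely hypothetical sketch (not nbd-runner's actual code or on-disk
format), persisting the /dev/nbdX <--> backstore mappings and replaying
them after a restart could look something like this:

/* Hypothetical sketch: save and reload nbd device <-> backstore mappings.
 * The file path and "device backstore" line format are invented here. */
#include <stdio.h>

#define MAP_FILE "/etc/nbd-runner/maps.conf"   /* assumed location */

/* Record one mapping, e.g. save_mapping ("/dev/nbd0", "vol1/disk0"). */
static int
save_mapping (const char *nbd_dev, const char *backstore)
{
  FILE *fp = fopen (MAP_FILE, "a");
  if (!fp)
    return -1;
  fprintf (fp, "%s %s\n", nbd_dev, backstore);
  return fclose (fp);
}

/* After a daemon or node restart, replay every saved mapping through a
 * caller-supplied remap callback (a placeholder for the real map call). */
static int
reload_mappings (int (*remap) (const char *nbd_dev, const char *backstore))
{
  char dev[64], store[256];
  FILE *fp = fopen (MAP_FILE, "r");
  if (!fp)
    return -1;
  while (fscanf (fp, "%63s %255s", dev, store) == 2)
    remap (dev, store);
  fclose (fp);
  return 0;
}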

Above all, we want the great NBD to work as easily and simply as the
current iscsi utils do in OpenShift.

We can find more details in the README [2].

Does this make sense?


I might be wrong though ;-)

Now the Gluster handler coding is done, and the Ceph and Azure handlers are in progress.
Cool.
Now I am trying to release this in Fedora.

Thanks.
BRs
Xiubo


[2] https://github.com/gluster/nbd-runner



