
Re: One new nbd-runner project to support the Gluster/Ceph/Azure, etc



On 2019/4/12 3:50, Wouter Verhelst wrote:
Hi,

Sorry about the late reply; I recently moved continents, and am still
getting used to my new life...

Hi Wouter

Interesting, and no worries :-)

Thanks very much for your feedback and great ideas.


On Mon, Mar 25, 2019 at 09:01:30PM +0800, Xiubo Li wrote:
Hi ALL,

NBD is a great project :-)

Currently there are many distributed storage projects, such as Gluster and
Ceph, that are widely used but do not support NBD directly. For example, with
Gluster, if we want to export a volume file as an NBD block device we must
mount the volume first and then use the nbd utils to export it again, which
is a little ugly.

To make this more efficient, we also need a common and generic utility,
something like [1], to support them directly. The good news is that I have
working code with most of the basic things in the nbd-runner project [2]. It
is hosted under gluster/ currently, and I'd like to request a repository
under [3] to host it.
I'm not necessarily opposed to this, but it would be nice to understand
how your nbd-runner works, and how it is different from, say, nbdkit? It
seems to me that they have similar goals.

Yeah, they have similar goals, but there are also many differences, and there are several limitations that make it hard for us to use the great nbdkit in some cases.

For nbd-runner we want it to work like a client/server utility: there will be one systemd nbd-runner service to handle the backstore side, and the nbd-cli tool will support the create/delete/map/unmap/list commands. Then there is no need for users to care about the backstore storage details; once the nbd-runner service is up, all they need to focus on is the client side, and nbd-cli will parse all the options.
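On the server side that should be just a matter of enabling the service (the
unit name below is assumed from the project name):

    systemctl enable --now nbd-runner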

The 'create/delete' commands will create/delete the devices/files in the backstore pools/volumes. The 'map/unmap' commands will handle the backstore devices/files <--> /dev/nbdX mapping/unmapping. And the 'list' command will work much like the Linux 'ls' command, listing the exported devices.
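As a rough sketch of the intended lifecycle (the exact nbd-cli syntax here is
illustrative, not final; see the README [2] for the real options):

    # create a 10G backstore file in the Gluster volume 'myvol'
    nbd-cli create myvol/disk0.img 10G
    # map it to a local NBD device, e.g. /dev/nbd0
    nbd-cli map myvol/disk0.img
    # list the backstores and their /dev/nbdX mappings
    nbd-cli list
    # and tear it down again
    nbd-cli unmap /dev/nbd0
    nbd-cli delete myvol/disk0.img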

For the Gluster/Ceph/Azure handlers we need to create/map hundreds of block devices across several storage pools/volumes at any time, on demand, such as in OpenShift when new pods are being created.

We also need an easy utility, like the iSCSI utils, to help us remap /dev/nbdXX <---> backstore device/file when the nbd-runner daemon, or the node running it, is restarted. That means we need to save the backstore device info into a config file and reload it when restarting; this is very similar to what LIO/targetcli does. So we need a client-side daemon to do the ping and remap work, just like iscsid does. It is easy to support this in the nbd-runner project, and this is still to be done.
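As a made-up sketch of that restart flow (the save-file path and the replay
command are hypothetical; only the save-and-replay idea comes from the above):

    # on each create/map, nbd-runner records the mapping, e.g. in
    # /etc/nbd-runner/saveconfig.json (hypothetical path, by analogy
    # with LIO's /etc/target/saveconfig.json)
    # after a daemon/node restart, the client daemon replays each
    # saved entry so the device comes back at the same /dev/nbdX:
    nbd-cli map myvol/disk0.img    # re-attached as /dev/nbd0 again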

Above all, we want the great NBD to work as easily and simply as the current iSCSI utils do in OpenShift.

More details can be found in the README [2].

Does this make sense?


I might be wrong though ;-)

The Gluster handler coding is now done; the Ceph and Azure handlers are in progress.
Cool.

I am now trying to release this in Fedora.

Thanks.
BRs
Xiubo


[2] https://github.com/gluster/nbd-runner




