Re: [Nbd] [RESEND][PATCH 0/5] nbd improvements



Wouter, Josef, (& Eric)

> On 15 Sep 2016, at 11:49, Wouter Verhelst <w@...112...> wrote:
> 
> Hi,
> 
> On Fri, Sep 09, 2016 at 10:02:03PM +0200, Wouter Verhelst wrote:
>> I see some practical problems with this:
> [...]
> 
> One more that I didn't think about earlier:
> 
> A while back, we spent quite some time defining the semantics of the
> various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
> write barriers. At the time, we decided that it would be unreasonable
> to expect servers to make these write barriers effective across
> different connections.

Actually, I wonder if there is a wider problem: implementations
might mediate access to a device by the presence of an extant TCP
connection, i.e. permit only one TCP connection to access a given
block device at a time. For a forking daemon that does writeback
caching, for instance, that would be an entirely reasonable thing
to do for data consistency.
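
To make that concrete, here is a purely hypothetical sketch (not
any real server) of how natural such a design is to write; an
accept loop like this silently refuses the extra connections a
multi-queue client would open:

    /* Hypothetical forking daemon: at most one TCP connection may
     * hold the device at a time, so its private writeback cache
     * never has two writers. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <netinet/in.h>
    #include <unistd.h>

    static void serve_client(int sock)    /* NBD session loop, elided */
    {
        (void)sock;
    }

    int main(void)
    {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {
            .sin_family      = AF_INET,
            .sin_port        = htons(10809),   /* IANA port for NBD */
            .sin_addr.s_addr = htonl(INADDR_ANY),
        };
        pid_t owner = -1;      /* child currently holding the device */

        bind(lsock, (struct sockaddr *)&sa, sizeof(sa));
        listen(lsock, 8);

        for (;;) {
            int csock = accept(lsock, NULL, NULL);

            if (owner > 0 && waitpid(owner, NULL, WNOHANG) == owner)
                owner = -1;                /* previous client gone */
            if (owner > 0) {
                close(csock);              /* device busy: refuse */
                continue;
            }
            if ((owner = fork()) == 0) {
                serve_client(csock);       /* child owns the device */
                _exit(0);
            }
            close(csock);                  /* parent keeps listening */
        }
    }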

I also wonder whether servers that cache per connection will
always present a consistent cache across connections. The one
I'm worried about in particular here is qemu-nbd - Eric Blake
CC'd.
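
To spell the worry out (this is a hypothetical sketch, not a
claim about qemu-nbd's internals): with a private writeback
cache per connection, a write buffered on one connection is
invisible to a read on another until it is flushed, even though
the client sees one device:

    /* Hypothetical per-connection writeback cache over one export. */
    #include <stdint.h>
    #include <string.h>

    #define BLOCK 4096

    static uint8_t backing[1 << 20];   /* stands in for the export */

    struct conn_cache {                /* one per TCP connection */
        uint64_t off;
        uint8_t  data[BLOCK];
        int      dirty;                /* buffered, not written back */
    };

    static void cache_write(struct conn_cache *c, uint64_t off,
                            const uint8_t *buf)
    {
        c->off = off;
        memcpy(c->data, buf, BLOCK);
        c->dirty = 1;                  /* stays private until a flush */
    }

    static void cache_read(struct conn_cache *c, uint64_t off,
                           uint8_t *buf)
    {
        if (c->dirty && c->off == off)
            memcpy(buf, c->data, BLOCK);       /* sees its own write */
        else
            memcpy(buf, backing + off, BLOCK); /* others read stale data */
    }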

A more general point is that with multiple queues, requests may
be processed in a different order even by servers that currently
process them in strict order, or something close to it. The spec
already permits the server to process commands out of order
(save as mandated by NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA), but I
suspect this has to date been little tested.
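
For the avoidance of doubt, something like the following is all
the ordering the spec actually requires of a (hypothetical)
threaded server; everything else may legally complete in any
order. Command values are per the spec; struct req is assumed to
hold the already-decoded request:

    /* Hypothetical worker from a thread-pool server.  Only FLUSH
     * and FUA constrain ordering; independent reads and writes may
     * be reordered freely. */
    #include <stdint.h>
    #include <unistd.h>

    #define NBD_CMD_WRITE     1
    #define NBD_CMD_FLUSH     3
    #define NBD_CMD_FLAG_FUA  (1 << 16)  /* FUA bit in the type field */

    struct req { uint32_t type; int fd; /* offset, length, data... */ };

    static void handle(struct req *r)
    {
        uint32_t cmd = r->type & 0xffff;

        if (cmd == NBD_CMD_FLUSH) {
            /* Must not reply until every write already replied to
             * is stable on disk. */
            fsync(r->fd);
        } else if (cmd == NBD_CMD_WRITE) {
            /* pwrite() of the payload elided */
            if (r->type & NBD_CMD_FLAG_FUA)
                fdatasync(r->fd);        /* FUA: stable before reply */
        }
        /* send_reply(r) elided: nothing above orders the replies of
         * independent commands with respect to one another. */
    }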

Lastly, I confess to a lack of familiarity with the kernel-side
code, but how is NBD_CMD_DISCONNECT synchronised across the
connections? Presumably you need to send it on each channel, but
cannot assume the NBD connection as a whole is dead until the
last TCP connection has closed?
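
For what it's worth, my reading is that teardown would have to
look something like this sketch (userspace, old-style request
header; the kernel code may well differ, and socks[] holding one
socket per queue is my assumption):

    /* Sketch of multi-connection teardown.  NBD_CMD_DISC gets no
     * reply: the server just closes, so the device is only dead
     * once every socket has hit EOF. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>

    #define NBD_REQUEST_MAGIC 0x25609513
    #define NBD_CMD_DISC      2

    struct nbd_req {                   /* old-style request header */
        uint32_t magic;
        uint32_t type;
        char     handle[8];
        uint64_t from;
        uint32_t len;
    } __attribute__((packed));

    static void disconnect_all(int *socks, int n)
    {
        struct nbd_req req;
        char c;

        memset(&req, 0, sizeof(req));
        req.magic = htonl(NBD_REQUEST_MAGIC);
        req.type  = htonl(NBD_CMD_DISC);

        for (int i = 0; i < n; i++)    /* one DISC per channel */
            write(socks[i], &req, sizeof(req));

        for (int i = 0; i < n; i++) {
            while (read(socks[i], &c, 1) > 0)
                ;                      /* drain until server EOF */
            close(socks[i]);           /* whole device dead only
                                          after the last close */
        }
    }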

-- 
Alex Bligh