Re: [Nbd] Question about the expected behaviour of nbd-server for async ops
- To: Wouter Verhelst <w@...112...>
- Cc: nbd-general@lists.sourceforge.net
- Subject: Re: [Nbd] Question about the expected behaviour of nbd-server for async ops
- From: Goswin von Brederlow <goswin-v-b@...186...>
- Date: Mon, 30 May 2011 14:13:04 +0200
- Message-id: <877h988jzj.fsf@...860...>
- In-reply-to: <20110530113649.GX31747@...510...> (Wouter Verhelst's message of "Mon, 30 May 2011 13:36:49 +0200")
- References: <87oc2m28o7.fsf@...860...> <6DBA2A6208847F844397DB62@...873...> <87wrh9bz0c.fsf@...860...> <20110529120632.GB31747@...510...> <87tycdwp50.fsf@...860...> <20110529183212.GI31747@...510...> <87oc2k8orr.fsf@...860...> <20110530113649.GX31747@...510...>
Wouter Verhelst <w@...112...> writes:
> On Mon, May 30, 2011 at 12:29:44PM +0200, Goswin von Brederlow wrote:
>> Which brings me to a little problem in your design:
>>
>> Imagine what happens if a client requests a 1GB read. In my case the
>> splice() blocks until the write thread starts sending out the data to
>> the client. In your case you allocate 1GB of RAM. Oops.
>
> If we run out of memory, in my case I'll just return an error. There's
> nothing in the protocol that specifies I have to support 1GB read
> requests. Indeed, for the longest time the largest that we did support
> in practice was about 1MB read requests, yet the number of times that
> this produced an error was very low (the bug was fixed because one
> high-volume user of nbd saw nbd-server crashing about once a month).
>
> I don't think that not supporting extremely large requests is going to
> be a problem in practice, providing we return a proper error code.
Maybe there should be an option field for this, so that client and server
can agree on a reasonable maximum request size. The client MUST then
split larger requests into smaller chunks. Getting an error just because
the scatter-gather code happens to be exceedingly successful for once
seems like a bad idea. Hard drives have a similar setting that limits the
maximum size of a DMA transfer they can handle.
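To illustrate the idea, here is a minimal sketch (not actual nbd code; the
names MAX_REQUEST_SIZE and split_request are made up for this example) of
how a client could break an oversized read into chunks no larger than a
size agreed during option negotiation:

```python
# Illustrative only: a client-side splitter for a negotiated maximum
# request size. A 1GB read becomes many requests of at most max_size
# bytes each, which the server can handle without allocating 1GB of RAM.

MAX_REQUEST_SIZE = 1 << 20  # e.g. 1 MiB, agreed during negotiation

def split_request(offset, length, max_size=MAX_REQUEST_SIZE):
    """Yield (offset, length) pairs, each no larger than max_size."""
    while length > 0:
        chunk = min(length, max_size)
        yield (offset, chunk)
        offset += chunk
        length -= chunk

# A single 1GB read turns into 1024 requests of 1 MiB each:
chunks = list(split_request(0, 1 << 30))
```

The server then never sees a request larger than the negotiated limit, so
it can size its buffers up front instead of failing at runtime.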
>> I think you will have a problem dealing with large read/write requests
>> in your setup. Your design doesn't allow for the read/write to be done
>> in manageable chunks.
>
> There will indeed be an issue if the request is insanely large. But with
> modern hardware, serving several requests in parallel of up to several
> tens of megabytes should not be an issue, even with that design. And
> even 'several tens of megabytes' is not going to be seen in the real
> world, except by malicious clients (and I don't mind not doing what
> malicious clients expect me to do...).
ACK.
MfG
Goswin