Re: [Nbd] Deadlock Issues with Local NBD Server



On Wed, Sep 28, 2011 at 10:10:45PM +0100, Alex Bligh wrote:
> --On 27 September 2011 20:26:48 -0700 Adam Cozzette
> <acozzette@...946...> wrote:
> 
> >So it seems to me that the deadlock situation is essentially the same
> >whether you're swapping to a block device or just writing out dirty
> >pages. Or am I mistaken about that? Is this something that is so unlikely
> >to occur that there is no point in worrying about it?
> 
> I think it is *not* so unlikely that there is no point in worrying
> about it.
> 
> My excuse to date for not worrying about it is that in our application
> nbd is accessed directly by hypervisors, so I /think/ we don't suffer
> from that issue.
> 
> IIRC memory allocation on stock nbd-server in the write path is
> minimal (but minimal != zero).
> 
> Now you mention it, I am trying to work out how it works even with a
> remote server. If all the pages are dirty waiting to be written to
> nbd, and a write is started which causes a GFP_KERNEL allocation to
> allocate the skbuff for the tcp packet, and there is no free memory,
> I am not sure how any progress is made. I get the feeling this at least
> must be soluble, or nfs and iscsi would be dead in the water.

I found that there was some talk of fixing these deadlock issues a few years
back:

http://lwn.net/Articles/195416/

I think you're right that allocating memory for TCP packets is a potential
problem. But, as you said, it sounds like there must be some way around it since
so many things would be broken otherwise.

-- 
Adam Cozzette
Harvey Mudd College Class of 2012
