Re: [Nbd] Deadlock Issues with Local NBD Server
- To: Adam Cozzette <acozzette@...946...>
- Cc: nbd-general@lists.sourceforge.net
- Subject: Re: [Nbd] Deadlock Issues with Local NBD Server
- From: Alex Bligh <alex@...872...>
- Date: Wed, 28 Sep 2011 22:10:45 +0100
- Message-id: <D7E639E01DAC0C2658B6D3D7@...874...>
- Reply-to: Alex Bligh <alex@...872...>
- In-reply-to: <20110928032648.GD4000@[192.168.0.12]>
- References: <20110927105745.GB1361@[192.168.0.12]> <179DA1A021E06C250977C5CA@...874...> <20110928032648.GD4000@[192.168.0.12]>
--On 27 September 2011 20:26:48 -0700 Adam Cozzette <acozzette@...946...>
wrote:

> So it seems to me that the deadlock situation is essentially the same
> whether you're swapping to a block device or just writing out dirty
> pages. Or am I mistaken about that? Is this something that is so unlikely
> to occur that there is no point in worrying about it?
I think it is *not* so unlikely that there is no point in worrying about it.
My excuse to date for not worrying about it is that in our application
nbd is accessed directly by hypervisors, so I /think/ we don't suffer
from that issue.
IIRC memory allocation on stock nbd-server in the write path is
minimal (but minimal != zero).
Now you mention it, I am trying to work out how this works even with a
remote server. If all the pages are dirty and waiting to be written to
nbd, and a write is started which needs a GFP_KERNEL allocation for the
skbuff of the TCP packet, and there is no free memory, I am not sure how
any progress is made. I get the feeling this at least must be soluble,
or NFS and iSCSI would be dead in the water.
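FWIW, I believe the in-kernel nbd client tries to mitigate exactly this
on the transmit side: IIRC sock_xmit() in drivers/block/nbd.c marks the
task PF_MEMALLOC and forces GFP_NOIO socket allocations. Roughly this
sort of thing (a fragment from memory, not compilable on its own, and
the details may differ between kernel versions):

```c
/* Fragment of the kind of thing drivers/block/nbd.c does around
 * its socket I/O; not exact. */
static int sock_xmit(/* ... */)
{
	unsigned long pflags = current->flags;

	/* Let this task dip into the emergency reserves, and stop
	 * the skbuff allocation recursing back into writeback. */
	current->flags |= PF_MEMALLOC;
	sock->sk->sk_allocation = GFP_NOIO;

	/* ... kernel_sendmsg() / kernel_recvmsg() loop ... */

	current->flags = pflags;	/* restore saved flags */
	return 0;
}
```

That stops the send path re-entering writeback, but of course it does
not conjure up free memory; I assume NFS and iSCSI rely on something
similar plus the PF_MEMALLOC reserves.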
--
Alex Bligh