
Bug#865549: [Pkg-rust-maintainers] Bug#865549: cargo still segfaults / rustc still FTBFS with the newest fixed kernel



Control: tag -1 upstream patch moreinfo

On Fri, 30 Jun 2017 08:57:00 +0000 Ximin Luo <infinity0@debian.org> wrote:
> Ben Hutchings:
> > On Thu, 2017-06-29 at 17:45 +0000, Ximin Luo wrote:
> >> Ximin Luo:
> >>> [..]
> >>>
> >>> The segfault occurs on various commands at different frequencies and after 
> >>> differing amounts of time (but all less than a few seconds). The stack traces 
> >>> are all different too. [..]
> >>
> >> With some help from arielby from #rust-internals, we noticed that the
> >> stack getting allocated was always 192KB even though `ulimit -s` says
> >> 8192 (i.e. 8MB),
> > 
> > That's normal.
> > 
> >> and when the program tries to grow beyond this, is when the segfaults
> >> occur. Hope that's useful.
> > 
> > That's obviously not.  A page fault below the stack mapping and not too
> > far below the stack pointer should cause the mapping to expand, up to
> > the limit, without the process having to do anything about it.
> > 
> > The shell ulimit command, and the underlying setrlimit(2) system call,
> > set the limit for expansion of the stack mapping.  Expansion is also
> > restricted by the requirement of a gap between the stack and the next
> > mapping below, but so long as the size limit is not 'unlimited' this
> > *should* make no difference.  The gap used to be 1 page and is now
> > configurable but defaults to 256 pages.
> > 
> > The stack gap used to be included in the range shown in /proc/*/maps
> > and was counted toward the stack limit.  The first version of the fix
> > did not change that, so it effectively reduced all existing limits by
> > 255 pages.  The second version of the fix excludes the stack gap from
> > both.
> > 
> > [..]
> 
> By the way, PAGESIZE on these ppc64el machines is 64KB not 4KB, so a
> 256-page stack guard would be 16MB, which is way higher than the default
> `ulimit -s` of 8MB.
> 
> Although you said that the stack guard (in theory) should not be part
> of the stack limit, the fact that a 24MB limit made the segfaults go
> away yet a 16MB limit did not, suggests there might be a link. What do
> you think?

We talked about this on IRC and you also said that Rust allocates its
*own* stack guard page.  That can never have worked for the main stack
and will also now inhibit stack growth.  With 64 kiB pages, the stack
will never be able to grow with the default stack limit and stack gap
sizes.
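For reference, the sort of thing such a runtime does looks roughly like
the sketch below (hypothetical, not the actual Rust code): find the main
stack in /proc/self/maps and map a PROT_NONE page directly below it.
From the kernel's point of view that mapping has none of
VM_READ/VM_WRITE/VM_EXEC, which is exactly what the loop in the attached
patch skips over when checking the gap.

/* Hypothetical sketch of a user-space guard page placed directly below
 * the main stack mapping; not the Rust runtime's actual code. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	FILE *maps = fopen("/proc/self/maps", "r");
	char line[256];
	unsigned long start = 0, end = 0;
	long page = sysconf(_SC_PAGESIZE);

	if (!maps)
		return 1;
	/* Find the "[stack]" line, e.g. "7ffd...000-7ffd...000 rw-p ... [stack]" */
	while (fgets(line, sizeof(line), maps)) {
		if (strstr(line, "[stack]") &&
		    sscanf(line, "%lx-%lx", &start, &end) == 2)
			break;
	}
	fclose(maps);
	if (!start)
		return 1;

	/* One inaccessible page just below the stack's current bottom.
	 * Seen from the kernel this is a VM_NONE mapping sitting in the
	 * stack gap. */
	void *guard = mmap((void *)(start - page), page, PROT_NONE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (guard == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("stack %#lx-%#lx, guard page at %p\n", start, end, guard);
	return 0;
}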

Please can you test using a kernel with the attached patch? 
Instructions for building a patched kernel package are at:
https://kernel-handbook.alioth.debian.org/ch-common-tasks.html#s-common-official
I tested this with a Rust program on amd64 with a stack limit of 1 MiB
and it seems to work as expected.
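In case it helps with reproducing, a minimal stand-in for such a test
might look like the sketch below (not the exact program I used): run it
with a reduced stack limit, e.g. `ulimit -s 1024`, together with a
user-placed guard page like the one sketched above, and check that it
completes rather than segfaulting.

/* Minimal stack-growth test (a sketch, not the Rust program referred to
 * above).  Each call burns roughly a page of stack, so the recursion
 * pushes the stack well past its initial mapping while staying inside a
 * 1 MiB limit.  On a kernel that wrongly blocks expansion this dies
 * with SIGSEGV instead of printing the result. */
#include <stdio.h>

static int grow(int depth)
{
	volatile char pad[4096];	/* about one page of stack per frame */

	pad[0] = (char)depth;
	if (depth == 0)
		return pad[0];
	return grow(depth - 1) + pad[0];
}

int main(void)
{
	printf("recursion finished: %d\n", grow(175));
	return 0;
}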

Ben.

-- 
Ben Hutchings
The world is coming to an end.	Please log off.
From aceedcd9ec667438ce82ff0c67b5d02fe430d067 Mon Sep 17 00:00:00 2001
From: Ben Hutchings <ben@decadent.org.uk>
Date: Tue, 4 Jul 2017 11:58:36 +0100
Subject: [PATCH] mmap: Ignore VM_NONE mappings when checking for space to
 expand the stack

Some user-space run-times (in particular, Java and Rust) allocate
their own guard pages in the main stack.  This didn't work well
before, but it can now block stack expansion where it is safe and would
previously have been allowed.  Ignore such mappings when checking the
size of the gap before expanding.

Reported-by: Ximin Luo <infinity0@debian.org>
References: https://bugs.debian.org/865416
Fixes: 1be7107fbe18 ("mm: larger stack guard gap, between vmas")
Cc: stable@vger.kernel.org
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
[bwh: Adjust context to apply to stretch branch]
---
 mm/mmap.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index a5e3dcd75e79..19f3ce04f24f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2243,7 +2243,14 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	gap_addr = address + stack_guard_gap;
 	if (gap_addr < address)
 		return -ENOMEM;
-	next = vma->vm_next;
+	/*
+	 * Allow VM_NONE mappings in the gap as some applications try
+	 * to make their own stack guards
+	 */
+	for (next = vma->vm_next;
+	     next && !(next->vm_flags & (VM_READ | VM_WRITE | VM_EXEC));
+	     next = next->vm_next)
+		;
 	if (next && next->vm_start < gap_addr) {
 		if (!(next->vm_flags & VM_GROWSUP))
 			return -ENOMEM;
@@ -2323,11 +2330,17 @@ int expand_downwards(struct vm_area_struct *vma,
 	if (error)
 		return error;
 
-	/* Enforce stack_guard_gap */
+	/*
+	 * Enforce stack_guard_gap, but allow VM_NONE mappings in the gap
+	 * as some applications try to make their own stack guards
+	 */
 	gap_addr = address - stack_guard_gap;
 	if (gap_addr > address)
 		return -ENOMEM;
-	prev = vma->vm_prev;
+	for (prev = vma->vm_prev;
+	     prev && !(prev->vm_flags & (VM_READ | VM_WRITE | VM_EXEC));
+	     prev = prev->vm_prev)
+		;
 	if (prev && prev->vm_end > gap_addr) {
 		if (!(prev->vm_flags & VM_GROWSDOWN))
 			return -ENOMEM;
