Bug#859923: linux-image-4.9.0-2-amd64: mmap system call problem
On Tue, Apr 11, 2017 at 11:34:56PM +0100, Ben Hutchings wrote:
> On Tue, 2017-04-11 at 22:55 +0200, Fernando Santagata wrote:
> > On Mon, Apr 10, 2017 at 05:23:26PM +0100, Ben Hutchings wrote:
> > > Control: tag -1 moreinfo
> > >
> > > On Sun, 2017-04-09 at 12:09 +0200, Fernando Santagata wrote:
> > > > I think this is related to this thread in the linux-mm mailing list, dating
> > > > back to kernel version 4.7, the first one that exhibits this behavior:
> > > >
> > > > https://lists.gt.net/linux/kernel/2528084
> > > >
> > > > This error shows up even when using concurrent programming under
> > > > Perl6, so it really seems to be related to sharing memory.
> > > >
> > > > The last usable kernel in this respect is version 4.6.
> > >
> > > The warning message tells you how to work around this (add kernel
> > > parameter 'ignore_rlimit_data'). Doesn't that work?
> >
> > I only tried fiddling with ulimit, but ignore_rlimit_data works fine.
> >
> > I suggest adding either a warning in some doc file, or making that
> > option the default, lest you receive dozens of stupid bug reports :-)
>
> I certainly don't want to disable a resource limit globally - the
> failure to apply the data size limit was itself a bug. And I hope that
> it is not very common to set data size limits that are too low, so we
> don't have to document yet another change in the release notes that
> will mostly be ignored.
>
> Can you please check whether the data limit is already set in a shell
> (ulimit -d)?
Right now it is at its default value:
$ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) 131072
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 31447
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 64
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 31447
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
Previously I tried increasing it to 200000, but that didn't work.
I guess the problem is with the heap, not the data segment.
--
Fernando Santagata