Re: Why does Linux crash?
On Fri, 2011-04-22 at 15:35 -0500, Boyd Stephen Smith Jr. wrote:
> In <1303435546.3090.3.camel@zircon.lan.walnut.gen.nz>, Richard Hector
> wrote:
> >On Wed, 2011-04-20 at 12:04 +0200, Axel Freyn wrote:
> >> But the principal problem is: each of those limits/protections
> >> reduces the usability (e.g. if you have 2GB RAM, and you limit
> >> eclipse to 2GB, it will be killed by the Kernel as soon as it tries
> >> to use 2GB and 1 byte from the SWAP).
> >
> >Really?
> >
> >I'd have thought eclipse's request for more memory (malloc) would
> >just fail at that point - which it may or may not handle
> >appropriately, and may handle by exiting. I see no reason for the
> >kernel to kill it.
>
> You'd think, right? Since malloc() has nice, documented ways it fails
> gracefully, we should use them.
>
> We do, sometimes. However, in Linux with the default settings, that's
> not entirely true. When over-commit is on, there are only minimal
> checks to see if the memory requested can actually be satisfied.
> Instead, the mapping is made lazily, when the virtual memory pages are
> accessed. Unfortunately, it's possible for the kernel to be unable to
> satisfy a mapping when it is needed. So, instead of failing on
> "mem = malloc(count)", where the userland process can handle things
> cleanly, we fail on "*mem = things", where everything has to be
> handled in kernel space. Enter the OOM killer.
>
> These minimal checks may include what is set by ulimit; I'm not sure.
Yes, I was aware of overcommit (though not the precise details). I guess
I assumed that a nice specific requirement like ulimit would be honoured
at malloc() time - after all, it's likely to remain fixed, while some
other process could free a bunch of memory in time for us to use it -
but assumptions are dangerous, of course :-)
Thanks,
Richard