
Bug#247300: libc6: malloc() never fails on 2.4 kernels, making processes crash



> I am not sure how one could describe a system crippled in this way as 
> "functional".  The limit itself would indeed be "functional",

Ok.  You could have stopped there.  This is going to be my last post
in this thread.

> [begin historical analogy]
> Using ulimit in this way would give one an environment a bit like that 
> of IBM's antique VM operating system for their 360 series of 
> computers, except even worse.  In VM, every user had his own small 
> "virtual disk", and the entire real disk was partitioned among them. 
> So, 200 users on a system with 50 MB of disk space meant each user got 
> exactly 250 KB of disk space, and some users would run out of disk 
> space even if the system disk (seen as an aggregate) was only a few 
> percent full.
> [end historical analogy]

A good analogy!  So, by your reasoning below, all the antiquated Un*x
administrators in the world who use quotas on their filesystems should
stop doing so immediately, because quotas aren't practical!?

> Remember, /etc/initscript doesn't let me set limits on a per-user 
> basis.  Also, remember that ulimit for the number of processes applies 
> only on a per-user basis, not as a pool across the entire number of 
> processes.

Of course ulimits can and must be refined further in contexts like
queue schedulers.  That is beside the point.
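
Just to make the mechanism concrete: the sketch below (the wrapper and
the 128 MB figure are invented for illustration only) shows what a
queue scheduler, a login shim or an initscript could do, namely call
setrlimit() on the job before exec'ing it.  That is the same limit
"ulimit -v" adjusts in a shell.

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    /* Sketch of a wrapper: limit one command's address space, then
     * exec it.  128 MB is an arbitrary example, not a recommendation. */
    int main(int argc, char *argv[])
    {
        struct rlimit rl;

        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }

        /* RLIMIT_AS is counted in bytes; "ulimit -v" takes kilobytes. */
        rl.rlim_cur = 128UL * 1024 * 1024;
        rl.rlim_max = 128UL * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        execvp(argv[1], &argv[1]);   /* the limit survives the exec */
        perror("execvp");
        return 1;
    }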

> So let's see.  The system I'm currently running this on (a laptop) has 
> 1 GB of memory, 23 users in /etc/passwd.  The user with the most 
> processes (ignoring kernel processes like [kapmd] that don't take up 
> memory) currently has 61 processes.

Quite a lot.  No swap space?

> If we take these as hard limits, 
> then 1 GB / (61 * 23) = 747 KB of memory per process.  That isn't 
> enough even to run bash, xterm, or dhclient, to say nothing of the X 
> server or Mozilla or Emacs.

So you want 23 users on your laptop to be able to run 61 processes
each?  And under no circumstances do you want to see "out of memory:
killing process..."?  Then, I'm sorry to say: you're out of luck.

> > For me the alternative is
> > clear: either enjoy the advantages and disadvantages of overcommitment
> > _or_ use "ulimit -v".
> 
> The second of those appears impractical, for the reasons I argue above.

It depends...  But I like overcommitment, too :-)
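
To make that choice concrete, here is a sketch (the 512 MB request is
an arbitrary example number): under overcommitment a large malloc()
usually succeeds and the trouble only starts when the pages are
actually touched, whereas with "ulimit -v" in effect the same malloc()
returns NULL at once and the program can handle the failure itself.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t request = (size_t)512 * 1024 * 1024;  /* example size */
        char *p;

        p = malloc(request);
        if (p == NULL) {
            /* With "ulimit -v" (RLIMIT_AS) in effect we end up here
             * and can fail gracefully instead of being shot later. */
            fprintf(stderr, "malloc of %lu bytes failed\n",
                    (unsigned long)request);
            return 1;
        }

        /* Under overcommitment the call above tends to "succeed"; the
         * memory is only claimed when we touch it, and that is where
         * the kernel may pick a process to kill instead of failing
         * this program. */
        memset(p, 0, request);
        puts("allocated and touched the whole block");
        free(p);
        return 0;
    }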

Regards,
Wolfram.


