Re: glibc hppa build failure - ulimit
On Sun, May 15, 2005 at 02:52:58PM +0100, Matthew Wilcox wrote:
> On Sat, May 14, 2005 at 03:15:36PM -0400, Carlos O'Donell wrote:
> > On Fri, May 13, 2005 at 09:26:10PM +0100, Matthew Wilcox wrote:
> > > > If you want to change glibc at this point, discuss with Carlos; I can't
> > > > take care of it. Just getting -22 built has taken most of my free time
> > > > for this week.
> > >
> > > Carlos? This seems like your fault, how do *you* want to fix it? ;-)
> > We have always had floating stacks in hppa, that's not the issue.
> > The issue here is that someone set the ulimit to 1GB. What does that
> > have to do with me? :)
> The ulimit's been 1GB since 2000. Actually, I think that's only true
> if you ssh into a machine. ssh tries to set the ulimit to infinity
> and we throttle that to 1GB. So if you were always building locally,
> you'd never notice the problem, the stack would be 80MB.
Where is the stack limit set to 1GB in the kernel? I don't see any
throttling code to that effect.
I see that the default process stack limit is set to 10 * _STK_LIM, so 80MB
as you say. I don't see any sort of 1GB throttling besides the VM code,
which has to throttle to [RLIMIT_STACK].rlim_cur.
If ssh sets the rlimit to infinity then new threads will have 8MB
stacks, which is reasonable.
I never build locally; I always ssh into my build system (and run screen).
I just don't seem to understand the problem. Let's see if I can put this
in concrete terms:
A. Someone set ulimit -s to 1GB on paer.
Glibc's algorithm is:
If the limit is infinity, then enforce maximum.
If the limit is not infinity, then use the limit.
Thus we get threads with 1GB stacks.
Solution: ulimit -s infinity.
or ulimit -s 8192
This is a purely administrative issue.
B. The kernel, after seeing RLIM_INFINITY, changes the value
to 1GB, so when glibc calls getrlimit it sees 1GB instead of
infinity.
This is not what the kernel does, but if anyone can
show me this throttling code I'd be happy to change
my mind.
Solution: The kernel should leave RLIM_INFINITY in place.