
Re: file descriptors??



On 7 Jan 1998, Miquel van Smoorenburg wrote:

> In article <Pine.LNX.3.96.980107084321.5460B-100000@siva.taz.net.au>,
> Craig Sanders <cas@taz.net.au> wrote:
> >
> >On Tue, 6 Jan 1998, Elie Rosenblum wrote:
> >
> >> And thus spake Craig Sanders, on Wed, Jan 07, 1998 at 01:52:06AM +1100:
> >> > is there any debian policy on number of file descriptors compiled
> >> > into the kernel? (and also in limits.h in libc6-dev - AFAIK
> >> > pretty much everything that uses select() will need to be
> >> > recompiled if the limit is increased).
> >>
> >> This has been sysctl configurable in the runtime kernel since at
> >> most 2.0, probably 1.3.
>
> Not true. That's the global limit. The per-process limit is hardcoded
> at 256 fds per process in the 2.0.x kernel

yep.  i realised after i sent my last message that i should have
specified that i was talking about the per-process fd limit, not the
global limit.

> >	root@siva [08:41:43] kernel# cd /proc/sys/kernel/
> >	root@siva [08:41:58] kernel# ls -l *-max *-nr
> >	-rw-r--r--   1 root     root            0 Jan  7 08:40 file-max
> >	-r--r--r--   1 root     root            0 Jan  7 08:40 file-nr
> >	-rw-r--r--   1 root     root            0 Jan  7 08:40 inode-max
> >	-r--r--r--   1 root     root            0 Jan  7 08:40 inode-nr
> >	root@siva [08:42:41] kernel# echo 2048>file-max; echo 8192>inode-max
> >	bash: file-max: Bad file descriptor
> >	bash: inode-max: Bad file descriptor
> 
> You forgot a space. Try echo 2048 > file-max. 2048>file-max means something
> entirely different in shell-syntax..

that works now, which is odd because i cut and pasted Elie's example.

anyway, as you point out, that only sets the global limit, not the
per-process limit.
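
for the record, the working version of those commands (with the space,
and with full paths so they can be run from anywhere) is:

	echo 2048 > /proc/sys/kernel/file-max
	echo 8192 > /proc/sys/kernel/inode-max

(same values as in the example above; iirc the kernel docs suggest
keeping inode-max at roughly 3-4 times file-max.)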


i've tried applying the "gt256 fd patch", but that causes some NFS
problems (i use nfs to mount my debian mirror for upgrades) which would
probably go away if netstd and netbase were recompiled with the new fd
limit.  I feel it's a bit unreasonable to expect debian users to
recompile the entire system if they happen to be building a server
(e.g. a squid proxy or an apache web server) that needs more than 256
fds.  Given that debian makes an excellent web server, proxy, or
internet gateway machine out of the box, it's not an uncommon thing to
want to do...

btw, as background info for this, i'm building a squid box with dual
ppro 200 cpus, 512MB of memory, and 40GB of disk (32GB for the cache,
8GB for the system, logging, etc., plus a hot-swap root fs). this
machine is expected to handle at least 150 simultaneous users (each
with netscape's default of 4 connections at once) at any given moment,
and that's the expected _average_ usage. peak usage could be double
that, so squid could quite easily require several thousand file
descriptors under peak load. to add more temptation for murphy, this
box is to be installed remotely, several thousand kilometres away.
it's being built with the latest unstable now because i don't want to
have to do a libc5 to libc6 upgrade remotely when hamm gets
released... and debian's upgradability (for bug fixes and security
fixes) is absolutely vital for a remote server like this, imo.

do you have any good ideas on what to do about the fd limits? is my
assumption that increasing the per-process limit will require
recompiling just about every package (e.g. squid, apache, netstd,
netbase, libc6, etc.) correct, or have i misunderstood something
fundamental?
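
(for background on why i'm assuming that -- this is just my
understanding: select() operates on an fd_set, which is a fixed-size
bitmap whose size is set by FD_SETSIZE in the libc headers, and the
kernel's hard per-process limit comes from NR_OPEN. both are
compile-time constants, so bumping them only helps binaries that are
rebuilt against the new values. a rough way to see what your headers
currently say -- the exact header locations are from memory and may
differ between libc5 and libc6, so treat this as a sketch:

	grep -n NR_OPEN /usr/include/linux/limits.h /usr/include/linux/fs.h
	grep -n FD_SETSIZE /usr/include/sys/*.h /usr/include/linux/*.h

if those still say 256 after patching the kernel, anything that needs
more fds will have to be rebuilt against the patched headers.)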

how do you handle this issue on your squid box(es)?

craig

--
craig sanders

