
Re: file descriptors??



On 7 Jan 1998, Miquel van Smoorenburg wrote:

> In article <Pine.LNX.3.96.980107103756.5460C-100000@siva.taz.net.au>,
> Craig Sanders <cas@taz.net.au> wrote:
> >i've tried applying the "gt256 fd patch" but that causes some NFS
> >problems (i use nfs to mount my debian mirror for upgrades) which would
> >probably go away if netstd and netbase were recompiled with the new fd
> >limit.
> 
> What I do is something different. I put this in /etc/initscript:
> 
> # Set # of fd's to 256 for all processes.
> ulimit -S -n 256
> 
> That sets the soft limit for all processes to 256 fds. It can be raised
> by an individual process if needed. My /etc/init.d/squid script contains:
> 
> MAXFD=`ulimit -H -n`
> if [ "$MAXFD" -gt 1024 ]
> then
>         MAXFD=1024
> fi
> ulimit -n $MAXFD
> 
> So this way, the number of file descriptors for squid is 1024 max, but
> for all other processes it's limited to 256.
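
side note for anyone copying this: /etc/initscript replaces init's
normal exec of the command, so it has to end by exec'ing the command
itself or nothing listed in inittab will ever start. a minimal version,
going by initscript(5) (where $4 is the command field from inittab):

    # /etc/initscript - run by sysvinit for every process it spawns
    ulimit -S -n 256    # soft fd limit inherited by everything init starts
    eval exec "$4"      # now run the actual command from /etc/inittab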

i'll have to think about this. does this mean that you don't have to
actually patch the kernel any more - that all you have to do is set the
appropriate values in /proc/sys/kernel and then raise the ulimit?
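
i.e. something like this at boot (a sketch of what i'm imagining,
assuming a kernel that actually has these /proc/sys/kernel files; the
values are made up):

    # system-wide tables, tunable at runtime without a recompile
    echo 4096  > /proc/sys/kernel/file-max     # system-wide open-file limit
    echo 12288 > /proc/sys/kernel/inode-max    # usually 3-4x file-max
    # per-process limit: on a stock 2.0.x kernel the hard limit is the
    # compile-time NR_OPEN (256), so this last step still fails there
    ulimit -n 1024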

i just ran 'ulimit -H -n' on my linux 2.0.32 machines and got 256.  Did
you have to recompile the kernel to get more than that? what kernel
version are you running?

i think there's something quite basic that i must be missing...something
must have changed. it used to be that you hacked limits.h and fs.h in
/usr/src/linux/include/linux, recompiled the kernel, and then recompiled
whatever apps needed more fd's. sounds like that's not true any more.
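
for reference, these are the compile-time constants the old method (and
the gt256 patch) bumped - a quick way to check what a source tree was
built with (paths assume a 2.0.x tree):

    # show the compile-time fd limits this kernel source was built with
    grep -n -e NR_OPEN -e OPEN_MAX \
        /usr/src/linux/include/linux/limits.h \
        /usr/src/linux/include/linux/fs.h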


> >btw, as background info for this, i'm building a squid box with dual
> >ppro 200 cpus, 512mb memory, 40GB disk (32gb cache, 8gb system,
> >logging, etc and hot-swap root fs).
>
> That's a nice box. But don't expect any extra performance because it's
> a SMP machine - squid is one monolithic process and will not benefit
> from a second processor.

yes.  very nice.  wish i had one here at home.

the extra cpu is for the redirectors. this machine will also have to do
a lot of filtering (using a redirector program rather than squid's acls,
so the filtering load can be pushed onto the 2nd cpu). it's going into a
school network and has to provide the teachers with "access control".
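
for the curious, the redirector interface is simple enough to sketch
here (squid 1.1's redirect_program protocol; the blocked pattern and
denied-page URL are made up):

    #!/bin/sh
    # squid writes one request per line on stdin, formatted
    #   URL client-ip/fqdn ident method
    # and reads back the rewritten URL, or a blank line for "no change"
    while read url rest
    do
        case "$url" in
            *banned.example*) echo "http://proxy/denied.html" ;;
            *)                echo ;;
        esac
    done

in practice you'd write this in C or perl since it sits in the request
path, but the protocol really is just that.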

> 32 GB cache and 512MB of mem sounds about right for squid, you could
> do with 256 or 384 MB if you'd run squid-novm (but squid-novm uses a
> lot more file descriptors).

i don't believe in squid-novm :-). file descriptors are a scarcer
resource than memory.


> >this box is to be installed remotely - several thousand kilometres
> >away. it's being built with the latest unstable now because i don't
> >want to have to do a libc5 to libc6 upgrade remotely when hamm gets
> >released...and debian's upgradability (for bug fixes and security
> >fixes) is absolutely vital for a remote server like this, imo.
>
> Don't worry - we have exactly the same setup (only 9GB of cache tho')
> and it hasn't crashed ever. Except for the libc6 problems, but these
> are now solved. I sacrificed myself as guinea pig when the box was
> still installed here locally, and I think all bugs are gone now.

yep, i've built several debian-based squid proxies around that size
(256MB RAM, 9GB disk) - they work wonderfully. these machines have
worked so well that a few people who started out with anti-linux
attitudes have admitted that they were wrong :-)

(you may remember that i was the one who made the first debian package
for squid back in june '96, and then ran out of time to maintain it)

craig

--
craig sanders



