
Re: Too many open files



Hi Greg,

On 2025-08-19 at 04:34 -07, Greg Wooledge <greg@wooledge.org> wrote...
> On Mon, Aug 18, 2025 at 21:52:30 -0700, Ken Mankoff wrote:
>> $ cat /proc/sys/fs/file-max
>> 1000000
>> 
> That number actually looks suspiciously low *and* artificial, though I
> agree that it appears you're not bumping into that limit at the moment.
>
> hobbit:~$ cat /proc/sys/fs/file-max
> 9223372036854775807

I'm not sure where the 1000000 came from, but I updated it to your number (which I also see elsewhere, and on this system when booted from a live disk) with

sysctl -w fs.file-max=9223372036854775807
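
(For what it's worth, sysctl -w only lasts until reboot; to make it stick I'd drop a file under /etc/sysctl.d -- the filename below is just a convention, not something from this thread:)

```shell
# Persist the setting across reboots; the /etc/sysctl.d directory is
# the usual Debian location, the filename is illustrative.
echo 'fs.file-max = 9223372036854775807' | sudo tee /etc/sysctl.d/90-file-max.conf
sudo sysctl --system    # re-apply every sysctl configuration file
```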

> If you're seeing the message "Too many open files", you're *probably*
> hitting EMFILE (the per-process limit) rather than ENFILE (system-wide),
> but it would be helpful to see that definitively, e.g. with strace.
>
> If you are hitting EMFILE, as I suspect, then all of your investigation
> with (for example) sudo | awk | sort is not helpful, because that's
> looking at the system-wide state, when you need to be looking at a
> single process at a time.
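
True. For the record, the per-process view is easy to get from /proc (using the current shell's PID here only as an example -- substitute the suspect process):

```shell
# Per-process fd count vs. the soft limit; $$ (this shell) is just an
# example PID.
pid=$$
echo "pid $pid: $(ls /proc/$pid/fd | wc -l) open fds, soft limit $(ulimit -Sn)"
```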

I haven't seen the error in the few days since raising the fs.file-max limit and setting fs.inotify.max_user_instances to 512, per advice from Jan Claeys.
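
(Relatedly, each inotify instance shows up in /proc as an fd symlinked to anon_inode:inotify, so the instances counted against fs.inotify.max_user_instances can be tallied like this, as far as permissions allow:)

```shell
# Count inotify instances (each anon_inode:inotify fd is one instance
# against fs.inotify.max_user_instances); without root you only see
# your own processes.
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
```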

> On my newly-upgraded-to-trixie system, my per-process hard limit is
> quite a bit higher than yours:
>
> hobbit:~$ ulimit -Hn
> 524288
> hobbit:~$ ulimit -Sn
> 1024

I cannot raise ulimit -Hn above its current value of 32768. But for now, things seem to be working better.
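
(I gather an unprivileged shell can only lower its hard limit; raising it needs root, e.g. a pam_limits entry along these lines -- conventional location, illustrative value:)

```text
# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
# <domain> <type> <item>  <value>   -- value here is illustrative
*          hard   nofile  524288
```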

I will debug more deeply with strace if the issue returns.
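
Something along these lines, I imagine ("someprog" below is a placeholder for whatever program fails):

```shell
# Trace fd-creating syscalls and surface the failing errno: EMFILE
# means the per-process limit, ENFILE the system-wide one.
# "someprog" is a placeholder, not a real command here.
strace -f -e trace=open,openat,socket,pipe2,dup2 someprog 2>&1 \
  | grep -E 'EMFILE|ENFILE'
```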

  -k.

