
Re: Too many open files



Did you accidentally repost the same message again, or did you not see
the earlier reply?
Have a look at:
https://lists.debian.org/debian-user/2025/08/msg00571.html
The list isn't write-only.  :-)

< Subject: Re: Too many open files
< From: Michael Paoli <michael.paoli@berkeley.edu>
< Date: Sat, 16 Aug 2025 03:44:19 -0700
<
< Thanks, and useful looking reporting,
< though not (yet) a bug report.
<
< Well, let's see ...
< from your
< > $ ulimit -a
< > open files                          (-n) 32768
< That should typically be way more than ample,
< even excessive - but that would be another
< issue (though possibly related).
<
< Anyway, as for the limit, there's the user's per-process limit,
< both hard and soft - a user can increase their soft limit
< up to their hard limit, and can decrease either,
< but generally only root can increase the hard limit.
< So, one could be bumping into the soft limit, though that
< seems unlikely given how high yours is (by default bash's
< ulimit displays the soft limit; -H for hard, -S for soft).
< Any given process can decrease its own limits.
< And there's also generally a kernel system-wide
< limit on the max. total number of open files.
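<
< E.g., to check and raise the soft limit in a shell (a minimal
< sketch; the 8192 here is just an illustrative value - it can't
< exceed the hard limit):
< $ ulimit -Sn; ulimit -Hn
< $ ulimit -Sn 8192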
<
< Peeking on my fairly busy Debian 12 (Bookworm) host ...
< # 2>>/dev/null ls -d /proc/[0-9]*/fd/* | wc -l
< 2171
< #
< That's all files presently open across all PIDs,
< for all users/IDs.
< $ cat /proc/sys/fs/file-max
< 1624514
< $
< And that's the max my (current) kernel can have open simultaneously.
< We can also look at /proc/PID/limits to see the current limits for any
< given PID.
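<
< E.g., for a single process (a sketch; /proc/self refers to the
< reading process itself - substitute any PID number):
< $ sed -ne '/^Max open files/p' /proc/self/limits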
< Peeking on my same host again, I see a variety of settings for different PIDs:
< # cat /proc/[0-9]*/limits 2>>/dev/null | \
<   sed -ne '1p;/^Max open files/p' | sort | uniq -c | \
<   sed -e 's/  *$//;s/  //' | sort -k 5bn
<     1 Limit                     Soft Limit           Hard Limit           Units
<    12 Max open files            256                  256                  files
<   744 Max open files            1024                 4096                 files
<    16 Max open files            4096                 4096                 files
<    12 Max open files            8192                 8192                 files
<     5 Max open files            65535                65535                files
<     1 Max open files            65536                65536                files
<     2 Max open files            1048576              1048576              files
< #
< So, the first number gives a count for each of those unique lines;
< all but the first (header) line show how many PIDs have that
< particular open files limit.
< And we can also get the count of currently open files
< directly from the kernel too ...
< $ cat /proc/sys/fs/file-nr
< 6560    0       1624514
< $
< That gives, respectively, the number of file handles
< currently allocated, allocated but presently unused,
< and the maximum number of file handles.
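<
< E.g., one could watch that while reproducing the problem, to
< see whether allocations climb (just a sketch):
< $ while sleep 1; do cat /proc/sys/fs/file-nr; done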
< Hmmm, I guess my
< /proc/[0-9]*/fd/*
< count didn't catch everything.  Maybe some non-PID
< kernel stuff and/or other things consume some
< file handles that don't show under PIDs in the proc filesystem.
<
< Anyway, not exactly an "answer" or "solution",
< but hopefully that gives you enough information to isolate
< where things may be going sideways.
< E.g. some PID(s) that may be sucking up
< an unusually high number of file descriptors,
< or maybe some PID(s) have their limits lower
< than they should be, etc.
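<
< E.g., a rough way to spot the top file descriptor consumers
< per PID (an untested sketch along the lines of the above;
< run as root to see all PIDs):
< # for p in /proc/[0-9]*; do printf '%s %s\n' \
<   "$(2>>/dev/null ls "$p"/fd | wc -l)" "$p"; done | sort -rn | head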
<
< Might also check dmesg, logs, etc.
< It's possible that something else, or some other resource limit
< being bumped into, is triggering the warning about being unable
< to open more files - so the issue might actually be something
< else entirely.
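<
< E.g., inotify has its own per-user limits, and hitting those
< yields the very same "Too many open files" (EMFILE) error from
< inotify_init(2) - a quick check (a sketch; defaults vary):
< $ sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches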
< The strace(1) command may also be useful to help isolate
< where it's failing.
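<
< E.g., attach to a suspect PID and watch for failing calls that
< create file descriptors (a sketch; PID is a placeholder, and
< attaching generally requires root or same-user permissions):
< # strace -f -e trace=desc -p PID 2>&1 | grep -w EMFILE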

On Mon, Aug 18, 2025 at 9:05 PM Ken Mankoff <mankoff@gmail.com> wrote:
>
> Hello,
>
> I'd like to report a bug but don't know which package, and reportbug says I should email this list. I'm running Debian Trixie KDE Wayland, and repeatedly seeing "Too many open files".
>
> Two examples:
>
> $ dolphin . # dolphin window opens, but this is printed:
> kf.solid.backends.fstab: Failed to acquire watch file descriptor Too many open files
>
> $ tail -f some_big_file # file still tailed, but this is printed:
> tail: inotify cannot be used, reverting to polling: Too many open files
>
> Also, output of =journalctl -b -1= after a hard crash included the same message. I think it caused a system freeze once. Most of the time, things still seem to work.
>
> Maybe useful info:
>
> $ ulimit -n # 32768
>
> $ sudo lsof | awk '$5 == "REG" {print}' > list_REG
> $ cut -d" " -f1 list_REG | sort | uniq -c | sort -n| tail
>     596 xdg-deskt
>     808 xwaylandv
>     853 Isolated
>     871 emacs
>     985 plasmashe
>    1198 kwin_wayl
>    1205 slack
>    1877 chromium
>    2336 ferdium
>    3798 konsole
>
> Any guidance to fixing this would be much appreciated.
>
> Thanks,
>
>    -k.
>

