
swap filled, but still much free memory - Slab issue



My Debian 10 (buster) server is swapping (and has indeed become very
slow) while there is still plenty of free memory:

joooj:~> free
              total        used        free      shared  buff/cache   available
Mem:         489260      216252        7704          24      265304      257188
Swap:        360376      360316          60

joooj:~> vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  2 359612   4800    464 267648   41   38   195    93   11    6  1  1 95  3  0

And what atop says:

MEM | tot   477.8M | free    4.6M | cache   7.0M | buff    0.1M | slab  329.3M |  shmem   0.0M | shrss   0.0M | vmbal   0.0M | hptot   0.0M | hpuse   0.0M |
SWP | tot   351.9M | free   26.9M |              |              |              |               |              |              | vmcom   1.1G | vmlim 590.8M |
PAG | scan 27756e4 | steal 1538e5 | stall      0 |              |              |               |              |              | swin 30856e3 | swout 2884e4 |

I can see some contradiction in how the cached ("wasted") part is reported:
* With "free", buff/cache takes 265304 KB.
* With "vmstat", almost all of it is cache (267648) rather than buff (464).
* With "atop", both cache and buff values are small!
  But there is a large slab.

"cat /proc/meminfo" is similar to "atop":
MemTotal:         489260 kB
MemFree:            7180 kB
MemAvailable:     256072 kB
Buffers:             216 kB
Cached:             8256 kB
SwapCached:         2368 kB
[...]
SwapTotal:        360376 kB
SwapFree:          22096 kB
[...]
Slab:             336436 kB
SReclaimable:     255844 kB
SUnreclaim:        80592 kB
[...]
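If I understand procps-ng correctly (since 3.3.10, which Debian 10's
procps postdates), free's buff/cache column is Buffers + Cached +
SReclaimable, which would resolve most of the apparent contradiction
between the tools. A quick sanity check with the figures above:

```python
# Values in kB, taken from the /proc/meminfo output above.
buffers, cached, sreclaimable, sunreclaim, slab = 216, 8256, 255844, 80592, 336436

# SReclaimable + SUnreclaim should equal Slab exactly:
assert sreclaimable + sunreclaim == slab

# free's buff/cache column is, as far as I know, Buffers + Cached + SReclaimable:
print(buffers + cached + sreclaimable)   # 264316 kB, vs 265304 kB from free
# The small difference is plausible, since the snapshots were taken at
# different moments.
```

So the "missing" cache that free and vmstat report is almost entirely
reclaimable slab, which atop shows separately.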

It could potentially be this issue:

  https://cloudlinux.zendesk.com/hc/en-us/articles/115004738025-What-to-do-if-Slab-cache-grows-and-overall-server-performance-is-bad

Indeed, I had a partition that became full for a short period, but
this is no longer the case. The "vm.vfs_cache_min_ratio" sysctl
mentioned there does not seem to exist on my kernel, and I have
"vm.vfs_cache_pressure = 100" (the default).

"slabtop -s c" outputs:

 Active / Total Objects (% used)    : 664331 / 773696 (85.9%)
 Active / Total Slabs (% used)      : 52106 / 52106 (100.0%)
 Active / Total Caches (% used)     : 89 / 116 (76.7%)
 Active / Total Size (% used)       : 253351.05K / 305750.80K (82.9%)
 Minimum / Average / Maximum Object : 0.01K / 0.39K / 8.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
137466 121185  88%    1.05K  35409       30   1133088K ext4_inode_cache
171948 134691  78%    0.19K   4866       42     38928K dentry
 66612  20516  30%    0.57K   2379       28     38064K radix_tree_node
 37170  37170 100%    0.38K    917       42     14672K kmem_cache
  3752   3752 100%    2.00K    378       16     12096K kmalloc-2048
  2081   2081 100%    4.00K    373        8     11936K kmalloc-4096
[...]

but 1133088K for a single cache is much larger than the 253351.05K
total active size. How is this possible?
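As far as I can tell, slabtop's CACHE SIZE column is derived from the
SLABS count (SLABS × pages per slab × page size), while the
Active/Total Size summary is derived from object counts, so the two
can diverge when /proc/slabinfo's per-cache slab count is out of step
with its object count (35409 slabs × 30 obj/slab would imply over a
million object slots, yet OBJS is only 137466). A sketch of that
arithmetic for the ext4_inode_cache line, assuming 4 KB pages:

```python
import math

# Figures from the slabtop output above (ext4_inode_cache line).
objs, obj_size_k, slabs, objs_per_slab = 137466, 1.05, 35409, 30

# Pages needed to hold one slab of 30 objects of ~1.05K each:
pages_per_slab = math.ceil(objs_per_slab * obj_size_k / 4)   # 8 pages

cache_size_k = slabs * pages_per_slab * 4   # what CACHE SIZE shows
print(cache_size_k)                         # 1133088, matching slabtop

objects_size_k = objs * obj_size_k          # what the size summary counts
print(round(objects_size_k))                # ~144339, a factor ~8 smaller
```

So the oddity seems to be in the SLABS figure reported for this cache,
not in slabtop's arithmetic itself.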

In the "cat /proc/meminfo" output, SReclaimable is 255844 kB, so why
hasn't it been reclaimed when needed?

Note that "sync; echo 3 > /proc/sys/vm/drop_caches" has no effect
on Slab.
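To watch whether drop_caches (or normal memory pressure) actually
shrinks the slab, one can snapshot the relevant /proc/meminfo fields
before and after. A minimal parser sketch, fed here with the figures
quoted above so it runs anywhere:

```python
def slab_fields(meminfo_text):
    """Return {field: kB} for the slab-related /proc/meminfo fields."""
    wanted = {"Slab", "SReclaimable", "SUnreclaim"}
    out = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in wanted:
            out[key] = int(rest.split()[0])  # values are always in kB
    return out

# The figures quoted earlier in this message:
sample = """\
Slab:             336436 kB
SReclaimable:     255844 kB
SUnreclaim:        80592 kB
"""
print(slab_fields(sample))
# On a live system: slab_fields(open("/proc/meminfo").read()),
# once before and once after "sync; echo 3 > /proc/sys/vm/drop_caches".
```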

-- 
Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

