
Re: nvme SSD and poor performance



Pierre Willaime writes:

I have an NVMe SSD (CAZ-82512-Q11 NVMe LITEON 512GB) on Debian stable (bullseye now).

For a long time, I have suffered from poor I/O performance, which slows down many tasks (apt upgrade when unpacking, for example).

I am now trying to fix this issue.

Using fstrim seems to restore speed. There are always many GiB which get trimmed:

[...]

but a few minutes later, there are already 1.2 GiB to trim again:

	# fstrim -v /
	/: 1.2 GiB (1235369984 bytes) trimmed

Is it a good idea to trim, and if so, how (and how often)?

Some people run fstrim as a cron job; others add the "discard" option to the / line in /etc/fstab. I do not know which is best, if any. I have also read that trimming frequently could reduce SSD life.

I run `fstrim` once per week via a minimalistic custom script as a cron job:
https://github.com/m7a/lp-ssd-optimization
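
For illustration only (a sketch, not the linked script), a minimal weekly trim as a system cron entry could look like this:

	# /etc/cron.d/fstrim -- trim all supported filesystems, Sundays at 03:00 (illustrative schedule)
	0 3 * * 0 root /sbin/fstrim -av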

There is no need for custom scripts anymore, though; nowadays you can enable the timer shipped with `util-linux` without hassle:

	# systemctl enable fstrim.timer

This will perform the trim once per week by default.
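
Note that plain `enable` only arms the timer for the next boot; adding `--now` also starts it immediately. You can verify the schedule afterwards:

	# systemctl enable --now fstrim.timer
	# systemctl list-timers fstrim.timer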

When SSD use increased, people tried out the `discard` mount option and found it to have strange performance characteristics, potential negative effects on SSD life, and in some cases reportedly even data corruption. Back then, the recommendation was to use periodic `fstrim` instead. I do not know whether any of these discard issues remain today.
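
For reference, the `discard` variant works per filesystem via /etc/fstab; the entry below is purely illustrative and the UUID a placeholder:

	# /etc/fstab (illustrative entry only)
	UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 errors=remount-ro,discard 0 1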

I also noticed many I/O accesses from jbd2 and kworker, such as:

	# iotop -bktoqqq -d .5
	11:11:16   364 be/3 root   0.00 K/s   7.69 K/s  0.00 % 23.64 % [jbd2/nvme0n1p2-]
	11:11:16     8 be/4 root   0.00 K/s   0.00 K/s  0.00 % 25.52 % [kworker/u32:0-flush-259:0]

[...]

I do not know what to do about kworker, or whether this is normal behavior. As for jbd2, I have read that it is the filesystem journal (ext4 here).

I highly recommend finding out what exactly is causing the high number of I/O operations. Usually a userspace process (or a RAID resync operation) is responsible for all the I/O, which is then processed by the kernel threads you see in iotop.

I usually look at `atop -a 4` (package `atop`) for half a minute or so to find out which processes are active on the system. It is possible that something is amiss and causing an exceedingly high I/O load, leading to the performance degradation you observe.
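
If `atop` is not at hand, `pidstat` from the `sysstat` package gives a similar per-process disk I/O view (a sketch; adjust the 4-second interval to taste):

	# apt install sysstat
	# pidstat -d 4

Processes with persistently high kB_wr/s are the first suspects: jbd2 and the flush kworker mostly write on behalf of someone else.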

[...]

P.S.: If trimming is needed for SSDs, why does Debian not trim by default?

Reliably detecting whether the current system has SSDs that would benefit from trimming AND whether the user has already taken their own measures is difficult. I guess this might be why there is no automatism, but you can enable the systemd timer suggested above with a single command.
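
Incidentally, you can check for yourself whether a drive advertises discard support; nonzero DISC-GRAN and DISC-MAX values mean the device accepts trim requests:

	# lsblk --discard /dev/nvme0n1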

HTH
Linux-Fan

öö
