
Re: Reasonably simple setup for 1TB HDD and 250GB M.2 NVMe SSD



On Mon, Jan 03, 2022 at 10:33:59AM -0500, Dan Ritter wrote:
> Michael Stone wrote:
> > On Mon, Jan 03, 2022 at 08:42:29AM -0300, Jorge P. de Morais Neto wrote:
> > > Indeed I use such high compression to prolong SSD lifetime.
> >
> > This is probably misguided and useless at best, at worst you're causing
> > additional writes because compressed data is generally hard to modify in
> > place without rewriting substantial portions. Concerns about SSD life are
> > generally overblown unless you've got really unusual usage patterns (in
> > which case compressing things is unlikely to make a difference).
>
> SSDs don't modify in place. The compression is probably good for
> overall bandwidth of I/O, depending on CPU utilization.

I'm aware of how SSDs work. The question is how much rewriting is triggered by altering a byte in a compressed file, and how that interacts with the SSD's erase blocks and SLC cache. In most cases the effects are probably negligible (at least as much as any possible improvement from using compression), but in pathological cases it might increase rather than decrease write amplification. Anyway, there may be reasons to want to use compression on a given device, but SSD longevity shouldn't be one of them.
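To make the in-place problem concrete, here's a small sketch using Python's zlib (standing in for whatever compressor a filesystem might use): flip a single byte early in the input and the compressed stream diverges near that point, so a compressed extent generally has to be rewritten from there on rather than patching one sector.

```python
import zlib

# Sketch: why compressed data resists in-place updates. Flipping one
# byte early in the input changes the compressed stream from roughly
# that point onward, so a filesystem storing compressed extents must
# rewrite the extent rather than overwrite a single sector.

data = bytes(range(256)) * 256           # 64 KiB of compressible data
comp_a = zlib.compress(data)

mutated = bytearray(data)
mutated[100] ^= 0xFF                     # change one byte near the front
comp_b = zlib.compress(bytes(mutated))

# How far do the two compressed streams stay identical?
prefix = next((i for i, (a, b) in enumerate(zip(comp_a, comp_b)) if a != b),
              min(len(comp_a), len(comp_b)))
print(f"compressed size: {len(comp_a)} bytes")
print(f"identical prefix before divergence: {prefix} bytes")
```

The exact divergence point depends on the compressor and level, but the point stands: a one-byte logical change becomes a much larger physical rewrite.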

In my experience it doesn't do much for performance because in places *where it matters* compression is usually being done at a higher level anyway, and much more efficiently. That said, it can help sometimes but it's going to be extremely application-dependent.

For reference, my main desktop, which tracks debian unstable and gets pretty
much constant updates, does package builds, etc., has after several years
used...2% of its primary SSD's write capacity. Most modern SSDs will never be
used anywhere close to their limits before being discarded as functionally
obsolete. Just don't worry about it and focus on other things.
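For anyone who wants to put a number on their own drive: NVMe SSDs expose "Data Units Written" via SMART (each unit is 1000 * 512 bytes per the NVMe spec), and the rated TBW endurance is on the drive's datasheet. A minimal sketch of the arithmetic, with made-up figures:

```python
# Sketch: estimate SSD wear from NVMe SMART data. "Data Units Written"
# (as reported by e.g. smartctl -A) counts units of 1000 * 512 bytes;
# the rated TBW comes from the drive's datasheet. All numbers below
# are hypothetical, for illustration only.

def tb_written(data_units_written: int) -> float:
    """Convert NVMe 'Data Units Written' to decimal terabytes."""
    return data_units_written * 512_000 / 1e12

def pct_of_tbw(data_units_written: int, rated_tbw: float) -> float:
    """Percentage of the drive's rated TBW endurance consumed."""
    return 100 * tb_written(data_units_written) / rated_tbw

# Example: 23,400,000 data units written on a drive rated for 600 TBW.
units = 23_400_000
print(f"{tb_written(units):.1f} TB written")          # 12.0 TB written
print(f"{pct_of_tbw(units, 600):.1f}% of rated TBW")  # 2.0% of rated TBW
```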

This is largely true on desktops and laptops, not so true on
servers.

And, of course, we're talking about a laptop.

If we were talking about servers, it wouldn't generally be a runtime concern on a modern system if write volume is accounted for when provisioning--there are extremely long-lived SSDs, they just cost more. If you put commodity laptop HDs in a server and write literally hundreds of TBs per day to them, they also tend to fail faster than higher grade HDs (just without the counter telling you when they'll stop), so that's not a novel concern to account for when designing a system.

If you're running servers you should probably monitor SSD health, but that's no different in practical terms from monitoring SMART predictive failures for HDs and dealing with them as necessary.

In reality, even with all the hand-wringing about SSD write limits, modern parts are much more reliable than HDs. Most of the bad experiences with SSDs that get so much attention (excepting bad batches, which can happen with anything) relate to devices many generations obsolete.

