
Re: Swap space choice on a SSD <- Current best practice on?



Hello,

On Wed, Feb 13, 2019 at 04:23:56PM -0500, Dan Ritter wrote:
> "Over-provisioning often takes away from user capacity, either
> temporarily or permanently, but it gives back reduced write
> amplification, increased endurance, and increased performance."
> 
> Increased endurance is increased longevity.

That is also my understanding, and it matches many articles advising how
to choose the best enterprise SSD for a particular workload. However,
SSDs are much more of a "black box" than your typical HDD, so I think
that especially with consumer devices it can be hard to generalise and
reason about endurance. At that level the device specs often give no
figures for "terabytes written" (TBW) or "drive writes per day" (DWPD).
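Those two ratings are just different expressions of the same endurance
figure, so when a datasheet gives one you can derive the other. A quick
sketch of the arithmetic (the 1.92TB / 1 DWPD drive here is a made-up
example, not any particular model):

```python
# Converting between the two common SSD endurance ratings.
# TBW = total terabytes that may be written over the warranty period;
# DWPD = full drive writes per day sustained over that same period.

def dwpd_to_tbw(dwpd, capacity_tb, warranty_years):
    """Drive Writes Per Day -> Terabytes Written over the warranty."""
    return dwpd * capacity_tb * 365 * warranty_years

def tbw_to_dwpd(tbw, capacity_tb, warranty_years):
    """Terabytes Written -> equivalent Drive Writes Per Day."""
    return tbw / (capacity_tb * 365 * warranty_years)

# A hypothetical 1.92TB drive rated for 1 DWPD over a 5 year warranty:
print(dwpd_to_tbw(1, 1.92, 5))      # 3504.0 TBW

# The 16GB SATA DOM mentioned below, 17TB over 5 years:
print(tbw_to_dwpd(17, 0.016, 5))    # about 0.58 DWPD
```

Note that the SATA DOM's rating works out to more than half a drive
write per day, which is not bad at all for such a small device.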

It can also be surprising sometimes how little is written. For example,
I have some servers with flash memory for their operating system
install, with data on other storage:

    https://www.supermicro.com/products/nfo/SATADOM.cfm

At the 16GB capacity these are rated for only 17TB of writes over 5
years. That worried me a little, so I was thinking of spending some
effort making sure that anything which regularly does writes directs
them to a RAM disk instead.

Luckily there's a SMART attribute (241) you can use to tell how much has
been written to the drive to date. When I checked it, I found the
servers were typically writing only ~14GiB per month, so it would take
about 100 years to reach 17TB! Of course, the 5 year warranty covers
other factors too.
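For anyone wanting to repeat that back-of-envelope projection: on many
drives attribute 241 is Total_LBAs_Written in 512-byte units, though
vendors differ (some report GiB or 32MiB units), so check yours. A
rough sketch, using the figures from this message:

```python
# Projecting time-to-TBW from an observed write rate.
# Assumes attribute 241 counts 512-byte LBAs, which is common but
# not universal -- consult your drive's documentation.

SECTOR = 512  # bytes per LBA on most SATA drives

def lbas_to_gib(total_lbas_written):
    """Convert a raw Total_LBAs_Written value to GiB."""
    return total_lbas_written * SECTOR / 2**30

def years_until_tbw(gib_per_month, tbw_rating_tb):
    """Years of writing at the observed rate before hitting the TBW rating."""
    tbw_bytes = tbw_rating_tb * 10**12   # vendor TBW ratings use decimal TB
    bytes_per_month = gib_per_month * 2**30
    return tbw_bytes / bytes_per_month / 12

# ~14GiB/month against a 17TB rating:
print(round(years_until_tbw(14, 17)))   # roughly 94 years
```

Close enough to the "about 100 years" above; the exact figure depends
on whether you read the rating as decimal TB or binary TiB.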

It all depends on use case, as clearly there are uses that are
write-intensive which would burn through 17TB in a matter of hours. I do
not put swap on these devices. Measuring is still essential in my view,
but things are indeed a lot easier than they were a decade ago.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

