
Re: Home Directory in SSD



Every piece of electronics has an early-life (infant mortality) failure rate.  That is, plug it in and within minutes, or slightly longer, pffft, it's gone.

Examples include light bulbs, cell phones, hard disks, power supplies, motherboards and memory DIMMs.

And it includes SSDs.

If your SSD lasts the first week, it will last you for 5 to 7 years of use at a level of 30 terabytes of I/O per day, 365 days per year.  Google "SSD torture tests" for articles about testing SSDs to destruction.

In those tests, all SSDs achieved 0.8 of a petabyte of writes before indicating signs of distress (as reported by SMART tools in Linux). That's more than 5 years of heavy use.
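As a back-of-the-envelope check on what 0.8 PB of endurance means in years, here is a small Python sketch. The 0.8 PB figure is from the torture tests mentioned above; the daily write rates are hypothetical example workloads, not anything measured:

```python
# Rough lifetime estimate, assuming the ~0.8 PB write endurance
# reported by the SSD torture tests and a constant daily write rate.
ENDURANCE_TIB = 0.8 * 1000**5 / 1024**4  # 0.8 PB expressed in TiB

def years_of_life(daily_writes_gib):
    """Years until the endurance figure is reached at a given GiB/day."""
    total_days = ENDURANCE_TIB * 1024 / daily_writes_gib
    return total_days / 365

# Hypothetical daily write rates, light to very heavy:
for gib_per_day in (20, 100, 400):
    print(f"{gib_per_day:>4} GiB/day -> {years_of_life(gib_per_day):.1f} years")
```

At around 400 GiB of writes per day, 0.8 PB works out to just over 5 years, which matches the "more than 5 years of heavy use" reading.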

Time to put this topic to bed.
 
Regards

 Leslie
Mr. Leslie Satenstein
Montréal Québec, Canada




From: Martin Steigerwald <martin@lichtvoll.de>
To: debian-laptop@lists.debian.org
Sent: Friday, February 12, 2016 6:49 AM
Subject: Re: Home Directory in SSD

On Friday, 12 February 2016 at 11:48:10 CET, herve wrote:

> On 11/02/2016 22:25, Martin Steigerwald wrote:
> > I don't think it is proven that SSDs fail earlier than HDDs. So far none of
> > the SSDs I use have failed, and one is almost 5 years old, still thinking
> > about itself that it is actually almost new according to SMART data. And the
> > only reason it isn't older is that it is the first SSD I got. I expect it to
> > live on for years to come.
> >
> > So do you have any factual data to prove your claim?
> >
> > So far I didn't see any proof that SSDs fail more often or earlier than
> > hard disks.
> >
> > Thanks,
>
> There can't be any proof or factual data yet because this is a very young
> technology. But what we know is that there is a limited number of write
> cycles. Of course HDDs can fail with mechanical problems SSDs don't have.
> From what I read, SSDs are much, much faster, but, at the time I read,
> HDDs can be more reliable. And the cost of a reliable SSD is not comparable
> with an HDD.


Heise and others tried to destroy SSDs by writing to them. And they found out
that it is not easy to do so.

The Intel SSD 320 is rated for 20 GiB of writes a *day* for a minimum of
5 years of usable life-time. By usable they mean: it remains fast.

Now I ask you: Do you have 20 GiB of writes each day? That is 20 *
365 = 7300 GiB, or about 7.13 TiB per year. Now this Intel SSD is almost
5 years old. That makes almost 7.13 * 5 = 35.6 TiB.
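The arithmetic above can be reproduced in a few lines of Python (the 20 GiB/day and 5-year figures are Intel's rated workload for the SSD 320, as quoted above):

```python
# The Intel SSD 320's rated workload: 20 GiB of writes per day
# over a minimum usable lifetime of 5 years.
gib_per_day = 20
gib_per_year = gib_per_day * 365          # 7300 GiB
tib_per_year = gib_per_year / 1024        # ~7.13 TiB per year
tib_over_5_years = tib_per_year * 5       # ~35.6 TiB over the rated lifetime

print(f"{tib_per_year:.2f} TiB/year, {tib_over_5_years:.1f} TiB over 5 years")
```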

I do have:

241 Host_Writes_32MiB      0x0032  100  100  000    Old_age  Always      -      1000827

That's 1000827 * 32 MiB = about 30 TiB. I admit that's pretty close, and
sometimes I wonder about the Plasma 5 desktop with KDEPIM + Akonadi + Baloo
(and before that Nepomuk) just writing a tiny bit too much onto the SSD. But I
also compile KDE Frameworks + KDEPIM + the Linux kernel from source, and that
adds more I/O.

But my main point is: The SSD wrote about 30 TiB.
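For reference, here is a minimal sketch of that conversion, parsing the SMART attribute line quoted above. The raw value of attribute 241 counts units of 32 MiB on this drive:

```python
# Convert the raw Host_Writes_32MiB SMART value (units of 32 MiB)
# into TiB, using the attribute line from the smartctl output above.
line = "241 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 1000827"

raw_value = int(line.split()[-1])          # 1000827 units of 32 MiB
mib_written = raw_value * 32               # total MiB written by the host
tib_written = mib_written / (1024 * 1024)  # MiB -> TiB

print(f"{tib_written:.1f} TiB written")
```

Note that this unit is drive-specific: other vendors report host writes in sectors or in GiB, so always check the attribute name before converting.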

Yet, according to smartctl -a it thinks:

233 Media_Wearout_Indicator 0x0032  098  098  000    Old_age  Always      -      0

This media wearout indicator, which according to Intel is related to write-
cycle consumption, starts at the value 100. Now it is at 098. So the SSD
considers itself to be almost new.
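A rough linear extrapolation from those two data points (100 down to 098 after about 30 TiB written) gives a ballpark total endurance. This is only a sketch: wear is not strictly proportional to host writes (write amplification varies with workload), and the indicator has a resolution of only 1%:

```python
# Rough linear extrapolation of total write endurance from the
# Media_Wearout_Indicator: it dropped from 100 (new) to 98 after
# about 30 TiB of host writes. Per Intel's PDF, the scale runs
# from 100 down to a minimum of 1, so 99 steps cover the full
# rated erase-cycle budget. Ballpark figure only.
wearout_new, wearout_now = 100, 98
tib_written = 30

wear_consumed = (wearout_new - wearout_now) / (wearout_new - 1)  # fraction used
estimated_endurance_tib = tib_written / wear_consumed

print(f"~{estimated_endurance_tib:.0f} TiB estimated total endurance")
```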

Intel writes in a PDF about the media wearout indicator[1]:

> The value of the E9 SMART attribute is defined by Intel as the media wearout
> indicator of an Intel SSD. When the value reads 1, this indicates that the
> SSD is reaching the predetermined maximum media wearout limit. Intel
> recommends that you replace the SSD at this point or back up the SSD to
> help prevent the loss of data.

And in addition this:

> E9 SMART Attribute
> The E9 SMART attribute reports a
> normalized value of 100 (when the SSD
> is brand new out of the factory) and
> declines to a minimum value of 1.
> The normalized value decreases as the
> NAND erase cycles increase from 0 to the
> maximum-rated cycles. Once the
> normalized value reaches 1, the number
> will not decrease, although it is likely that
> additional wear can be put on the device.
> Figure 1 shows how the value of the E9
> attribute decreases over time in a sample
> usage model with a consistent workload.

