
Re: causes for this?



On Sat, Jun 23, 2018 at 4:15 PM, Gene Heskett <gheskett@shentel.net> wrote:

>> So when you first plug in a flash device, only a few megabytes are
>> actually available for writing, and the controller is busy running
>> self test routines on the rest. Any writes to the untested parts of
>> the flash get queued behind the testing so will be quite slow. Most
>> users would not notice an effect, especially with SD cards in digital
>> cameras because they are powered all the time and only filled
>> gradually.
>
> Sounds plausible, but you'd think they'd want to test it just to stop the
> shipment of bad product.

 pffh, naah.  you can't test flash without actually risking damage
to it.  damage means reduced life, and reduced life means less
confidence from the customer when the capacity turns out to be less
than advertised.  much better to ship untested product and let
amazon and the other sales fronts deal with the complaints and
returns.

 firmware on low-cost (and newly-designed, unusual) SSDs is
extremely dodgy.  one of the drives i tested slowed to an absolute
stand-still after a certain sustained amount of parallel writing
(from different processes).  the article went out on slashdot and i
was given some advice about it: stop the parallel write queueing.
there's a linux kernel parameter somewhere for it... i didn't get to
try it out, unfortunately.
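
 for illustration, a minimal sketch of that kind of tuning, assuming
the parameter in question is the per-device block-layer queue depth
(/sys/block/<dev>/queue/nr_requests).  the post doesn't name the
exact knob, so treat this as one plausible candidate, not the
confirmed fix:

#!/usr/bin/env python3
# reduce the block-layer request queue depth for one device.
# assumption: the unnamed "kernel parameter" is nr_requests in sysfs;
# this is a plausible guess, not confirmed by the original report.
import sys
from pathlib import Path

def set_queue_depth(device: str, depth: int) -> None:
    """write a smaller queue depth to /sys/block/<device>/queue/nr_requests."""
    knob = Path("/sys/block") / device / "queue" / "nr_requests"
    old = knob.read_text().strip()
    knob.write_text(str(depth))  # needs root
    print(f"{knob}: {old} -> {depth}")

if __name__ == "__main__":
    # e.g.  sudo ./queue_depth.py sda 4
    set_queue_depth(sys.argv[1], int(sys.argv[2]))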

 this was after OCZ had been caught switching on a firmware #define
which they had been TOLD under no circumstances to enable because it
causes data corruption (they wanted to be "faster" than the
competition).  the corruption was so bad that in some cases it
overwrote the firmware *on the drive itself*, meaning that the SSD
was no longer... an SSD.

 the only reasonably-priced SSDs i trust now are the intel s35xx
series.  other drives, such as the toshibas, are also supposed to
have supercapacitors for "enhanced power loss protection", but the
supercapacitors simply aren't large enough: sustain a series of
writes above a certain threshold speed, pull the power, and there
isn't enough charge left in the supercapacitors to cover the time it
takes to flush the cached data.
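
 the arithmetic is easy to sanity-check.  a back-of-envelope sketch,
where every number is an illustrative assumption (none are from a
datasheet or from measurements on those drives):

# is the supercap bank big enough to flush the write cache?
# all values below are assumed for illustration only.
cap_f = 0.1                  # supercapacitor bank, farads (assumed)
v_start, v_min = 5.0, 3.0    # usable voltage window, volts (assumed)
power_w = 2.0                # controller + NAND draw while flushing (assumed)
cache_mb = 256.0             # DRAM write cache to flush, MB (assumed)
flush_mbps = 200.0           # sustained NAND program rate, MB/s (assumed)

energy_j = 0.5 * cap_f * (v_start**2 - v_min**2)  # usable stored energy
hold_up_s = energy_j / power_w                    # time the caps can supply
flush_s = cache_mb / flush_mbps                   # time needed to flush

print(f"hold-up {hold_up_s:.2f}s vs flush {flush_s:.2f}s:",
      "ok" if hold_up_s >= flush_s else "data loss")

 with those (made-up) numbers the caps hold the drive up for 0.40s
but the flush needs 1.28s: exactly the failure mode described above.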

 only the intel s35xx series has had the work put into it,
technically, to do the job *at a reasonable price*.  i ran a 4-day
test writing several terabytes of data, the power was randomly pulled
at between 7 and 25 second intervals, for a total of six and a half
THOUSAND times, and *not a single byte* was lost.  which is deeply
impressive.
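
 for anyone wanting to reproduce that kind of test: the power-pulling
side is hardware (a relay on the supply), but the data side can be a
minimal sketch like the one below.  it assumes records are written as
fixed-size 4 KiB blocks, each carrying a sequence number and a sha256
of its payload; the record format is an assumption for illustration,
not the format the original test used:

#!/usr/bin/env python3
# minimal sketch of the data side of a pull-the-power write test.
# assumed record format: 4 KiB blocks, each with a sequence number
# and a sha256 of its payload, so lost or torn writes are detectable.
import hashlib
import os
import struct

RECORD_SIZE = 4096
HEADER = struct.Struct("<Q32s")  # sequence number + sha256 digest

def write_records(path: str, count: int, start_seq: int = 0) -> None:
    """append checksummed records, fsync'ing so each is nominally durable."""
    with open(path, "ab") as f:
        for seq in range(start_seq, start_seq + count):
            payload = os.urandom(RECORD_SIZE - HEADER.size)
            digest = hashlib.sha256(payload).digest()
            f.write(HEADER.pack(seq, digest) + payload)
            f.flush()
            os.fsync(f.fileno())

def verify_records(path: str) -> int:
    """return how many sequential records survived; stop at the first bad one."""
    good = 0
    with open(path, "rb") as f:
        while True:
            rec = f.read(RECORD_SIZE)
            if len(rec) < RECORD_SIZE:
                break  # truncated tail: the write in flight when power died
            seq, digest = HEADER.unpack(rec[:HEADER.size])
            if hashlib.sha256(rec[HEADER.size:]).digest() != digest:
                break  # corrupted record: the drive lost cached data
            good = seq + 1
    return good

 the verify pass re-runs after every power cycle; any gap between
what the drive acknowledged (the fsync returned) and what verify
finds intact is exactly the lost cached data.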

 the s37xx series is by a different team, and it uses the fuckwit
marvell "consumer" chipset that's so troublesome in kingston, crucial
and other SSDs.

 really not being funny or anything: if you care about your data
(*and* your wallet), just don't buy anything other than intel s35xx
series SSDs.  of course, if you have over $10k to spend, there are
plenty of data-centre-quality SSDs.

l.

