
Re: Re: Re: Please help me to evaluate flash/ssd life using vmstat -d




On Fri, 2011-02-04 at 18:30 -0600, R. Ramesh wrote:

> I do not have SSD. I have a USB flash drive - went cheap on this :-)
>
> Regardless of the above, still every write by the kernel has to be
> translated into NAND writes. I have read in more than one place that
> these writes will be in units of erase-block size regardless of the
> kernel IO size.

I could be wrong, but I've always understood that you could write less
than an erase-block size of data, just that the part of NAND written to
can't have previously been used since the last erase. E.g.

- Erase 128kB block
- Write 64kB to first half of block
- Later write 64kB to second half of block
- Need to erase whole 128kB block before it can be written to again.
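If that model is right, here is a toy sketch of it in Python, just to make
the constraint concrete (the 128kB block and 64kB writes mirror the example
above and aren't taken from any real controller):

# Toy model of one NAND erase block: each part can be written only once
# between erases, and an erase always covers the whole block.

ERASE_BLOCK = 128 * 1024   # bytes, as in the example above
HALF        = 64 * 1024

class EraseBlock:
    def __init__(self):
        self.erases = 0
        self.used = set()          # offsets written since the last erase

    def erase(self):
        self.erases += 1
        self.used.clear()

    def write(self, offset):
        if offset in self.used:
            # This part of the block was used since the last erase, so the
            # whole block has to be erased before it can be written again.
            # (A real controller would first have to copy any still-valid
            # data elsewhere - that's where write amplification comes from.)
            self.erase()
        self.used.add(offset)

blk = EraseBlock()
blk.erase()            # start from a freshly erased block
blk.write(0)           # 64kB to the first half  -> no new erase needed
blk.write(HALF)        # 64kB to the second half -> still no new erase
blk.write(0)           # rewrite the first half  -> forces a full-block erase
print(blk.erases)      # 2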

> I am simply trying to map the X kernel writes into Y erase-block
> writes. Note that I do not map it to a specific erase-block, but to
> some erase-block. So I only worry about the block counts, not the
> block addresses. That is why my calculations are based on the total
> number of erase-block writes available (= 16G/512k*10000 = 327680000)
> before the device goes bad. So to me the life of the flash is
> 327680000 erase-block writes. Now how many hours is it? To answer
> this, I need to understand what vmstat -d prints.

If I'm right about partial writes to a NAND erase block, then I don't
think you need to factor in erase-block sizes at all; just use the fact
that, for each byte written, a byte must previously have been erased.
Assuming the flash controller only ever erases whole blocks (it won't
be that efficient, though), you just calculate the total amount of data
that can be written to the disk. In your case, 16GB disk * 10000 erases
= 160TB of data before the disk expires.
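Spelled out as a quick sketch (the only inputs are the 16G size, 512k
erase-block size and 10000-cycle rating already quoted in this thread):

# Write budget implied by the figures quoted in this thread.
DISK_GB      = 16
ERASE_KB     = 512
ERASE_CYCLES = 10000

blocks = DISK_GB * 1024 * 1024 // ERASE_KB     # 32768 erase blocks
print(blocks * ERASE_CYCLES)                   # 327680000 erase-block writes
print(DISK_GB * ERASE_CYCLES, "GB")            # 160000 GB, i.e. roughly 160TB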

In my previous job, when we were thinking about wear on MMC cards due
to demand paging, the calculations showed that if you wrote continuously
to the card at its maximum supported rate it would last several years.
It was at that point that I stopped worrying about flash wear :-)

I also think that you can write less than an erase block. But the calculations become harder if I take that approach, because I need to figure out which kernel writes will require an erase and which will not. I do not know how to come up with that magic. So I wanted to use vmstat and figure it out in a different way.

Before that, let us try your approach above (at the end). It suggests writing at full speed and seeing how long it takes to write 160TB. My flash has an advertised rate of 15MB/sec (sequential, so the best it can do). That gives 160*1024*1024/15 seconds, or about 129.5 days. The arithmetic seems right, but the method is unrealistically conservative.
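For the record, this is how I got the number (a quick sketch; 15MB/sec is only the advertised sequential rate, so any real sustained rate would make the result longer):

# Worst case: write continuously at the advertised sequential rate until
# the ~160TB write budget from above is used up.
TOTAL_MB  = 160 * 1024 * 1024   # same 160TB figure as above, in MB
RATE_MB_S = 15                  # advertised sequential write rate

seconds = TOTAL_MB / RATE_MB_S
print(seconds / 86400.0, "days")   # ~129.5 days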

So what does vmstat -d tell me? Is the number of IOs under the "total" column supposed to be the number of IOs issued to the controller, with each IO covering N contiguous sectors? If not, then my 24 years is the best I can come up with.
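If the columns mean what I think they mean - vmstat -d seems to report the counters from /proc/diskstats, where the sectors columns are counted in 512-byte units - then something like the rough sketch below is what I have in mind. (sdb and the one-hour sample interval are placeholders, and it ignores whatever write amplification happens inside the controller.)

# Rough projection of flash life from the kernel's per-device write counters
# (/proc/diskstats is where vmstat -d gets its numbers; after the device
# name, the 7th field is sectors written, counted in 512-byte units).
import time

DEVICE       = "sdb"             # placeholder: the USB flash drive
BUDGET_BYTES = 160 * 1024**4     # the ~160TB write budget from above
INTERVAL     = 3600              # sample over one hour (placeholder)

def sectors_written(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])     # sectors written since boot
    raise ValueError("device %r not found" % dev)

before = sectors_written(DEVICE)
time.sleep(INTERVAL)
delta_bytes = (sectors_written(DEVICE) - before) * 512

if delta_bytes == 0:
    print("no writes observed in the sample interval")
else:
    rate = delta_bytes / float(INTERVAL)          # bytes/sec as seen by the kernel
    years = BUDGET_BYTES / rate / (86400 * 365)
    print("~%.1f years at the observed write rate" % years)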

Ramesh

