
Re: [Arm-netbook] 9nov11: progress on allwinner A10



+++ Gordan Bobic [2011-11-09 21:52 +0000]:
> On 09/11/2011 21:32, Luke Kenneth Casson Leighton wrote:
> 
> >>I'm not sure what the point is of having internal flash if you have a SD/uSD
> >>slot AND a SATA port. Dropping internal NAND altogether seems like an
> >>obvious choice.

Depends what you want to use the things for. I agree it's not very
useful for servers, but for more embedded stuff on-board NAND is great. 

SD is useful to make the image-installing process nice and simple, but
everything else about it except the price is terrible. 

> >  it is... until you work out the speed (he said... not having actually
> >done it yet!)  SD3.0 UHSv1 (whatever it is) qty 1 is _only_ 4 bits
> >wide, and maxes out at... well, you can check the wikipedia page, off
> >top of head i think 150mbits/sec is about the lot.
> 
> 150Mb/s is plenty. The problem isn't sequential throughput. The
> problem is random write IOPS, and every last SD card I tried,
> including the expensive high-end ones from SanDisk, Lexar and
> Kingston suck when you test their random write IOPS. The expensive
> ones suck less, granted, but they are still pretty appalling
> compared to a 5400rpm laptop disk. The problem isn't in MB/s, it is
> in writes/second.

Yes, SD is terrible, but that's the 4-bit interface and the shitty
controllers. Raw NAND doesn't have to be terrible in the same way.
However, it can be, and often is, if you don't design the interface right.
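
For anyone who wants to put numbers on that, here is a rough sketch of
the sort of test Gordan describes - random 4KiB writes with O_DIRECT so
the page cache doesn't hide the medium. fio is the proper tool for this
job; this minimal standalone C version, and the paths and sizes in it,
are made up purely for illustration:

  /* iops.c - crude random-write IOPS test. It WRITES to the target,
   * so only point it at a scratch file or an expendable card.
   * Build: gcc -O2 -o iops iops.c -lrt    Run: ./iops /dev/sdX
   */
  #define _GNU_SOURCE              /* for O_DIRECT */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>
  #include <unistd.h>

  #define BLK  4096                    /* 4KiB, the usual IOPS block size */
  #define SPAN (256 * 1024 * 1024LL)   /* spread writes over 256MiB */
  #define NOPS 2048                    /* number of random writes */

  int main(int argc, char **argv)
  {
      if (argc != 2) {
          fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
          return 1;
      }
      /* O_DIRECT bypasses the page cache so we time the medium, not RAM */
      int fd = open(argv[1], O_WRONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      void *buf;
      if (posix_memalign(&buf, BLK, BLK))   /* O_DIRECT needs alignment */
          return 1;
      memset(buf, 0xA5, BLK);

      srand(42);
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < NOPS; i++) {
          off_t off = (off_t)(rand() % (SPAN / BLK)) * BLK;
          if (pwrite(fd, buf, BLK, off) != BLK) { perror("pwrite"); return 1; }
      }
      fsync(fd);
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("%d x %dB random writes in %.2fs = %.0f IOPS\n",
             NOPS, BLK, s, NOPS / s);
      return 0;
  }

Run against a laptop disk and then an SD card and it should reproduce
the gap Gordan is talking about.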


> >>Well, on something this low power you'd want SSDs anyway.
> >
> >  yeah, precisely.  the question is: would a bunch of NAND ICs beat the
> >price or performance of off-the-shelf SSDs with SATA-II interfaces,
> >and if so, how the heck do i justify it to the factory?
> 
> I would be amazed if you get anywhere near the multi-thousand IOPS
> offered by the current generation of SSDs using raw NAND and FS
> based wear leveling. I really don't see that happening.

This is an interesting question. An SSD is just talking to a load of
NAND, so what did they do to make it fast? We do know that SD (and
CF) card controllers generally do an appalling job. I don't have much
experience of SSDs, but they do seem to do a much better job on the
whole. Given reasonable bus bandwidth, the question is whether we can
do a better (or at least adequate) job with the CPU + mem + hardware
ECC than they do with the controller in the SSD device (which is no
doubt a little ARM + mem + hardware ECC). And we have the advantage
that we can control the filesystem behaviour too.

It is certainly true that just connecting NAND directly to the CPU
_will_ be really slow, because you have to do all the ECC calculations
in software every time you read or write anything. That's painfully
slow. You need something that does the ECC for you on the fly. I don't
know whether modern NAND chips provide this function; if not, there
has to be a CPLD in there. If we can't get that, then onboard NAND is
going to be _really_ slow, and IMHO not worth doing. If we need help
with this aspect of the design, the YAFFS people will know.
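
To make that cost concrete, here is a toy version of what "ECC in
software" means. It's modelled on the row/column-parity Hamming scheme
used for SLC NAND (3 ECC bytes per 256 data bytes); the parity packing
below is simplified and illustrative rather than the exact on-flash
layout, but the point stands either way: the CPU has to crunch every
byte of every page on every read and write, which is exactly the work
a hardware ECC engine takes away.

  /* soft_ecc.c - toy Hamming-style ECC over 256-byte NAND steps.
   * Simplified from the row/column parity scheme used for SLC NAND;
   * the packing is illustrative, not a real on-flash format.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE 2048   /* typical SLC page = 8 ECC steps of 256 bytes */

  static void ecc256(const uint8_t *data, uint8_t ecc[3])
  {
      uint16_t lp = 0;   /* "line" parities over the byte address bits */
      uint8_t  cp = 0;   /* "column" parity: XOR of all bytes */

      for (int i = 0; i < 256; i++) {
          uint8_t b = data[i];
          cp ^= b;
          uint8_t p = b;                  /* parity of this byte */
          p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;
          if (p & 1)                      /* odd byte: fold its address in */
              for (int a = 0; a < 8; a++)
                  lp ^= (uint16_t)1 << (2 * a + ((i >> a) & 1));
      }
      /* a single flipped bit produces a unique (lp, cp) syndrome,
       * giving the byte address and bit column to correct */
      ecc[0] = lp & 0xff;
      ecc[1] = lp >> 8;
      ecc[2] = cp;
  }

  int main(void)
  {
      static uint8_t page[PAGE];
      for (int i = 0; i < PAGE; i++)
          page[i] = (uint8_t)(i * 7);

      uint8_t ecc[3];
      /* this loop runs for every single page the filesystem touches */
      for (int step = 0; step < PAGE / 256; step++)
          ecc256(page + step * 256, ecc);

      printf("ecc of last step: %02x %02x %02x\n", ecc[0], ecc[1], ecc[2]);
      return 0;
  }

Multiply that inner loop by every page of a rootfs and you can see why
doing it on the CPU hurts.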

> >>>and people could do their own wear-levelling (i hope!), i remember how
> >>>everyone keeps bitching about how these bloody SSDs always get in the
> >>>way with the stupid, stupid assumption that there's going to be a FAT
> >>>or NTFS partition on it.
> >>
> >>Hmm... Doing your own wear leveling? How quaint...

This gives you the opportunity to get it right. Early CF-card NAND
could be killed in 4 boots if formatted as ext3, because the card was
doing the wear-levelling (and assuming FAT). Things have improved
greatly since then, but I'm quite sure that purpose-designed
filesystems (UBIFS, YAFFS, LogFS and Btrfs are the best candidates)
will do a much better job than cheap controllers. They may or may not
do a better job than expensive (SATA SSD) controllers - I don't know
enough to say.

> Yes, but what is the life expectancy of those going to be? 

The 1GB of raw NAND mounted on my wall (in a Balloonboard), running
plain Debian and datalogging every 30 seconds, has been going for
3 years now with no notable degradation. Raw NAND can last a great
deal longer than the feeble lifetimes we get from SD, because we can
wear-level it properly rather than relying on a fixed number of
bad-block substitutes etc.
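
The core of doing it properly is tiny, which is what makes the cheap
controllers so annoying. This toy allocator (entirely hypothetical -
UBI does the real thing, and also migrates long-lived data off
low-count blocks) just keeps an erase counter per eraseblock and
always reuses the least-worn free one:

  /* wear.c - the core idea of erase-count wear-levelling, in miniature.
   * A toy allocator, not UBI.
   */
  #include <stdio.h>

  #define NBLOCKS 1024   /* eraseblocks on a hypothetical small device */

  struct eraseblock {
      unsigned erases;   /* lifetime erase count for this block */
      int free;          /* 1 if available for allocation */
  };
  static struct eraseblock blk[NBLOCKS];

  /* Always reuse the least-worn free block, so wear spreads evenly
   * instead of hammering whichever blocks FAT happens to sit on. */
  static int alloc_least_worn(void)
  {
      int best = -1;
      for (int i = 0; i < NBLOCKS; i++)
          if (blk[i].free && (best < 0 || blk[i].erases < blk[best].erases))
              best = i;
      if (best >= 0) {
          blk[best].free = 0;
          blk[best].erases++;   /* block is erased before reuse */
      }
      return best;
  }

  int main(void)
  {
      for (int i = 0; i < NBLOCKS; i++)
          blk[i].free = 1;

      /* hammer the allocator the way a log or datalogger would */
      for (int n = 0; n < 100000; n++) {
          int b = alloc_least_worn();
          blk[b].free = 1;      /* block reclaimed again by GC */
      }

      unsigned min = blk[0].erases, max = blk[0].erases;
      for (int i = 1; i < NBLOCKS; i++) {
          if (blk[i].erases < min) min = blk[i].erases;
          if (blk[i].erases > max) max = blk[i].erases;
      }
      printf("after 100000 cycles: min erases %u, max %u\n", min, max);
      return 0;
  }

After 100000 cycles every block ends up within one erase of every
other. A controller that assumes FAT concentrates the same traffic on
a handful of blocks, which is how you get the dead-in-4-boots CF cards
mentioned above.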

SLC NAND lasts longer than MLC NAND. High-density NAND is always MLC
these days, although I gather that newer devices let you partition the
device so that one segment works as SLC for long lifetime (the bit
with the rootfs on it) and another works as MLC (with all the data
on it).

> And ultimately, is it going to be substantially cheaper than a
> similarly sized uSD card? If not, I don't see the point in even
> considering it.

It'll be much more expensive than an SD card (or eMMC, I expect -
those things are insanely cheap). It will also be orders of magnitude
more reliable, and if designed properly should be a great deal faster.
I'm not sure that's enough to make it worthwhile over a SATA SSD, but
personally I like real NAND that I can run a non-shit filesystem on at
a reasonable speed, so I do think it's an idea worth considering.

And don't forget that SSDs are fairly expensive too (about 1-2 GBP
per GB).

Wookey
-- 
Principal hats:  Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/

