
Re: [Arm-netbook] 9nov11: progress on allwinner A10

On 09/11/2011 21:32, Luke Kenneth Casson Leighton wrote:

they were planning to make room for up to 4gb NAND flash *but*...
something just occurred to me / my associates: thailand's under water.

the implications of that are that the factories which used to make
low-end IDE drives and pretty much every SATA drive under 80gb no
longer exist... and probably won't ever be rebuilt.  everyone's moving
to SSDs for the low end, and the prices for larger HDDs are rapidly rising.

now, for free software developers we don't give a rat's arse: there's
always room to cut down to emdebian, or use cramfs, or... whatever:
there's always creative ways to make a bit more room, but i've been
connecting the dots a bit from various peoples' input, talked it over
with my associates and we came up with an idea.

I'm not sure what the point is of having internal flash if you have a SD/uSD
slot AND a SATA port. Dropping internal NAND altogether seems like an
obvious choice.

  it is... until you work out the speed (he said... not having actually
done it yet!)  SD3.0 UHSv1 (whatever it is) qty 1 is _only_ 4 bits
wide, and maxes out at... well, you can check the wikipedia page, off
top of head i think 150mbits/sec is about the lot.
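For reference, the SD bus numbers are easy to sanity-check: single-data-rate modes move one bit per clock per data line, so peak throughput is just clock × bus width. (The 208 MHz and 50 MHz clocks below are the standard SDR104 and High Speed mode figures, not anything specific to this board.)

```python
# Back-of-envelope SD bus bandwidth for single-data-rate (SDR) modes:
# one bit per data line per clock, so MB/s = MHz * bus_width / 8.
def sd_bandwidth_mb_s(clock_mhz, bus_width_bits=4):
    """Theoretical peak SD bus throughput in MB/s."""
    return clock_mhz * bus_width_bits / 8

# UHS-I SDR104 mode: 208 MHz on a 4-bit bus -> 104 MB/s peak.
print(sd_bandwidth_mb_s(208))   # 104.0
# Older High Speed mode: 50 MHz, 4-bit -> 25 MB/s.
print(sd_bandwidth_mb_s(50))    # 25.0
```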

150Mb/s is plenty. The problem isn't sequential throughput, it's random write IOPS, and every last SD card I've tried, including the expensive high-end ones from SanDisk, Lexar and Kingston, sucks when you test its random write IOPS. The expensive ones suck less, granted, but they are still pretty appalling compared to a 5400rpm laptop disk. The problem isn't in MB/s, it's in writes/second.
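The distinction is easy to measure for yourself. A rough sketch of a random-write IOPS test: synchronous 4KiB writes at random offsets into a preallocated file. A real benchmark (fio) controls queue depth, caching and alignment properly; this only shows the idea, and the numbers it produces are indicative at best.

```python
import os
import random
import time

def random_write_iops(path, size_mb=64, block=4096, seconds=5):
    """Rough random-write IOPS: synchronous 4KiB writes at random
    block-aligned offsets.  Indicative only -- use fio for real numbers."""
    span = size_mb * 1024 * 1024
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, span)                 # preallocate the test file
    buf = os.urandom(block)
    writes, deadline = 0, time.monotonic() + seconds
    while time.monotonic() < deadline:
        offset = random.randrange(0, span // block) * block
        os.pwrite(fd, buf, offset)
        os.fsync(fd)                       # force each write to the medium
        writes += 1
    os.close(fd)
    return writes / seconds
```

Run it once against a file on the SD card and once against a file on a SATA disk and the writes/second gap the paragraph above describes shows up immediately.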

I'm currently doing a final-stage rebuild of RHEL6 on ARM, and the only reason I got this far with some sanity left is because I am not using SD for scratch space - I am using NBD over the network (SheevaPlugs have Gbit ethernet) pointing at a box with 13 SATA disks running ZFS. It is only when you do away with really crap flash (SD cards, CF cards, on-board raw NAND, etc.) that you finally realize that the performance of the current generation of ARMs is actually quite awesome.

  by contrast, the NAND flash interface is ... *checks*...8, 16 or
32-bit wide, and you _can_ do "non-conflict" simultaneous 8 bands of
DMA which matches with the 8 NAND select lines, obviously, so someone
has thought about this :)  that NAND flash, i seem to remember,
somewhere, it supports up to 2ns NAND, so frickin 'ell that's 500mhz
max, i don't dare look up the prices on those, but 8 of them all doing
concurrent non-conflicting DMA transfers, i believe the point is made?
:)  probably horrifically expensive, too.
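The arithmetic behind that claim is simple enough to write down. Assuming (my numbers, not datasheet figures) an 8-bit bus per chip and taking the 2ns cycle above as 500 MT/s per bus:

```python
# Rough peak-bandwidth math for the "8 concurrent NAND banks" idea.
# Assumed figures: 8-bit bus per chip, 2ns cycle = 500 MT/s per bus.
# Real NAND never sustains bus rate -- program/erase latency dominates.
def nand_peak_mb_s(banks=8, bus_bits=8, mt_per_s=500):
    """Theoretical aggregate NAND bus bandwidth in MB/s."""
    return banks * bus_bits // 8 * mt_per_s

print(nand_peak_mb_s())         # 4000 MB/s aggregate, in theory
print(nand_peak_mb_s(banks=1))  # 500 MB/s for a single chip
```

Even heavily derated for program and erase stalls, that is a different league from a single 4-bit SD bus, which is presumably the point being made.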

All the more reason to just drop it and have uSD for things that are very write-non-intensive. (I have a setup that puts all the usual write-intensive things on tmpfs on machines with flash storage, so I'm not wasting flash lifetime and killing performance as much.) For everything else, there is that SATA port.

alain (williams) asked a very pertinent question, he said, "ok yep
count me in, but how do i make any money from this?" and it put me on
the spot and i went, "um, well... how about you do servers but use
these as low-power ones" and then i realised of course, he's a CentOS
maintainer, hosts some packages, so he's going to try CentOS for ARM
and then well if that works, he'll be the maintainer of the ARM port
of CentOS servers.

I may have beaten him to it. I have a beta spin of RHEL6 port for ARM
running right now with all relevant packages patched as required and built.

  yaay!  well then that's gloody grilliant, it means CentOS is a breeze.  yaay!

Yeah, they'd have to change a grand total of one package to make it CentOS instead of RedSleeve branded (because sleeve is what you wear on your ARMs). ;)

Myself and Donald have spent the last two months doing all the hard work.

Frankly, I doubt CentOS would bother (considering how late they were in getting CentOS 6 out, I doubt they'd get the ARM port up and running in less than a year, even if they poach all of the RS patches). That's why I went with Scientific Linux, and I doubt I will ever look back while there are alternatives.

If anybody is interested in it, drop me a line and I'll notify you when it
is downloadable (probably about a week, two at the most).

then we put two and two together and went, "hang on, these are
effectively blades, why not have a chassis like the ZT-Systems one,
with a gigabit backbone, space for SATA drives, and up to 8
EOMA-PCMCIA-compliant CPU cards, each with 1gb DDR3 RAM and these
Cortex A8s?" it'll all be low-cost, you can get 40gb to 80gb SATA
drives, turn it into a big software RAID or NAS box or a
high-performance (but higher-than-average latency of course)
load-balanced server aka cloud jobbie.

I like this idea - A LOT. I'd certainly be interested in buying some.

at which point i went "oh shit - low-end SATA drives don't bloody
*exist* any more!" :)  [look on ebuyer's site for SATA drives below
£50 - there aren't any].

Well, on something this low power you'd want SSDs anyway.

  yeah, precisely.  the question is: would a bunch of NAND ICs beat the
price or performance of off-the-shelf SSDs with SATA-II interfaces,
and if so, how the heck do i justify it to the factory?

I would be amazed if you got anywhere near the multi-thousand IOPS offered by the current generation of SSDs using raw NAND and FS-based wear leveling. I really don't see that happening.

  btw they're set up - psychologically - for "tablets, netbooks, STBs",
their heads would melt if i made any mention of "servers".  so i
believe it would be sensible to get the motherboard made elsewhere /
somehow-else: a large 2-layer board probably would do the trick, even
using KiCAD, hell there's got to be someone around with the expertise
to lay out some tracks to an 8-port Gigabit Ethernet IC, bung on some
SATA interfaces x8 direct to connectors, connect some I2Csx8 to an
EEPROMx8, it's a cut-paste job!

One might hope so. But the path from hope to reality is one often paved with disappointment...

and people could do their own wear-levelling (i hope!), i remember how
everyone keeps bitching about how these bloody SSDs always get in the
way with the stupid, stupid assumption that there's going to be a FAT
or NTFS partition on it.
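The idea being argued for, doing the wear leveling in the filesystem on raw NAND, boils down to remapping logical blocks onto whichever physical block has been erased the least. A toy sketch of just that remapping step (real implementations like jffs2 or ubifs also handle bad blocks, ECC and garbage collection):

```python
# Toy wear leveling: map logical blocks to the least-erased physical
# block on every write, so repeated writes to one logical block get
# spread across the whole device instead of burning out one erase block.
class WearLeveler:
    def __init__(self, nblocks):
        self.erase_counts = [0] * nblocks   # erases per physical block
        self.mapping = {}                   # logical -> physical

    def write(self, logical):
        self.mapping.pop(logical, None)     # release the old physical block
        used = set(self.mapping.values())
        free = [p for p in range(len(self.erase_counts)) if p not in used]
        phys = min(free, key=lambda p: self.erase_counts[p])
        self.erase_counts[phys] += 1        # NAND is erase-before-write
        self.mapping[logical] = phys
        return phys

wl = WearLeveler(4)
for _ in range(8):
    wl.write(0)                 # hammer a single logical block
print(wl.erase_counts)          # wear is spread evenly: [2, 2, 2, 2]
```

Without the remapping, those eight writes would all land on one physical block and wear it four times as fast.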

Hmm... Doing your own wear leveling? How quaint...

  a keyyboarrrdh? how quaint... :)  sorry, my embedded experience dates
back to the ETRAX 100mhz, the guys who did jffs i think it was...
achh, too long ago to remember.

Welcome to the 21st century. :)

OTOH, I am not sure I see the point of having on-board raw
NAND instead of a uSD slot. Yes, even the fastest SD cards are painfully
slow when it comes to write IOPS, but arguably raw NAND won't be all that
much better for any serious workload.

  ... Chip-selects with independent concurrent DMA x8 @ 500mhz, 32-bit?
:)  don't ask me how much those are, i don't know - anyone got a
handle on the prices and capabilities of NAND ICs?

Yes, but what is the life expectancy of those going to be? I don't like built-in NAND for things like this because servers like this could stay in service for years, and if the flash fails, replacing a SATA SSD is going to be easier than finding a new CPU module. The on-board NAND part might go out of production, while SATA SSDs are unlikely to for another 20 years.

And ultimately, is it going to be substantially cheaper than a similarly sized uSD card? If not, I don't see the point in even considering it.

