
Re: Software vs Hardware RAID 10?



On Sun, Aug 26, 2007 at 01:15:00PM +0000, Douglas A. Tutty wrote:
> If the raid card was PCI-e, would it matter?

Certainly better than plain PCI.  More bandwidth.

> Worst case for the bus would be data to and from the drive's cache.  For
> SATA-II, that's 300 MB/s.  So to write to two drives at once it takes
> 600 MB/s.  Since each lane of a PCI-e has a raw max rate of 250 MB/s,
> each drive needs two lanes to achieve this. So for a 5-port raid card,
> it would need to be x8 (300 MB/s x 5 / 250 MB/s = 6).  Are they
> available?  Or would there be contention within the north/south bridge
> where all the bus lanes meet?
> 
> How do file servers address this?

Well, I wouldn't worry about the cache speed of the drives since it
really doesn't matter most of the time.  It is more reasonable to
expect 50 to 75MB/s sustained on a modern large drive.  So with
software raid1 you would use 100 to 150MB/s of bus bandwidth when
writing and 50 to 75MB/s when reading, while a hardware raid card
would only need 50 to 75MB/s for either.  Of course that assumes the
raid card can even work that fast.  I have in the past seen cases
where a serveraid 4 card running raid 1 was slower than software raid
on the same disks.  The raid card was of course simpler to manage
since the system saw just one disk, which made booting and recovery
much simpler, although software raid is pretty easy to manage too.
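
A rough sketch of that bus math, if it helps (the 60MB/s figure below
is just an assumed sustained rate, not a measurement):

    # Back-of-the-envelope bus traffic for a two-disk mirror.
    sustained = 60              # MB/s per drive, assumed sustained rate

    # Software raid1: the host sends each written block to both drives,
    # so write traffic over the bus doubles; reads come from one drive.
    sw_write_bus = 2 * sustained    # 120 MB/s over the bus
    sw_read_bus = sustained         #  60 MB/s

    # Hardware raid1: the card duplicates the write itself, so the bus
    # carries only one copy in either direction.
    hw_write_bus = sustained        #  60 MB/s
    hw_read_bus = sustained         #  60 MB/s

    print(sw_write_bus, sw_read_bus, hw_write_bus, hw_read_bus)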

You can certainly get x8 cards from Areca, 3ware, and others.  If your
board has a free x8 or x16 slot, you can run such a card in it.
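
If you redo your lane arithmetic with sustained platter rates instead
of the SATA-II cache rate, the picture changes quite a bit.  A sketch,
using the 250MB/s per PCI-e lane and 300MB/s SATA-II numbers from your
post and an assumed 75MB/s sustained rate:

    # PCI-e lanes needed for a 5-port card, sized two ways.
    PCIE_LANE = 250         # MB/s raw per PCI-e 1.x lane
    ports = 5

    cache_rate = 300        # MB/s, SATA-II link/cache speed
    sustained = 75          # MB/s, assumed sustained platter rate

    lanes_for_cache = -(-ports * cache_rate // PCIE_LANE)   # ceil -> 6, so an x8 card
    lanes_for_platter = -(-ports * sustained // PCIE_LANE)  # ceil -> 2, x4 is plenty

    print(lanes_for_cache, lanes_for_platter)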

The speed between the north and south bridge (on systems that have one)
should not be a problem.

--
Len Sorensen


