
RE: NT and Linux



> -----Original Message-----
> From: King Lee [mailto:king@ultrix6.cs.csubak.edu]
> Sent: Thursday, May 28, 1998 11:29 PM
> To: Leandro Guimaraens Faria Corcete Dutra
> Cc: recipient list not shown; @lists.debian.org@artecon
> Subject: Re: NT and Linux
> 
> 
> 
> 
> On Thu, 28 May 1998, Leandro Guimaraens Faria Corcete Dutra wrote:
> 
> > King Lee wrote:
> > >    1.   Has anyone here had any experience or knowledge
> > >         about software raid. How good is it?
> > >    2.   Does Linux  support hardware raid 5

<<<snipped>>>

> The article from www.osnews.com did say that software raid takes
> up CPU cycles, but it did not say how much. It would seem that if
> the CPU must check for errors on each byte from disk, performance
> would take a big hit.  Perhaps the kernel checks for errors only
> if it knows that a disk died, and normally there would not
> be a hit.  Does anyone know about the CPU hit of software raid?
> Why would anyone buy expensive raid hardware if software
> does the same without too much penalty?
> 
> King Lee

First, the CPU not only checks for errors on reads, it must also
calculate the parity on writes.  In RAID5 spanning 4 disks, for
example, 1/4 of the storage is used to hold parity info.  Data is
written in "stripes" of some size, one stripe per disk, in a "round
robin" sequence, and one stripe in each group is parity.  In the above
4-disk example, if a stripe were 16K in size, there would be 48K of
data and 16K of parity.  In RAID5 the parity stripe "rotates" between
disks, so no single disk is loaded with all the parity (this improves
performance over RAID4, I believe, where all parity is on one disk).
If a disk write is less than 48K, the system must read 48K from the
disks, make the needed changes, recalculate parity, and write the
resulting 64K back to the disks.  If the write is exactly 48K, the
read of data can be dispensed with: the system need only calculate the
parity and then write the 64K.
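To make the parity work concrete, here is a small sketch (not the
actual kernel md driver, just an illustration under the 4-disk/16K
assumptions above): the parity stripe is the bytewise XOR of the three
data stripes, which is also why any one lost stripe can be rebuilt
from the surviving three.

```python
STRIPE = 16 * 1024  # 16K stripe size, as in the example above

def parity(stripes):
    """XOR the given stripes together to produce the parity stripe."""
    p = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            p[i] ^= b
    return bytes(p)

# Full-stripe write: 48K of data yields 48K data + 16K parity (64K total).
data = [bytes([d]) * STRIPE for d in (1, 2, 3)]  # three 16K data stripes
p = parity(data)
assert len(p) == STRIPE

# If one disk dies, its stripe is recovered by XORing the survivors:
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

This is the CPU work in question: on every write the XOR must be
computed over the whole stripe group, and after a disk failure every
read of the dead disk's data costs a reconstruction like the one at
the end.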

This means CPU cycles are needed for SW RAID.  I do not know the
impact in terms of actual numbers, but I can say the main issue is
scalability.  With SW RAID, the more RAID subsystems you create, the
greater the impact on CPU performance; with HW RAID there is no
additional impact on the host CPU.  So even if SW RAID for a single
RAID5 subsystem matched HW RAID for the same config, there will
certainly come a break-even point where additional capacity causes CPU
performance degradation in the SW RAID setup.

---
Bob McGowan
i'm:  bob dot mcgowan at artecon dot com


--
To UNSUBSCRIBE, email to debian-user-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
