
RE: NT and Linux



Thanks, Bob McGowan, for your very informative reply.  I gather that:
   1. Software RAID is OK if the problem is I/O bound, i.e., the
	CPU would normally be idle waiting for I/O.
   2. If we have multiple subsystems, we increase the I/O bandwidth,
	and now the CPU may not be able to keep up with the I/O.  In
	general, increasing I/O bandwidth turns an I/O-bound problem
	into a CPU-bound one.
   3. Software RAID 5 may be OK for a workload with lots of reads,
	but runs into trouble if the workload does lots of writes
	(see the little cost sketch below).
   4. Software RAID 5 is more efficient for large files, where
	whole stripes can be written at once.
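
To check my understanding of point 3, here is a quick
back-of-the-envelope sketch in C, counting the bytes moved per write
under the read-modify-write scheme you describe below (4 disks, 16K
stripes; the figures are illustrative, not measurements):

#include <stdio.h>

int main(void)
{
    int ndisks = 4;                          /* disks in the RAID 5 set */
    int stripe_kb = 16;                      /* stripe size in K */
    int data_kb = (ndisks - 1) * stripe_kb;  /* 48K of data per stripe set */
    int total_kb = ndisks * stripe_kb;       /* 64K including parity */

    /* Sub-stripe write: read all the data back first, then rewrite
     * everything, parity included. */
    printf("small write:       read %dK + write %dK\n", data_kb, total_kb);

    /* Full-stripe write: parity comes straight from the new data,
     * so the read is skipped. */
    printf("full-stripe write: read  0K + write %dK\n", total_kb);
    return 0;
}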

Is the above more or less correct?
King


On Mon, 1 Jun 1998, Bob McGowan wrote:

> > 
> > 
> > On Thu, 28 May 1998, Leandro Guimaraens Faria Corcete Dutra wrote:
> > 
> <<<snipped>>>
> 
> > The article from www.osnews.com did say that software RAID takes
> > up CPU cycles, but it did not say how much.  It would seem that
> > if the CPU must check for errors on each byte from disk,
> > performance would take a big hit.  Perhaps the kernel checks for
> > errors only if it knows that a disk died, and normally there
> > would not be a hit.  Does anyone know the CPU cost of software
> > RAID?  Why would anyone buy expensive RAID hardware if software
> > does the same job without too much of a penalty?
> > 
> > King Lee
> 
> First, the CPU not only checks for errors on reading, it must also
> calculate the parity on writes.  In RAID 5 spanning 4 disks, for
> example, 1/4 of the storage is used to hold parity info.  Data is
> written in "stripes" of some size, one stripe per disk, in a "round
> robin" sequence, and one stripe in each set is parity.  In the
> 4-disk example, if a stripe were 16K in size, there would be 48K of
> data and 16K of parity.  In RAID 5 the parity stripe "rotates"
> between disks, so no single disk is loaded with all the parity
> (this improves performance over RAID 4, I believe, where all parity
> is on one disk).  If a disk write is less than 48K, the system must
> read the 48K of data from the disks, make the needed changes,
> recalculate the parity, and write the resulting 64K back to the
> disks.  If the write is exactly 48K, the read can be dispensed
> with; the system need only calculate the parity and write the 64K.
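> 
> To make the parity arithmetic concrete, here is a minimal C sketch
> (illustrative only, using the 4-disk/16K figures from above -- this
> is not the actual kernel md code).  It computes the parity stripe
> as the byte-wise XOR of the data stripes and shows that a lost
> stripe can be rebuilt from the survivors:
> 
> #include <stdio.h>
> #include <string.h>
> 
> #define NDISKS 4              /* 3 data stripes + 1 parity stripe */
> #define STRIPE (16 * 1024)    /* 16K stripes, as in the example */
> 
> unsigned char data[NDISKS - 1][STRIPE];
> unsigned char parity[STRIPE];
> unsigned char rebuilt[STRIPE];
> 
> int main(void)
> {
>     int d, i;
> 
>     /* Fill the data stripes with arbitrary patterns. */
>     for (d = 0; d < NDISKS - 1; d++)
>         memset(data[d], 0x11 * (d + 1), STRIPE);
> 
>     /* Writing: the parity stripe is the XOR of the data stripes.
>      * These loops are where the SW RAID CPU cycles go. */
>     memset(parity, 0, STRIPE);
>     for (d = 0; d < NDISKS - 1; d++)
>         for (i = 0; i < STRIPE; i++)
>             parity[i] ^= data[d][i];
> 
>     /* Recovery: "lose" disk 1, rebuild it from the rest + parity. */
>     memcpy(rebuilt, parity, STRIPE);
>     for (d = 0; d < NDISKS - 1; d++)
>         if (d != 1)
>             for (i = 0; i < STRIPE; i++)
>                 rebuilt[i] ^= data[d][i];
> 
>     printf("rebuild of disk 1: %s\n",
>            memcmp(rebuilt, data[1], STRIPE) == 0 ? "ok" : "FAILED");
>     return 0;
> }
> 
> A hardware controller spends the same XOR cycles, just on its own
> processor instead of the host CPU.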
> 
> This means CPU cycles are needed for SW RAID.  I do not know the
> impact in terms of actual numbers, but I can say the main issue is
> scalability.  In SW RAID, the more RAID subsystems created, the
> greater the impact on CPU performance.  In HW RAID, there is no
> additional impact.  So even if SW RAID for a single RAID 5
> subsystem matched HW RAID for the same config, there will certainly
> come a break-even point where additional capacity causes CPU
> performance degradation in the SW RAID setup.
> 
> ---
> Bob McGowan
> i'm:  bob dot mcgowan at artecon dot com
> 
> 


--
To UNSUBSCRIBE, email to debian-user-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org

