Re: RAID Suggestion for webserver
On Sun, 2002-02-10 at 23:09, Jason Lim wrote:
> > > Okay, as you said, with RAID10 and 4 40G HDs, usable space is 80Gs.
> > >
> > > On the other hand, with RAID5 and 3 40G HDs, usable space is also
> > > 80Gs, with 1 spare HD for rebuilding.
> > >
> > > The question becomes... which provides more performance and is more
> > > reliable?
> > RAID10 will give you the most performance. Not only do you have 4 disks
> > working for you all the time, instead of 3 with RAID5, you (or more
> > accurate: your CPU) also don't have to calculate the parity which is
> > used by RAID5.
> The CPU won't be handling this... the 3ware RAID card (hardware) will
> perform the parity calculations, so RAID 5 won't cause that type of
> slowdown due to additional CPU utilization.
Mmm, this is one of the rare IDE RAID cards that are true hardware RAID.
But you still have only 3 disks doing the work.
How about the number of disk-IO operations needed to perform a read or
a write?
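To illustrate the write cost: RAID5 parity is just the XOR of the data
blocks in a stripe, so a small write is a read-modify-write cycle (read
old data and old parity, write new data and new parity). A toy Python
sketch of the idea; the block values and sizes are made up:

```python
# RAID5 stores, per stripe, the XOR of the data blocks as parity.
def parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

# Toy stripe on a 3-disk array: 2 data blocks + 1 parity block.
data = [0b10110010, 0b01101100]
p = parity(data)

# Small write: new parity = old parity ^ old data ^ new data,
# so the card does 2 reads + 2 writes instead of RAID10's 2 writes.
new_block = 0b11111111
new_p = p ^ data[0] ^ new_block
data[0] = new_block

assert new_p == parity(data)  # shortcut matches full recomputation
```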
> > Both will survive a 1 disk crash with no problems and both will appear
> > as a RAID0 array when running in degraded mode. However, the reliability
> > is different when a second disk fails. In RAID5 with spare you are out
> > of luck when a second disk fails while the spare is rebuilding.
> > With RAID10 and 1 failed disk, the only disk that is still safe to
> > fail is the one in the same stripe as the already-failed disk.
> > I'm not sure if the raid card supports a stripe of two mirrors. This
> > setup will survive a 2 disk failure.
> With the RAID5 with 3 disks and 1 spare, the only time the array would be
> vulnerable would be during reconstruction onto the spare disk. Once that is
> done, the array will be fully restored, and could survive another failure
> in any disk.
> As you mentioned, the RAID10 with 4 disks could also survive 2 failures,
> however (again as you mentioned) the 2nd disk that fails cannot be part
> of the same stripe as the original failed disk.
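To make those second-failure odds concrete, one can enumerate all
two-disk failure pairs on a 4-disk setup. The disk numbering and layout
below are my own assumptions: a stripe of two mirrors (RAID1+0) dies
only when both members of one mirror pair are gone, while a mirror of
two stripes (RAID0+1, which is what we've been discussing) survives a
second failure only when it hits the already-dead stripe:

```python
from itertools import combinations

DISKS = [0, 1, 2, 3]
MIRROR_PAIRS = [{0, 1}, {2, 3}]   # RAID1+0: stripe across two mirror pairs
STRIPES      = [{0, 1}, {2, 3}]   # RAID0+1: mirror of two stripes

def survives_raid10(failed):
    # Dead only if some mirror pair lost both members.
    return all(not pair <= failed for pair in MIRROR_PAIRS)

def survives_raid01(failed):
    # Alive as long as one whole stripe is untouched.
    return any(not (stripe & failed) for stripe in STRIPES)

two_disk_failures = [set(c) for c in combinations(DISKS, 2)]
raid10_ok = sum(survives_raid10(f) for f in two_disk_failures)
raid01_ok = sum(survives_raid01(f) for f in two_disk_failures)
print(raid10_ok, "of", len(two_disk_failures))  # RAID1+0: 4 of 6
print(raid01_ok, "of", len(two_disk_failures))  # RAID0+1: 2 of 6
```

So a stripe of two mirrors survives 4 of the 6 possible second
failures, versus 2 of 6 for a mirror of two stripes, which is why it
matters whether the card supports that layout.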
> Performance while the RAID5 array is degraded won't be too bad due to the
> fact this is hardware RAID and not software RAID, and the hardware's
> dedicated RAID chips will handle the computations.
In degraded mode there are no computations to be made. It's just the
same as RAID0. The card just reads whatever is on the two disks and only
sorts the blocks into the right order. The same goes for the RAID10
setup: although it still has 3 working disks, one of those 3 holds only
half of a stripe and can't be used by the array, so the degraded RAID10
also behaves like a RAID0.
The hardware's chips will be used for rebuilding the array onto the
spare disk. How much this impacts normal operation depends on how fast
you want the rebuild to take place. Even with dedicated chips the
rebuild will degrade normal operation, because the card needs to read
all the information on the 2 surviving disks in order to calculate what
to put on the 3rd.
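The rebuild itself is the same XOR trick run over the whole disk: for
every stripe, read the surviving blocks, XOR them, and write the result
to the spare. A rough sketch with invented disk contents:

```python
# Rebuilding the failed member of a 3-disk RAID5 onto a spare:
# per stripe, the missing block is the XOR of the two survivors.
import os

def rebuild(surviving_disks):
    spare = []
    for blocks in zip(*surviving_disks):
        missing = 0
        for b in blocks:
            missing ^= b
        spare.append(missing)
    return spare

# Toy 3-disk array: per stripe, disk2 holds parity = d0 ^ d1.
disk0 = [os.urandom(1)[0] for _ in range(8)]
disk1 = [os.urandom(1)[0] for _ in range(8)]
disk2 = [a ^ b for a, b in zip(disk0, disk1)]

# disk1 "fails"; its contents come back from the other two.
assert rebuild([disk0, disk2]) == disk1
```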
> It seems RAID5 would be a safer solution as long as another failure does
> not occur during the reconstruction onto the spare. Hum... how long would
> the reconstruction take for a 40G hd? I'm guessing 30-40 minutes? Would
> that be about right?
I'm not sure how fast today's disks really are, but remember that the
rebuild time also depends on how much normal work the array has to do (I
assume the card has a setting that allows you to tune how the card
divides the rebuilding work versus the normal work).
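Back-of-envelope, with made-up but plausible numbers: a 40G disk
rebuilt at a sustained 20 MB/s takes about half an hour on an otherwise
idle array; throttle the rebuild to 5 MB/s so the array can keep
serving requests and it stretches past two hours. A quick sketch (only
the 40G figure comes from this thread, the throughput values are my
assumptions):

```python
# Rough rebuild-time estimate: bytes to write / rebuild throughput.
GB = 1000 ** 3

def rebuild_minutes(disk_bytes, rebuild_mb_per_s):
    return disk_bytes / (rebuild_mb_per_s * 1000 ** 2) / 60

print(round(rebuild_minutes(40 * GB, 20)))  # idle array: ~33 minutes
print(round(rebuild_minutes(40 * GB, 5)))   # throttled: ~133 minutes
```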