
Re: Software VS Hardware Raid



On Mon, 28 Jan 2002, Jason Lim wrote:

> detected the drive, but during the part where "lilo: " is supposed to come
> up, nothing did. The disk kept grinding and grinding, and eventually asked
> for a floppy. I was hoping that the 2nd, working drive in the raid array
> would kick in any moment, but that didn't happen. Everything stalled right
> there. 
 
  Lilo would have to know about your RAID setup (and out of the box it
  doesn't); that's why it's not recommended to put the root partition on
  software RAID.
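
  If you do want to try it, the boot loader has to be told about the array
  explicitly. A minimal sketch of an /etc/lilo.conf for a RAID-1 root,
  assuming a LILO version with RAID-1 support (the raid-extra-boot option)
  and /dev/md0 built from /dev/hda1 and /dev/hdc1:

    # sketch only -- device names and kernel path assumed
    boot=/dev/md0               # the array holding the boot files
    raid-extra-boot=mbr-only    # write the boot record to each member's MBR
    root=/dev/md0
    image=/boot/vmlinuz
        label=linux
        read-only

  Run lilo once after editing and both member disks should end up bootable,
  so losing either one still leaves you a "LILO" prompt.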

  I'd say use software RAID on data partitions only, and keep a backup of
  your root partition somewhere, so that when the disk holding it fails you
  just swap in a new one and restore the backup. When a disk holding a data
  partition (on software RAID) fails, I assume it'd work as advertised.
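
  A sketch of that backup-and-restore dance, with paths and device names
  assumed:

    # dump the root filesystem onto the (RAIDed) data partition
    tar --one-file-system -czpf /data/backup/root.tar.gz /

    # after swapping in a new disk: partition it, mkfs, then restore
    mount /dev/hda1 /mnt
    tar -xzpf /data/backup/root.tar.gz -C /mnt
    lilo -r /mnt    # reinstall the boot loader relative to the new root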

  You can't get 24x7 high availability with software RAID alone; there's
  always some downtime involved, or at least a higher risk of downtime than
  with hardware RAID.

> If the bad drive is put in by itself, after a while the disk is
> failed and it tries to boot by floppy. 

   Does LILO ever appear, or does the BIOS go straight to asking for a
   floppy? If "LILO Loading Linux..." never shows up, that disk is never
   going to make it as a root partition holder.


> cable btw. The BIOS had the usual settings allowing me to set the boot
> order (Floppy first, CDrom next, hard disk 0, then network (no, i can't
> put hard disk 1, I wish i could), and finally had "Boot other devices" set
> to yes.

    What happens if you plug the faulty drive in as the second disk instead
    of the first, so that LILO boots from the good one?

> 
> My question: if this was hardware RAID 1... would this have happened?
> Would the hardware RAID controller recognise the problem, and only stop
> briefly, then try the second disk automatically and transparently?

 In my experience (ICP-Vortex Fibre Channel and SCSI): yes, the hardware RAID
 does spot the faulty drive and switches to the sane one immediately. The OS
 is alerted that a drive in the array is at fault, but apart from that
 everything runs smoothly.

 Depending on your syslog configuration, it whines that you should replace
 the faulty drive with a good one until you do.
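
 On Linux those complaints usually land in the kernel log, so even a crude
 watch will catch them; a one-liner, log path assumed:

    tail -f /var/log/kern.log | grep -i raid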


> Case 2)
> I simulated errors by connecting a flaky IDE cable to one of the drives. I
> was hoping the software RAID would either compensate by doing most of its
> reading from the good drive (with a good cable) or label the flaky
> cable/drive as bad, but instead it started slowing down, writing to the
> array was taking much longer, and strange errors started occurring during
> writing.
> 
> My question: would hardware raid have handled this situation any better?


   Again, in my experience: definitely yes.
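
   For comparison, the software side isn't completely helpless either: you
   can kick a flaky member out of an md array by hand. A sketch, assuming
   the raidtools package and /dev/hdc1 as the bad member of /dev/md0:

     # mark the flaky member failed, then pull it from the array
     raidsetfaulty /dev/md0 /dev/hdc1
     raidhotremove /dev/md0 /dev/hdc1

     # after replacing the disk/cable, add it back and watch the resync
     raidhotadd /dev/md0 /dev/hdc1
     cat /proc/mdstat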

> 
> And as for hardware IDE RAID, which is better... Promise or HighPoint?
> Promise seems to be better supported in the kernel, but I'm not so sure.
> What happens when (for example) a disk in the array fails? How do you
> control the hardware raid so you can control a rebuild? And for Promise,
> HighPoint, etc., what are the devices going to be called (/dev/hde? or
> maybe /dev/raid/array1?)

Dunno about IDE RAID, but with ICP-Vortex (both FC and SCSI) you get a nifty
little console application (icpcon) that lets you manage every feature of the
hardware: add/remove/modify arrays, change the RAID level of an array, monitor
I/O and cache on physical/host/array drives, rescan the bus for new
disks/devices, change cluster/non-shared settings, and so on. icpcon does
everything the controller BIOS allows, with the same 'interface' but in a
shell.

I assume Promise or whoever ships an application that lets you manage the
arrays, or at least monitor them... but then again, if you don't have
hot-swap capability there isn't much you can change once the system is up
and running.
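
As for the device names Jason asked about: I haven't run these cards myself,
but as far as I know the 2.4 kernel drives them either as plain IDE
controllers (the disks just show up as extra IDE devices) or through the
ataraid drivers (pdcraid for Promise, hptraid for HighPoint), which expose
the BIOS-defined array as a block device of its own. A sketch, names assumed:

    # as a plain controller: member disks appear as ordinary IDE devices
    ls /dev/hde /dev/hdg

    # with the 2.4 ataraid drivers loaded: the array itself appears
    ls /dev/ataraid/        # d0, d0p1, d0p2, ...
    dmesg | grep -i raid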

Although I think I saw some IDE RAID boxes with hot-swap bays at Comdex, I
don't know how widely available those are, as opposed to SCA hot-swap SCSI,
which seems to be everywhere now.


              Jose


