
Re: IDE ATA RAID under Linux



jennyw said:

> Anyway, anyone care to share experiences (over a reasonable amount of
> time with significant use) of various RAID hardware solutions?  Trying  to
> figure out which one to buy ...

I have had a fair amount of experience with the 3ware 6800-series RAID
cards (one 4-port and three 8-port cards), running under Debian 2.2
(later 3.0) with a fairly heavily patched 2.2.19 kernel and reiserfs
as the filesystem.

They worked OK, though any time a disk failed the system would kernel
panic (reiserfs would panic). And I had a lot of disk failures, probably
10 over the course of a year, most of which were IBM 75GXP disks. I later
migrated to Maxtor disks; the failure rate dropped quite a bit, but I
still had a disk die here and there.

3ware support was very friendly and responsive for the most part. If
I needed hardware IDE RAID again I would go with 3ware, though for
more important applications I would still recommend SCSI 10x over
IDE. All of the systems I had with 3ware controllers were very
non-critical; it wasn't an issue if the system crashed when a disk
died (though it was a pain in the ass). I STRONGLY recommend putting
the disks in at least certified cold-swap drive bays (my systems didn't
have enough 5.25" bays to give each disk its own bay). Disks will fail,
and it took me probably an hour and a half to two hours to replace a
disk on my systems -- a really annoying task. 3ware has fancy
hot-swap drive bays as well; I'm not sure on pricing (my company at the
time wouldn't go for them). 3ware techs told me that many of the
cheap IDE "hotswap" bays often cause problems, depending on the
type of cabling used (the biggest problem being that the drive would
lose power for an instant).

Another option, perhaps better, is going with IDE disks behind an
IDE->SCSI adapter and using a SCSI RAID card. That's probably a good
balance in price between the two extremes (all-IDE RAID vs.
all-SCSI RAID). I've had a few disks fail in SCSI RAID systems
without a glitch, though I've never used an IDE->SCSI adapter myself.

Software RAID also works well, but depending on the number of disks
you need, you may still need a hardware RAID card (e.g. 8 drives on
one 3ware card using one IRQ).
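For what it's worth, the software-RAID route looks something like the
sketch below using mdadm (the device names are placeholders -- adjust
for your own hardware; older setups used raidtools and /etc/raidtab
instead):

```shell
# Sketch only: /dev/sdb1..sdg1 are example partitions, not real paths.
# Create a 5-partition RAID 5 array (4 active + 1 hot spare):
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Watch array and rebuild status:
cat /proc/mdstat

# When a disk dies, mark it failed, pull it, and add the replacement:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdg1
```

With a hot spare configured, the rebuild onto the spare starts
automatically when a member disk fails, which shrinks the window where
a second failure can take out the array.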

If you go with IDE RAID I also recommend keeping at least 1 or
2 disks on site as replacements (on top of any hot spares, if
you're using them). Getting a disk RMA'd can take some time: with
IBM it last took me maybe 3 weeks, with Maxtor maybe 1 or 2 weeks. And
yes, there was one time when I had 2 disks die within a 48-hour
period on a 5-disk RAID 5 system, taking the whole array with
it (the first disk died on a Friday afternoon after 6 PM, the second
died Sunday morning). Luckily, as above, this wasn't a big
issue other than being a pain in the ass for the IT people to rebuild
the system. The systems were backup systems, storing data from
the production systems, so nothing was lost.
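A double failure inside a replacement window isn't as unlikely as it
sounds once the per-disk failure rate gets high. Here's a rough
back-of-envelope estimate; the 20%/year per-disk rate is an assumption
loosely matching the "10 failures in a year" experience above, not a
measured figure:

```python
# Rough estimate of losing a 5-disk RAID 5 array to a second failure
# before the first dead disk is replaced.
HOURS_PER_YEAR = 365 * 24

afr = 0.20            # assumed annual failure rate per disk (not measured)
disks = 5
window_hours = 48     # assumed time until the dead disk is replaced

# Approximate per-hour failure probability for one disk.
p_hour = afr / HOURS_PER_YEAR

# After the first disk dies, 4 disks remain; any second failure
# inside the replacement window kills the array.
p_second = 1 - (1 - p_hour) ** ((disks - 1) * window_hours)

print(f"chance of losing the array in a {window_hours}h window: {p_second:.2%}")
```

Under those assumptions it works out to a bit under half a percent per
incident, which adds up fast if you're seeing ten failures a year, and
it's another argument for shortening the replacement window with hot
spares and on-site cold spares.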

I was at an interview a few weeks ago, and the company there used
3ware for some of their stuff. They commented that the system
wasn't compatible with some Western Digital drives without
changing some mode on the disk ahead of time (sleep mode or
something).

nate




