Re: hard- or software-raid?
On Fri, 2003-01-24 at 06:43, thing wrote:
> Tinus Nijmeijers wrote:
> >I'm building a server that needs about 200G of harddisk space and the
> >data has to be safe. If I need to replace a faulty hd and get downtime
> >that's fine. Speed is not an issue.
> >The system will boot off a scsi HD, I have a backup boot disk available.
> >Disks (couple of 120G IDE or something) will be in 1+0, raid5 or raid6
> >(does software raid do raid6?)
> >Is there any reason to use hardware-raid over software-raid in this case?
> OK, for booting I suggest getting a uw or better (i.e. u2w) scsi hardware
> raid controller (AMI (American Megatrends) seem linux friendly) and 2 x
> 4 gig ultra wide (uw) or bigger disks (raid1 ~ mirrored). You don't need
> bigger, but bigger disks will be younger and hopefully last longer. An
> improvement would be 3 x 4 gig disks, with one as a hot spare to the
> first 2. I wouldn't go older/smaller than 4 gig as 2 gig disks are
> getting very old and are slower. This will be a robust boot system;
> software raid is no good for booting.
> I've never heard of raid 6 (commercially anyway).
> Since speed is not your issue I suggest raid 5 using software raid for
> the data. I've found it no worse performance-wise than hardware raid (on
> ide anyway) and way cheaper. I've pulled a disk out of a software raid 5
> setup and re-inserted it and the system recovered fine (that was scsi,
> mind). These days CPUs are not usually the bottleneck in server
> performance, so the penalty of the raid 5 parity calculations on the CPU
> seems negligible.
> Raid 5 needs at least 3 disks, as one disk's worth of space is lost to
> parity, so your options for 200 gig are 3 x 100 or 120 gig drives giving
> you 200 ~ 240 gig of usable space, or 4 x 80 gig disks also giving you
> 240 gig of raid 5.
> 3 disks is good because you can then, if you so choose, add an extra
> disk as a hot spare; that way if one dies you rebuild online and swap
> the dead one out at your convenience. That means paying for an extra
> disk, mind....
> If you want to improve performance, only put 1 ide disk per channel.
> This means an extra ide controller (i.e. 2 controllers, assuming 2
> channels per controller), but there should be a speed improvement.
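(Just to check the capacity sums in the quoted post, here's a quick sketch of
usable space for the common raid levels; the levels and disk sizes are the
ones discussed above, the function name is my own:)

```python
def usable_gb(level, disks, size_gb):
    """Usable capacity in gig for common RAID levels (hot spares not counted)."""
    if level == "raid1":
        return size_gb                 # all disks mirror one disk's data
    if level == "raid5":
        return (disks - 1) * size_gb   # one disk's worth lost to parity
    if level == "raid6":
        return (disks - 2) * size_gb   # two disks' worth lost to parity
    if level == "raid10":
        return (disks // 2) * size_gb  # mirrored pairs, striped
    raise ValueError(level)

# The 200-gig options from the post:
print(usable_gb("raid5", 3, 100))  # 200
print(usable_gb("raid5", 3, 120))  # 240
print(usable_gb("raid5", 4, 80))   # 240
```

So 3 x 100/120 gig or 4 x 80 gig both clear the 200 gig target, as thing says.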
My question kind of stands: if the only thing I ask of it is for the data
to be safe (no speed or "no downtime!" issues), is there any reason to
use hardware over software raid?
I do not care if I have to take the server down for an hour (or 2, or 3)
to replace a disk, be it a raid disk or the boot disk. I have plenty of
time; I could even run down to the store, get a new boot disk, install
debian and be up and running in 2 hours. No problem.
The ONLY thing that matters is that the data is safe. If 2 of the
raid-disks fail I need the data to be safe.
(It is, of course, a budget thing. In case of fire I have tapes to get
the data back, and there's downtime involved there, so I do care about
downtime. It's just that with disks being as cheap as they are, I was
thinking that a software raid is soooo cheap to build that maybe it's
worth the extra cash for the 3 extra disks that I need to buy.)
scenario 1: boot off scsi, data on a 200G IDE, tape backup
scenario 2: boot off scsi, data on 4x80G IDE (software-raid5), tape
backup = + EUR 100
scenario 3: boot off scsi, data on 4x80G IDE (hardware-raid5), tape
backup = + EUR 500
scenario 4: boot off scsi, data on 4x80G SCSI (hardware-raid5), tape
backup = + EUR 2200
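(For my own comparison, the surcharge per usable gig in scenarios 2-4; prices
are the ones I quoted above, and 4x80G raid5 gives 3 x 80 = 240G usable:)

```python
# Extra cost in EUR over scenario 1 (single 200G IDE disk), from my estimates above.
scenarios = {
    "scenario 2: sw-raid5, IDE": 100,
    "scenario 3: hw-raid5, IDE": 500,
    "scenario 4: hw-raid5, SCSI": 2200,
}

usable_gb = (4 - 1) * 80  # 4x80G raid5: one disk's worth lost to parity

for name, extra_eur in scenarios.items():
    print(f"{name}: +{extra_eur} EUR for {usable_gb}G redundant storage, "
          f"{extra_eur / usable_gb:.2f} EUR/G")
```

Scenario 2 comes out well under half a euro per redundant gig, which is what I
mean by "close to nothing".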
so for close to nothing (EUR 100) extra I get software raid.
Is hardware raid "safer"?
(I do not think it is, I'm just waiting for someone to tell me I'm being
naive.)