
Re: hard- or software-raid?

On Fri, 24 Jan 2003 06:43, thing wrote:
> OK, For booting, I suggest getting a uw or better ie (u2w) scsi hardware
> raid controller (AMI megatrends seem linux friendly)  and 2 x 4gig ultra

What is the benefit of u2w SCSI for 4G disks?  4G disks are terribly slow; any 
sort of interface will drive them at maximum speed.

> and are slower. This will be a robust boot system, software raid is not
> any good for booting.

The only potential problem with booting from software RAID is if the primary 
disk dies.  If you have hardware hot-swap then that's only a minor issue 
(just unplug the dead disk before your next boot).  Hardware hot-swap with 
software RAID beats hardware RAID with bolted-in disks every time.
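To illustrate the recovery path with Linux software RAID, here is a sketch of replacing a dead member with mdadm (device names /dev/md0 and /dev/sdb1 are assumptions, adjust for your setup):

```shell
# Assumed device names for illustration only.
# Mark the dead disk failed and remove it from the array:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# After hot-swapping in a replacement and partitioning it the same way,
# add it back and the kernel rebuilds the mirror in the background:
mdadm /dev/md0 --add /dev/sdb1
# Watch rebuild progress:
cat /proc/mdstat
```

These are administrative commands that need real array hardware, so treat them as a recipe rather than something to paste blindly.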

> Since speed is not your issue I suggest Raid 5 using software raid for
> the data. Ive found it no worse performance wise than hardware raid (on

If speed is not the issue then two disks in RAID-1 is the way to go.  The 
fewer disks you have, the fewer things there are to break.
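For reference, a two-disk software RAID-1 is only a couple of commands with mdadm (device names and the ext3 filesystem are assumptions):

```shell
# Assumed partitions /dev/sda1 and /dev/sdb1, one on each disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Put a filesystem on the array:
mkfs.ext3 /dev/md0
# Record the array so it is assembled automatically at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```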

Apart from the issue of battery-backed write-back caches for RAID-5, software 
RAID will deliver performance at least equal to any hardware RAID on the 
market, and much better than the vast majority of hardware RAID devices.  
Most hardware RAID is cheap and nasty and performs accordingly.  If you spend 
less than $1000 on your hardware RAID then it will probably suck.

> ide anyway) and way cheaper. Ive pulled a disk out of a software raid 5
> setup and re-inserted it and the system recovered fine (that was scsi
> mind).  These days CPU's are not usually the bottleneck in server
> performamce so the penalty of the raid 5 calculations on the CPU seems
> insignificant.

The issue is the performance of system buses on cheaper machines.  If you have 
a cheap desktop machine then you probably can't run two new disks at maximum 
speed at the same time, regardless of how you connect them.  If you have a 
server class machine then the only bottleneck should be the disks and the 
RAID controller.

> If you want to improve performance only put 1 ide disk per channel, this
> means an extra ide controller (ie 2, assuming 2 channels per
> controller), but there should be a speed improvement.

The speed improvement probably isn't great.  Last time I benchmarked this on 
an Athlon 800 with ATA-66 drives I found very little difference between two 
IDE drives on the same channel and two drives on different channels.  The main 
difference was between running only one drive, and running two drives on 
different channels (putting the drives on the same channel did not lose much 
extra performance).  The main bottleneck was on the motherboard.
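You can repeat that test yourself with Bonnie++.  A sketch (mount points, file size and user are assumptions): run each drive alone, then both concurrently, and compare the single-drive throughput with the concurrent throughput:

```shell
# Assumed mount points /mnt/disk1 and /mnt/disk2, one per drive.
# Baseline: each drive on its own.
bonnie++ -d /mnt/disk1 -s 2048 -u nobody
bonnie++ -d /mnt/disk2 -s 2048 -u nobody
# Both drives at once; if per-drive throughput drops well below the
# baseline, the shared channel or the motherboard bus is the bottleneck:
bonnie++ -d /mnt/disk1 -s 2048 -u nobody &
bonnie++ -d /mnt/disk2 -s 2048 -u nobody &
wait
```

Repeat the concurrent run with both drives on one channel and with one drive per channel to measure the difference discussed above.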

http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page
