
Re: RAID & Hard disk performance



God... this is turning into a war. I think this will be my last post on the subject.

When running RAID, MTBF is not such a big deal... unless you have several racks of servers in 2U cases, 40-50 servers. Would you rather drop 1 drive every month or 1 drive every year? In a single machine this isn't too much of a problem, but as numbers increase you spend more and more time in the server room replacing drives and rebuilding arrays.
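To make that arithmetic concrete, here is a minimal sketch (in Python; the MTBF figures are placeholder assumptions, not vendor numbers) of how expected failures scale with drive count:

  # Expected failures scale linearly with drive count, assuming
  # independent drives and a constant failure rate of 1/MTBF
  # (a simplification).
  HOURS_PER_MONTH = 730

  def failures_per_month(drives, mtbf_hours):
      return drives * HOURS_PER_MONTH / mtbf_hours

  # Placeholder MTBF values, for illustration only.
  for drives in (4, 200):                # one server vs ~50 servers
      for mtbf in (500_000, 1_200_000):  # assumed IDE- vs SCSI-class figures
          print(drives, mtbf, round(failures_per_month(drives, mtbf), 3))

At 200 drives the difference between those two MTBF figures is roughly one failure every three or four months versus one every eight or so, which is exactly the trip-to-the-server-room problem above.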

At 03:09 PM 11/6/01 +0100, you wrote:
On Tue, 6 Nov 2001 07:26, Dave Watkins wrote:
> Not to start a holy war, but there are real reasons to use SCSI.
>
> The big ones are
>
> Much larger MTBF,

Mean Time Between Failures is not such a big deal when you run RAID, as long
as you don't have two drives fail at the same time.  Cheaper IDE disks make
RAID-10 more viable; RAID-10 allows two disks to fail at the same time as
long as they aren't a matched pair.  So a RAID-10 of IDE disks should give
you more safety than a RAID-5 of SCSI.
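To sketch why that holds, the snippet below enumerates every two-disk failure in a hypothetical 4-disk RAID-10 of two mirrored pairs (disk numbering is mine) and counts the survivable ones:

  from itertools import combinations

  # Hypothetical 4-disk RAID-10: disks 0+1 form one mirror pair, 2+3 the other.
  pairs = [(0, 1), (2, 3)]

  def survives(failed):
      # The array dies only if both members of some mirror pair fail.
      return not any(a in failed and b in failed for a, b in pairs)

  doubles = list(combinations(range(4), 2))
  ok = sum(survives(set(f)) for f in doubles)
  print(ok, "of", len(doubles))   # 4 of 6 double failures survived

A 4-disk RAID-5 survives 0 of those 6, so the cheap-disks-in-RAID-10 trade-off holds.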

> faster access times due to higher spindle speeds, better

When doing some tests on a Mylex DAC 960 controller in a dual P3-800 machine,
I found speed severely limited by the DAC.  Bulk IO performance of the
10K rpm Ultra2 SCSI drives was much lower than that of ATA-66 drives.

That was a problem with your controller then, not with the technology or the bus system.

For example, head over to Seagate's web site:

http://www.seagate.com/support/kb/presales/performance.html

http://www.seagate.com/docs/pdf/training/SG_SCSI.pdf

You also mention on your site that a typical SCSI drive can only sustain 30MB/sec, so a single drive cannot fill a SCSI bus running at 160MB/sec. The difference between SCSI and IDE is that SCSI can have multiple transfers in flight at once, so a 6-drive system could easily fill the bus. In fact, with many more drives/channels you start filling the PCI bus instead and have to start looking at PCI 64/66.

IDE, on the other hand, cannot have multiple transfers outstanding at once.
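Back-of-the-envelope numbers for the bus arithmetic above (the 30MB/sec sustained figure is the one quoted; the PCI figures are the standard 32-bit/33MHz and 64-bit/66MHz rates):

  # Bus-saturation arithmetic from the figures above.
  DRIVE = 30        # MB/s sustained per drive (figure quoted above)
  SCSI_BUS = 160    # MB/s, Ultra160 SCSI
  PCI_32_33 = 133   # MB/s, plain 32-bit/33MHz PCI
  PCI_64_66 = 533   # MB/s, 64-bit/66MHz PCI

  print(SCSI_BUS / DRIVE)            # ~5.3 drives saturate one SCSI bus
  print(2 * SCSI_BUS > PCI_32_33)    # True: two full channels swamp plain PCI
  print(2 * SCSI_BUS > PCI_64_66)    # False: PCI 64/66 has headroom

So six busy drives fill the SCSI bus, and a second full channel already forces the move to PCI 64/66.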

You'll also find that SCSI and IDE sizes are not identical. SCSI drives have approx. 9GB per platter and IDE about 10GB, so you find IDE drives at 20.4GB, 30.6GB and so on, while SCSI drives come in 18GB, 36GB and so on.

> bus management (eg 2 drives can perform tasks at once unlike IDE), Hot

See http://www.coker.com.au/~russell/hardware/46g.png for a graph of the
performance of an ATA disk on its own, two ATA disks running on separate
buses, and two disks on the same bus.  From that graph I conclude that most
of the performance hit of running two such drives comes from the motherboard
bus performance, not from sharing an IDE cable.  That graph was done with an
old kernel (about 2.4.1); I'll have to re-do it with the latest kernel.

Anyway, motherboards with 4 IDE buses on board are common now, and most
servers don't have more than 4 drives.

I think we are talking about different ends of the spectrum. You are talking about low-end systems with 4 drives; I'm talking about larger systems with 5 or more drives, for example a 2-drive mirror array for the OS plus a 3-drive RAID 5 array for data, or even a 0+1 array with 4 or 6 drives.
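For a rough sense of the usable space in those layouts, here is a minimal sketch using the standard RAID capacity formulas (the 18GB drive size is just a placeholder):

  # Usable capacity of the layouts above, per the usual RAID math.
  DRIVE_GB = 18  # placeholder drive size

  def usable(level, n, size=DRIVE_GB):
      if level == "raid1":    # mirror: half the raw space
          return n // 2 * size
      if level == "raid5":    # one drive's worth goes to parity
          return (n - 1) * size
      if level == "raid01":   # mirrored stripes: half the raw space
          return n // 2 * size
      raise ValueError(level)

  print(usable("raid1", 2))   # 18 GB for the OS mirror
  print(usable("raid5", 3))   # 36 GB for the data array
  print(usable("raid01", 6))  # 54 GB for a 6-drive 0+1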


> Swappable (This is HUGE) and more cache on the drive.

NO!  SCSI hard drives are no more swappable than ATA drives!  If you unplug
an active SCSI bus, you run the same risks of hardware damage as you do with
ATA!
Hardware support for hot-swap is more commonly available for SCSI drives than
for ATA, but it is very pricey.

Actually, hot-swap backplanes are not that much more expensive if you plan for them. If you are talking about a $20,000 server, the hot-swap backplane only adds about $300 to that, and SCA drives cost about the same as standard drives.



