RE: RAID & Hard disk performance
DOA should be a non-issue from a reputable supplier. I know we test all our
drives before shipping any of our machines. A few things you're forgetting
are that traditionally SCSI drives run 24x7 until they fail, while IDE
drives run for 8 hours a day, 5 days a week. Also, there are a lot of
lower-end servers out there with insufficient cooling, and hard drives are
probably the first thing that will be significantly damaged by it.
At 12:46 PM 11/6/01 -0700, you wrote:
That is kind of funny; in my experience, SCSI drives have a much higher
death rate than IDE drives, by far.
I just finished a project of installing 50+ servers, some with RAID
configurations, some without, all using SCSI drives. Five were dead upon
arrival and will need to be exchanged with the vendor. Two more died a
short time after installation. I expect more deaths, which is why critical
systems are using RAID. This mirrors my other experiences with SCSI as
well. The drives just seem to die more often -- not in huge numbers, just a
few at a time.
A few months back, on another project, we bought about 30 IBM IDE drives
for office members, moving them off of low-capacity SCSI drives. All are
okay, no deaths, no loss of data after about a year. This also mirrors my
previous experiences with IDE drives. They seem to be more rugged. Western
Digital and older Maxtor drives make up the majority of my IDE deaths.
My only explanation for this is the higher spindle speeds, the push for
performance on SCSI drives, and the lower quantities produced versus IDE.
That might go against logic, but it is what I have experienced.
# Jesse Molina lanner, Snow
# Network Engineer Maximum Charisma Studios Inc.
# firstname.lastname@example.org 1.303.432.0286
# end of sig
> -----Original Message-----
> From: Dave Watkins [mailto:email@example.com]
> Sent: Monday, November 05, 2001 11:27 PM
> To: firstname.lastname@example.org
> Subject: Re: RAID & Hard disk performance
> Not to start a holy war, but there are real reasons to use SCSI.
> The big ones are: a much larger MTBF, faster access times due to higher
> spindle speeds, better bus management (e.g. two drives can perform
> tasks at once, unlike IDE), hot-swappable drives (this is HUGE), and
> more cache on the drive.
> I'll stop now before I start that war :-)
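[Editor's aside: to put the MTBF argument above in concrete terms, here is a
minimal sketch of what a rated MTBF implies for yearly failures, using the
standard exponential-failure approximation. The 1,000,000-hour figure is a
made-up example, not a number from this thread.]

```python
# Rough sketch: convert a drive's rated MTBF into an approximate annual
# failure rate (AFR), using the exponential model AFR = 1 - exp(-t/MTBF).
import math

HOURS_PER_YEAR = 24 * 365  # 8760 hours, i.e. the 24x7 duty cycle mentioned above

def annual_failure_rate(mtbf_hours, hours_per_year=HOURS_PER_YEAR):
    """Probability that a drive fails within a year under the exponential model."""
    return 1.0 - math.exp(-hours_per_year / mtbf_hours)

if __name__ == "__main__":
    # Hypothetical example: a drive rated at 1,000,000 hours MTBF, run 24x7,
    # has a bit under a 1% chance of failing in a given year.
    print(round(annual_failure_rate(1_000_000), 4))
```

Note this model assumes a constant failure rate, which ignores infant
mortality (the DOA drives discussed above) and wear-out.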
> At 11:20 AM 11/4/01 +1100, you wrote:
> ><quote who="Russell Coker">
> > > There's a number of guides that tell you about hdparm and what DMA
> > > is, but if you already know that stuff then there's little good
> >"Oh bum." :)
> > > Then on the rare occasions that I do meet people who know this
> > > stuff reasonably well, they seem to spend all their time trying to
> > > convince me that SCSI is better than IDE (regardless of benchmark
> > > results). :(
> >Heh, there's a religious war waiting to happen.
> > > >  http://people.redhat.com/alikins/system_tuning.html
> >I've just found that iostat (in unstable's sysstat package) supports
> >extended I/O properties in /proc if you have sct's I/O monitoring
> >patches. Unfortunately, the last one on his ftp site is for
> >2.3.99-preBlah. I sent an email to lkml last night to see if there's
> >a newer patch - I'll follow up here if so.
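[Editor's aside: on modern Linux kernels the per-device counters that
iostat's extended mode reads are exposed in /proc/diskstats, which
superseded the patches discussed above. A minimal sketch of parsing that
format follows; the field layout assumed here is the modern one, and the
sample line is made up.]

```python
# Minimal sketch: parse /proc/diskstats-style lines into per-device counters.
# Assumed modern field layout: major minor name reads_completed reads_merged
# sectors_read ms_reading writes_completed writes_merged sectors_written ...

def parse_diskstats(text):
    """Return {device_name: {counter_name: value}} for each diskstats line."""
    stats = {}
    for line in text.strip().splitlines():
        fields = line.split()
        name = fields[2]
        stats[name] = {
            "reads_completed": int(fields[3]),
            "sectors_read": int(fields[5]),
            "writes_completed": int(fields[7]),
            "sectors_written": int(fields[9]),
        }
    return stats

if __name__ == "__main__":
    # Made-up sample line in the /proc/diskstats format.
    sample = "   8       0 sda 120 34 9001 500 77 12 4048 300 0 800 800"
    print(parse_diskstats(sample)["sda"])
```

On a real system you would read the text from open("/proc/diskstats")
instead of a sample string.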
> >Thanks Russell,
> >- Jeff
> > Wars end, love lasts.
To UNSUBSCRIBE, email to email@example.com
with a subject of "unsubscribe". Trouble? Contact firstname.lastname@example.org