
Re: Install and RAID



                                                                                                                    
From: Nathan E Norman <nnorman@micromuse.com>
To: debian-devel@lists.debian.org
cc: (bcc: Vince Mulhollon/Brookfield/Norlight)
Subject: Re: Install and RAID
Date: 01/26/2001 11:09 AM
On Fri, Jan 26, 2001 at 10:59:38AM -0600, Vince Mulhollon wrote:
>> Software RAID:
>> Controllers are available everywhere.  If you have the good sense to use
>> IDE, you can "borrow" a controller card from any workstation.  Also "all"
>> IDE controllers are compatible with each other.  Sure there are
>> enhancements such that some are faster or whatever, but all of them will at
>> least work together.
>
>Yeah right, I'm going to build a high performance server on IDE
><snicker>

Well, the concept is the same.  If you use "workstation" hardware in the
server, then you've got dozens of "spares" only a few feet away.  Some
people use SCSI-based workstations.  The point is you need spares, and
commodity spares are cheap and universally available, whereas RAID spares
are not cheap and not as available.

Simply buying a HW RAID card moves your single point of failure from an
easily replaceable, standard commodity hard drive to a custom, proprietary,
hard-to-replace controller card.  To each their own, but I prefer being able
to replace a burned-out drive Sunday night at Best Buy over getting slightly
better performance.

Regarding the performance issue, it doesn't matter.  Hardware keeps getting
so much faster that any "normal user" will never notice if you use hardware
that is half a year out of date, other than it being cheaper and more
reliable than the cutting edge stuff.  Maybe your drinking buddies will
make fun of you if your hard drive seek time is 2 milliseconds slower than
their expensive new drives, but in the long run it won't matter anyway
because in six months you'll be able to buy something twice as fast as
either drive for $100 at Radio Shack...

>> Hardware RAID:
>> Controllers made by small companies, not stocked in your state.  If the
>> controller blows you're probably down until the post office delivers.
>> Even better, I've heard stories of incompatible controllers.  So if a ABC
>> brand controller fries, and you install a XYZ brand hardware RAID
>> controller, you get to repartition, restore your backups, and start over.

>IBM is a really small company, and really hard to buy stuff from.

OK there are big companies now selling RAID solutions, I admit my error.

However, it IS really hard to buy an IBM RAID controller compared to either
borrowing a nearby workstation's controller or going to one of the hundreds
of local retail establishments and buying a plain controller, or just firing
up the backup server.

>> If you have the cash to keep spare HW RAID controllers onsite, then you've
>> probably got the cash to setup duplicate servers.  If you have duplicate
>> servers, you don't need RAID because you already have overall system layer
>> redundancy, so you don't need RAID.  A solution in search of a problem.

>This does not follow.  If I've got the money to keep a $1000 raid card
>as a spare I've got the money to keep a $5000 server on the network
>(which I may or may not own; more expenses), and I've got to keep the
>data synced?  Yi.

>I'd be more convinced if you'd talked about using _two_ hardware raid
>controllers, and running software raid 1 over each array ...

The point I'm making is that the true cost of hardware RAID is the lack of
spares, and that cost may not be balanced out by what you gain once the
controller itself becomes your single point of failure.

I agree that your example of two hardware RAID controllers with software RAID
on top is better than nothing.  I just don't think it's worth the money.
Using your example of $5K servers and $1K RAID controllers, a hybrid HW/SW
RAID box costs $7K, whereas a pair of duplicate non-RAID servers costs only
$3K more.  You get a lot more redundancy for that extra $3K with two servers.
It's not worth 20% of the cost of the server to gain maybe 0.1% higher
reliability, when for only 100% of the cost of the server you'd go to "100%"
reliability.
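
To make the arithmetic concrete, here's a minimal sketch of that comparison;
the dollar figures are the ones from this thread, nothing else is assumed:

# Cost comparison using the figures quoted in this thread.
server_cost = 5000        # one plain non-RAID server
raid_card_cost = 1000     # one hardware RAID controller

hybrid = server_cost + 2 * raid_card_cost   # two HW cards, SW RAID 1 on top
duplicates = 2 * server_cost                # two whole servers, no RAID

print("hybrid HW/SW RAID box: $%d" % hybrid)      # $7000
print("two plain servers:     $%d" % duplicates)  # $10000, i.e. only $3000 more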

If the end user is too cheap to spring for "100%" reliability, they will
probably be too cheap to spring for the RAID controller anyway.

Sure, RAID used to be a good solution for $50K proprietary UNIX boxes, where
$1K for a RAID controller is a good rate of return for the "reward".  But on
any machine $5K and down, it just doesn't pay off, especially if it's a
business-critical server or you have to pay for repair tech time.  The way
the price/performance curve is going for hardware, there's not as much
application for $50K servers.  Sure, there will always be "some" $1M IBM
mainframes, but I'm talking about the majority of servers.

RAID also used to be a good idea in the "bad old days" when HDs were less
reliable.  The hardware just doesn't seem to crash as much as it did ten
years ago.

Finally, almost all end users are vastly happier running slow than not
running at all.  That's the coolest part of the scalability of Linux.  Sure,
our $10K mailserver is nice and fast, but if it croaks, the users will be OK
for a few hours with a backup server that is a lot slower.  If you keep the
old server hardware from the last upgrade, then the backup server is
essentially free, except for the addition of hard drive space, and the cost
of drives is imploding, so that's not much.

Our mail filter/gateway, which was a pretty high-end and expensive Pentium,
had a controller failure, and I came to work in the middle of the night and
"hot swapped" it with an old 386 for a few days.  The users were happy it
was up.  That's all that matters.

If you have $1K to spend to improve reliability, I'm sure that buying a used,
slow $1K backup server will result in a departmental system with far less
overall downtime than buying a $1K RAID card.  Adding another hard-to-replace
single point of failure (the HW RAID card) is not going to help as much as
adding a complete hot-swappable backup system.  The ultimate is when you
spring for those $100 IDE hard drive cartridge systems: when the primary
server's CPU or power supply or CPU fan or IDE controller or network card
fails, you just shut it down, yank the drive out of the primary, stick it in
the secondary, power up, and in about 90 seconds you're back on the air.

But still keep using backup scripts to continuously back up the primary to
the secondary, just in case the primary HD fries.  A good solution is daily
tape backups combined with incremental 10-minute backups from the primary to
the secondary.  That works well for "non-critical" servers, but for the
important stuff you do need to keep both in sync at all times.  One method
for that, at least for mostly static data servers, is to use a third machine
as the "parent" where you make all changes and then the parent floods the
changes to the production children.  Anyway, there's a huge number of other
ways to "parallel process" a server system.
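
For what it's worth, here's a minimal sketch of the kind of incremental
primary-to-secondary script I mean.  The paths and the 10-minute interval are
made-up placeholders, and a real script would also handle deletions and
errors:

#!/usr/bin/env python
# Sketch of an incremental primary-to-secondary sync; SRC, DST and the
# interval are hypothetical placeholders, not anything from a real setup.
import os
import shutil
import time

SRC = "/srv/primary/data"     # copy of the primary's data (placeholder path)
DST = "/srv/secondary/data"   # standby copy on the backup box (placeholder)
INTERVAL = 10 * 60            # one incremental pass every 10 minutes

def sync_once(src, dst):
    """Copy any file that is missing on the secondary or newer on the primary."""
    for dirpath, dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = os.path.join(dst, rel)
        if not os.path.isdir(target_dir):
            os.makedirs(target_dir)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 keeps mtimes, so the check stays cheap

if __name__ == "__main__":
    while True:
        sync_once(SRC, DST)
        time.sleep(INTERVAL)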

>> Now there might be other reasons for RAID, but hardware RAID is a
>> reliability loss, not a gain.

>I'm still not seeing how you arrive at this conclusion.  I suppose
>we'll have to agree to disagree.

Yes, I think we have different goals.  My goal is speed and ease of repair
above all else, and it's hard to top commodity IDE controllers and
hot-swappable backup servers.  My goal is not to rely on the post office as
part of the business plan.

My conclusion comes from the possibility that hardware RAID5 is a good idea
if you need perhaps 20 times the storage of the largest commodity-grade hard
drive.  It's "hard" to get 5 IDE controllers installed in one machine, so
hardware RAID would be the only way to go.  Of course, in my opinion the
"proper" solution to that situation would be to split the 20 hard drives
amongst "many" servers, so one bad CPU fan or AC power cord can't kill the
whole works, but whatever.
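
For reference, the arithmetic behind the 5 controllers, assuming plain IDE
cards with two channels and two devices per channel and nothing else sharing
them:

drives = 20
drives_per_card = 2 * 2                  # 2 channels x 2 devices per channel
cards = -(-drives // drives_per_card)    # ceiling division -> 5
print(cards)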

>> The only attempts to explain why HW RAID is better revolve around nonsense
>> like "its not important unless you spend extra money" or something.

>Uh huh ... you run a lot of raid 5?

No.  Just lots of backup servers.

I should clarify that the "nonsense" I was complaining about was the type of
claim where the entire post is "HW RAID is always the only reasonable choice"
etc.  The difference between that kind of advocacy of HW solutions and my
advocacy of SW solutions is that I've backed up my claims with (some) numbers
and plenty of good examples and explained the reasoning behind them.  There
are times when hardware RAID doesn't make sense, and I think that's most of
the time.

I'm open-minded that there may be good applications for a hardware RAID
controller, but not many of those situations exist.  Software RAID doesn't
have many more applications, although I'm convinced there are more than for HW.

--
Nathan Norman - Staff Engineer | A good plan today is better
Micromuse Inc.                 | than a perfect plan tomorrow.
mailto:nnorman@micromuse.com   |   -- Patton


