
Re: Question about HotPlug and cciss hp storage



On 2/23/2012 10:16 AM, Julien Groselle wrote:

> Now I'm sure that is a must-have.
> For the past four years, up until last year, we only had hardware RAID,
> so we never had to do anything with the HDDs...
> Now with md RAID we do! :)

RAID 0 arrays are not fault tolerant, so there is nothing the controller
can do when a single drive configured as such fails.  RAID 1 mirrors,
however, are fault tolerant.

Thus, the proper way to accomplish what you're attempting with
proprietary RAID cards is to use hybrid nested hardware/mdraid arrays.
 For example, if you want a straight mdraid 10 array but you still want
the RAID card to handle drive fail/swap/rebuild automatically as it did
in the past, you would create multiple RAID 1 mirrors in the controller
and set the rebuild policies as you normally would.  Then you create an
mdraid 0 stripe over the virtual drives exported by the controller,
giving you a hybrid soft/hardware RAID 10.
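
Concretely, assuming the controller exports two RAID 1 logical drives
as /dev/cciss/c0d1 and /dev/cciss/c0d2 (placeholder names--with the
hpsa driver they'd show up as /dev/sdX instead), the md side is simply:

  # Stripe (RAID 0) across the two hardware mirrors -> hybrid RAID 10
  mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 \
      /dev/cciss/c0d1 /dev/cciss/c0d2

  # Record the array so it gets assembled at boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf

The chunk size is just an example--tune it for your workload.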

You likely won't see much performance gain with this setup vs. using a
single RAID card with hardware RAID 10.  The advantage of this setup
really kicks in when you create the mdraid 0 stripe across many RAID 1
mirrors residing on 2 or more hardware RAID controllers.  The 3 main
benefits of this are:

1.  Striping can occur across many more spindles than can be achieved
    with a single RAID card
2.  You keep the hardware write cache benefit
3.  Drive failure/replace/rebuild is handled transparently
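
One caveat with this arrangement: because the controller hides drive
failures from md, /proc/mdstat will report the array as clean even
while one of the mirrors is running degraded, so you still need to
watch the controller itself.  Roughly (the slot number is just an
example):

  # md only sees the logical drives, so a dead disk inside a hardware
  # mirror will NOT show up here:
  cat /proc/mdstat

  # Check the physical drives on the Smart Array instead:
  hpacucli ctrl all show config
  hpacucli ctrl slot=0 pd all show status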

Obviously it's not feasible to do parity RAID schemes in such a hybrid
setup.  If your primary goal in switching to mdraid was to increase the
performance of RAID6, then you simply can't do it with a single RAID
card *and* still have automatic drive failure management.  As they say,
there's no such thing as a free lunch.

If RAID6 performance is what you're after, and you want mdraid to be
able to handle the drive failure/replacement automatically without the
HBA getting in the way, then you will need to switch to non-RAID HBAs
that present drives in JBOD/standalone fashion to Linux.  LSI makes many
cards suitable for this task, and Adaptec has a few as well.  They are
relatively inexpensive ($200-300 USD), and models with both internal
SFF-8087 and external SFF-8088 ports are available.  Give me the specs
on your Proliant, how many drives you're connecting, internal/external,
and I'll shoot you a list of SAS/SATA HBAs that will work the way you
want.
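
With the drives presented individually, md RAID6 plus manual hot-swap
handling looks roughly like this (device names and drive count are
just examples):

  # 8 JBOD drives behind a plain SAS HBA, assumed to be /dev/sdb../dev/sdi
  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

  # md degrades the array on its own when a drive dies; the replacement
  # is handled by hand (or from a hotplug script):
  mdadm --manage /dev/md0 --fail /dev/sdd    # if md hasn't already failed it
  mdadm --manage /dev/md0 --remove /dev/sdd
  mdadm --manage /dev/md0 --add /dev/sdd     # assumes the new disk kept the name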

> But I have another problem: hpacucli doesn't work with every kernel
> version!  To spare you the details, here are my results:
> 
> 2.6.32.5-amd64 : OK
> 2.6.38-bpo.2-amd64 : NOK
> 2.6.39_bpo.2-amd64 : NOK
> 3.2.0.0.bpo.1-amd64 : NOK

This is very common with proprietary vendor software.  Vendors have so
many distros to support that they must restrict their development and
maintenance efforts to a small number of configurations and kernel
versions.  Look at RHEL kernels, for instance: the upstream version
never changes during a release lifecycle, so you end up with things
like 2.6.18-274.18.1.el5.  This is what is called a "long term stable
kernel".  Thus, when a vendor qualifies something like a RAID card
driver or management tools for RHEL 5, they don't have to worry about
their software breaking as Red Hat updates that kernel over the life of
the release with security patches and the like.  This is the main
reason why RHEL and SLES are so popular in the enterprise
space--everything 'just works' when vendor BCPs are followed.

To achieve the same level of functionality with Debian, you must stick
with the baseline Squeeze kernel, 2.6.32-5, taking security updates
only.
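
If one of the bpo kernels is currently installed and booted, getting
back to that baseline is straightforward (package names below are
examples--check dpkg -l on your box):

  # Which kernels are installed, and which one is running?
  dpkg -l 'linux-image-*' | awk '/^ii/ {print $2, $3}'
  uname -r

  # Make sure the stock Squeeze kernel is present, then remove the bpo
  # kernels, using the exact names printed above:
  apt-get install linux-image-2.6.32-5-amd64
  apt-get remove linux-image-3.2.0-0.bpo.1-amd64   # example bpo name
  update-grub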

Welcome to the world of "enterprise" hardware.

-- 
Stan

