
Re: Replacing failed drive in software RAID

On 11/1/2013 12:23 PM, Pascal Hambourg wrote:
> Stan Hoeppner wrote:
>> On 11/1/2013 9:19 AM, Pascal Hambourg wrote:
>>> Stan Hoeppner wrote:
>>>> This is precisely why I use hardware RAID HBAs for boot disks (and most
>>>> often for data disks as well).  The HBA's BIOS makes booting transparent
>>>> after drive failure.  In addition you only have one array (hardware)
>>>> instead of 3 (mdraid).
>>>
>>> MD RAID arrays can be partitioned, or contain multiple LVM logical
>>> volumes. So you don't have to create multiple arrays, unless they are of
>>> different types (e.g. RAID 1 and RAID 10 as in this thread).
>>
>> Yes, I'm well aware of md's capabilities.  I was speaking directly to
>> the OP's situation.
> 
> So was I. In the OP's situation, there are arrays of different types (1
> and 10), so you cannot have one array even with hardware RAID.
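
(Illustrating Pascal's point above for the archives: a partitioned md
array, or LVM on top of one, looks roughly like this.  Sketch only,
assuming an existing array /dev/md0; the device and volume group names
are made up.)

  # Partition the md device directly:
  parted /dev/md0 mklabel gpt
  parted /dev/md0 mkpart primary ext4 1MiB 100%

  # ...or layer LVM on top and carve out multiple logical volumes:
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 20G -n root vg0
  lvcreate -L 100G -n home vg0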

Of course he can, and a single array is preferable.  The only reason the
OP has a separate RAID1 is that boot disk failover is much simpler to
implement with md/RAID1 than with md/RAID10.  And in fact that is why
pretty much everyone who uses only md/RAID has at least two md arrays on
their disks:  a RAID1 set for the MBR (written to each disk), /boot, and
the root filesystem, and a separate RAID5/6/10/etc. array for data.

With hardware-based RAID there are no such limitations.  You can create
one array and throw everything on it.  No manually writing an MBR to
multiple disks, none of md's PITA requirements.  Zero downside, lots o'
upside.

-- 
Stan

