On 30/11/2013 06:39, Stan Hoeppner wrote:
> On 11/29/2013 4:43 PM, François Patte wrote:
>> Good evening,
>>
>> I have a problem with two RAID1 arrays built on two disks (sdc and
>> sdd).
>>
>> One disk (sdc) failed and I replaced it with a new one, copying the
>> partition table from sdd using sfdisk:
>>
>> sfdisk -d /dev/sdd | sfdisk /dev/sdc
>>
>> then I "added" the two partitions (sdc1 and sdc3) to the arrays md0
>> and md1:
>>
>> mdadm --add /dev/md0 /dev/sdc1
>>
>> mdadm --add /dev/md1 /dev/sdc3
>>
>> There was no problem with the md0 array:
>>
>>
>> cat /proc/mdstat gives:
>>
>> md0 : active raid1 sdc1[1] sdd1[0]
>> 1052160 blocks [2/2] [UU]
>>
>>
>> But for the md1 array, I get:
>>
>> md1 : active raid1 sdc3[2](S) sdd3[0]
>> 483138688 blocks [2/1] [U_]
>>
>>
>> And mdadm --detail /dev/md1 returns:
>>
>> /dev/md1:
>> Version : 0.90
>> Creation Time : Sat Mar 7 11:48:30 2009
>> Raid Level : raid1
>> Array Size : 483138688 (460.76 GiB 494.73 GB)
>> Used Dev Size : 483138688 (460.76 GiB 494.73 GB)
>> Raid Devices : 2
>> Total Devices : 2
>> Preferred Minor : 1
>> Persistence : Superblock is persistent
>>
>> Update Time : Fri Nov 29 21:23:25 2013
>> State : clean, degraded
>> Active Devices : 1
>> Working Devices : 2
>> Failed Devices : 0
>> Spare Devices : 1
>>
>> UUID : 2e8294de:9b0d8d96:680a5413:2aac5c13
>> Events : 0.72076
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8       51        0      active sync   /dev/sdd3
>>        2       0        0        2      removed
>>
>>        2       8       35        -      spare   /dev/sdc3
>>
>> While mdadm --examine /dev/sdc3 returns:
>>
>> /dev/sdc3:
>> Magic : a92b4efc
>> Version : 0.90.00
>> UUID : 2e8294de:9b0d8d96:680a5413:2aac5c13
>> Creation Time : Sat Mar 7 11:48:30 2009
>> Raid Level : raid1
>> Used Dev Size : 483138688 (460.76 GiB 494.73 GB)
>>
>>
>> Array Size : 483138688 (460.76 GiB 494.73 GB)
>> Raid Devices : 2
>> Total Devices : 2
>> Preferred Minor : 1
>>
>> Update Time : Fri Nov 29 23:03:41 2013
>> State : clean
>> Active Devices : 1
>> Working Devices : 2
>> Failed Devices : 1
>> Spare Devices : 1
>> Checksum : be8bd27f - correct
>> Events : 72078
>>
>>
>>       Number   Major   Minor   RaidDevice State
>> this     2       8       35        2      spare   /dev/sdc3
>>
>>    0     0       8       51        0      active sync   /dev/sdd3
>>    1     1       0        0        1      faulty removed
>>    2     2       8       35        2      spare   /dev/sdc3
>>
>>
>> What is the problem? And how can I recover a correct md1 array?
>
> IIRC Linux md rebuilds multiple degraded arrays sequentially, not in
> parallel. This is due to system performance impact and other reasons.
> When the rebuild of md0 is finished, the rebuild of md1/sdc3 should
> start automatically. If this did not occur please let us know and we'll
> go from there.
I thought it was clear enough that the outputs of mdadm --detail and
mdadm --examine quoted above are what those commands return *after* the
rebuild of array md1.
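
(For anyone who wants to double-check, the md sysfs state shows whether
a resync is still running; a quick sketch, assuming the array name md1
as above:

  cat /sys/block/md1/md/sync_action   # "idle" when no rebuild is running

The progress line in /proc/mdstat also disappears once recovery is done.)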
On reboot, I am warned that md1 is started with one disk out of two plus
one spare, and recovery starts immediately:
md1 : active raid1 sdd3[0] sdc3[2]
      483138688 blocks [2/1] [U_]
      [=>...................]  recovery =  7.5% (36521408/483138688)
      finish=89.1min speed=83445K/sec
After that, the situation is as quoted in my previous message...
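
Unless someone sees something better, one path I may try next; a sketch,
assuming the 0.90 superblock on /dev/sdc3 is merely stale and safe to
clear (the partition holds nothing worth keeping, since it was to be
rebuilt anyway):

  mdadm --remove /dev/md1 /dev/sdc3     # drop the stuck spare from md1
  mdadm --zero-superblock /dev/sdc3     # wipe the stale superblock
  mdadm --add /dev/md1 /dev/sdc3        # add it back and let recovery rerun

But I would first like to understand why a completed rebuild left sdc3
marked as a spare.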
Regards.
--
François Patte
UFR de mathématiques et informatique
Laboratoire CNRS MAP5, UMR 8145
Université Paris Descartes
45, rue des Saints Pères
F-75270 Paris Cedex 06
Tél. +33 (0)1 8394 5849
http://www.math-info.univ-paris5.fr/~patte