RAID (mdadm): /dev device names shifted after removing a disk
Hi, I have a problem with unexpected RAID behavior. On my machine I have
configured two RAID 5 arrays (software RAID, mdadm) over 5 disks plus 1 spare disk.
md0: sda1 sdb1 sdc1 sdd1 sde1 [UUUUU] (spare: sdf1)
md1: sda2 sdb2 sdc2 sdd2 sde2 [UUUUU] (spare: sdf2)
Now I have marked sdb2 as failed on md1:
~$ mdadm --fail /dev/md1 /dev/sdb2
md1: sda2 sdc2 sdd2 sde2 [U_UUU] (spare: sdf2)
and hot-added sdf2 to the same array. The array has been rebuilt using sdf2:
~$ mdadm --add /dev/md1 /dev/sdf2
md1: sda2 sdc2 sdd2 sde2 sdf2 [UUUUU] (spare: none)
Ok, it works well.
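As an aside, for scripting daily checks of these arrays, here is a minimal shell sketch that counts failed members in an mdstat line (it assumes the [UUUUU]/[U_UUU] status format shown above; the sample lines are hard-coded so it runs standalone):

```shell
#!/bin/sh
# Count failed members of an md array, i.e. "_" characters inside the
# [UUUUU] status field. Sample lines are hard-coded here; on a real
# system you would feed it lines from /proc/mdstat instead.
count_failed() {
    printf '%s' "$1" | grep -o '\[[U_]*\]' | tr -cd '_' | wc -c
}

healthy='md1 : active raid5 sdf2[5] sda2[0] sdc2[2] sdd2[3] sde2[4] [5/5] [UUUUU]'
degraded='md1 : active raid5 sda2[0] sdc2[2] sdd2[3] sde2[4] [5/4] [U_UUU]'

echo "healthy:  $(count_failed "$healthy") failed member(s)"
echo "degraded: $(count_failed "$degraded") failed member(s)"
```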
Now, to emulate a disaster scenario, I halted the machine and physically removed
/dev/sdb. The system booted fine, but the /dev names have been shifted by one
position; in other words:
sda -now-is-> sda
sdb -now-is-> sdc
sdc -now-is-> sdd
sdd -now-is-> sde
sde -now-is-> sdf
while the name sdf is no longer assigned by the system at all. Why does the
system reallocate /dev names this way? It's a disaster for daily maintenance.
Now my /proc/mdstat says:
md0: sda1 sdb1 sdc1 sdd1 sde1 [UUUUU] (spare: none)
md1: sda2 sdb2 sdc2 sdd2 sde2 [UUUUU] (spare: none)
/dev/sdf does not exist, but the disk is physically in my machine.
/dev/sdb shows up in the RAID arrays, but that disk is physically on my desk!
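If it helps to see the pattern, here is a toy shell sketch of my assumption about what happened: the kernel hands out sdX letters strictly in probe order to whatever disks are present, so with sdb gone every later disk's name moves down one letter (the old/new labeling is mine, for illustration only):

```shell
#!/bin/sh
# Toy illustration (assumption, not mdadm behavior): sdX letters are
# assigned in probe order, so with sdb removed every later disk moves
# down one letter.
letters="abcdefghijklmnopqrstuvwxyz"
map=""
i=0
for old in a c d e f; do    # disks still present, by their OLD letter
    i=$((i + 1))
    new=$(printf '%s' "$letters" | cut -c "$i")
    echo "old sd$old is now sd$new"
    map="$map sd$old=sd$new"
done
```

This matches what I observe: the disk that used to be sdf now answers as sde, and the name sdf is gone.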
Please help me understand mdadm's logic.
Openclose.it - Ideas for free software