
Hot swapping failed disk /dev/sda in RAID 1 array



In my RAID 1 array /dev/md0, which consists of the partitions
/dev/sda1 and /dev/sdb1 on two SATA drives, the first drive /dev/sda
has failed.  I called mdadm --fail and mdadm --remove on its member
partition, then pulled the cables and removed the drive.  The RAID
array continues to work fine, but in degraded mode.
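
Concretely, what I ran was something like the following (the exact
arguments are a sketch; I passed the member partition, not the whole
disk):

        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1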

I have some questions:

1. The block device nodes /dev/sda and /dev/sda1 still exist and the
   partitions are still listed in /proc/partitions.

   That causes I/O errors when running LVM tools, fdisk -l, or any
   other tool that tries to access/scan all block devices.

   Shouldn't the device nodes and entries in /proc/partitions
   disappear when the drive is pulled?  Or does the BIOS or the SATA
   controller have to support this?
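
   Or would manually telling the kernel to drop the stale device via
   sysfs be an acceptable workaround?  Something like

        echo 1 > /sys/block/sda/device/delete

   assuming that delete attribute is available for SATA disks on my
   kernel.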

2. Can I hotplug the new drive and rebuild the RAID array?  Since the
   removal of the old drive does not seem to have been detected, I
   wonder whether the new drive will be detected correctly.  Will the
   kernel keep using the old drive's size and partition table, which
   are still listed in /proc/partitions?  Would a call to

        blockdev --rereadpt /dev/sda

   help?
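
   If that is not enough, I assume I would have to make the controller
   rescan the bus first, e.g. (host0 is just a guess for which host my
   controller is):

        echo "- - -" > /sys/class/scsi_host/host0/scan

   and then copy the partition table from the surviving drive and
   re-add the mirror, something like:

        sfdisk -d /dev/sdb | sfdisk /dev/sda
        mdadm /dev/md0 --add /dev/sda1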

3. Alternatively, I could reboot the system.  I have called

        grub-install /dev/sdb

   and hope this suffices to make the system bootable again.
   Would that be safer?
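
In either case, once the new drive is partitioned and re-added, I
assume I can simply watch the resync progress with:

        cat /proc/mdstat
        mdadm --detail /dev/md0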

Any other suggestions?


urs

