
RAID 1 problem after removing disk



Hi,

I have an odd problem with my RAID 1 setup (/dev/md2) on Debian (sid). It used to be a two-disk configuration:
/dev/sdc1, /dev/sdd1
Recently sdc started reporting SMART errors, so I decided to replace it with a new drive (/dev/sde1). Perhaps foolishly, I used gnome-disk-utility (palimpsest) to do that: I selected the RAID array, added the new drive, and it synchronised. While it was synchronising, the sdc drive failed. After syncing finished, the RAID status showed two drives fully synchronised (sdd1, sde1), one failed (sdc1), and the array marked as degraded. I removed the failed disk from the array.
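
For what it's worth, I suspect what palimpsest actually did was grow the array to three raid devices and then rebuild onto the new drive, rather than swapping sde1 in for sdc1. I assume the command-line equivalent would have been something like this (only a guess, since I used the GUI):
mdadm /dev/md2 --add /dev/sde1
mdadm --grow /dev/md2 --raid-devices=3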

The next day, after I started my PC, I was surprised: the RAID did not start and was still marked as degraded. I did a few checks and it seems that mdadm 'thinks' there should be 3 drives... I can run mdadm --assemble and it starts the array (but still degraded). How can I get rid of the removed drive? I tried
mdadm /dev/md2 --remove failed
mdadm /dev/md2 --remove detached
but they do nothing.
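
From the mdadm --detail output below it looks like the array still expects three member slots, so I suspect it needs to be shrunk back to two raid devices rather than having anything else removed. Would something like this be the right approach? (I haven't tried it yet, so this is only a guess on my part.)
mdadm --grow /dev/md2 --raid-devices=2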

mdadm --zero-superblock /dev/sdc1
says it could not open /dev/sdc1 for writing (which makes sense, as the disk has died).

===================

Result from
mdadm --detail /dev/md2:

       Version : 0.90
  Creation Time : Wed Jun 17 21:11:25 2009
     Raid Level : raid1
     Array Size : 966799616 (922.01 GiB 990.00 GB)
  Used Dev Size : 966799616 (922.01 GiB 990.00 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Aug 14 22:23:03 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 40b55130:8f1de1e6:9d4deba6:47ca997f
         Events : 0.50097

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1

===================

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md2 : active (auto-read-only) raid1 sdd1[1] sde1[2]
      966799616 blocks [3/2] [_UU]

md1 : active raid1 sda2[0] sdb2[1]
      966799616 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      497856 blocks [2/2] [UU]

unused devices: <none>

===================

from /etc/mdadm/mdadm.conf:

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=54f1d14e:91ed3696:c3213124:8831be97
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=5d97a1e5:26d9d2ed:2a031ed3:45563b24
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=40b55130:8f1de1e6:9d4deba6:47ca997f
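
I notice mdadm.conf still says num-devices=2 for md2 while the superblock reports 3, so I assume it is the array metadata rather than the config file that needs fixing. If the ARRAY lines need regenerating afterwards, I believe the output of the following could be merged into mdadm.conf, followed by rebuilding the initramfs with update-initramfs -u (again, untested on my side):
mdadm --detail --scan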


How can I get rid of the removed drive from the RAID array and get it back to a clean two-disk setup? I'd be grateful for any suggestions.

Kind regards,
Michal
--
Michal R. Hoffmann

