
Bug#477356: mdcfg: errors if a pre-existing RAID device is inactive

Package: mdcfg
Version: 1.24

During an installation test I was working in a virtual machine with two 
disks. The second hard disk still had two partitions previously used for 
RAID tests, while the first hard disk had been reused in the meantime.

On the second hard disk one of the partitions was part of a RAID1 array, the 
other part of a RAID0 array. The first was correctly started (degraded), but 
the second obviously failed to start.

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : inactive hdd2[1]
      xxxxx blocks

md0 : active raid1 hdd1[1]
      yyyyy blocks [2/1] [_U]

unused devices: <none>

This resulted in a broken dialog when choosing the option to delete a RAID 
device:
     Multidisk device to be deleted:
              md1
              :
              inactive
              hdd2[1]
              md0_raid1
              Cancel

The first 4 entries are clearly due to incorrect parsing of /proc/mdstat: 
they are simply the whitespace-separated fields of the md1 line.
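
For illustration only (I have not checked the actual mdcfg code, so this is 
just a guess at the cause): a parser that expects every array line in 
/proc/mdstat to have the form "mdN : active <personality> <members>" will 
mis-handle an inactive array, because the personality field is missing 
there. A rough sh sketch that copes with both cases:

  # Sketch only: print name, state and member devices for each array,
  # without assuming the personality field is present.
  grep '^md' /proc/mdstat | while read name colon state rest; do
      # $colon just consumes the ":" field
      case "$state" in
          inactive) members="$rest" ;;       # e.g. "md1 : inactive hdd2[1]"
          *)        members="${rest#* }" ;;  # drop the personality (raid1, ...)
      esac
      echo "$name ($state): $members"
  done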

After selecting the first entry (md1), the next confirmation dialog looked 
OK, but the actual deletion only partially succeeded and resulted in an 
error dialog. From the syslog:
mdcfg: Removing /dev/md/1 (Started /dev/hdd2)
kernel: md: md1 stopped.
kernel: md: unbind<hdd2>
kernel: md: export_rdev(hdd2)
mdcfg: mdadm: stopped /dev/md/1
mdcfg: mdadm: Couldn't open Started for write - not zeroing

Again this looks like a parsing error: the word "Started" from the 
description "Started /dev/hdd2" is apparently passed to mdadm as if it were 
a device.
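
Presumably the member devices for mdadm --zero-superblock are taken from the 
same description string that is shown in the dialog ("Started /dev/hdd2"). 
A rough sketch of a safer approach, assuming the member list is read from 
/proc/mdstat before the array is stopped (the names md1 and hdd2 are just 
the ones from this report, not anything mdcfg-specific):

  # Sketch only, not the actual mdcfg implementation.
  md=md1
  members=$(grep "^$md : " /proc/mdstat | \
            sed -e "s/^$md : [a-z]* //" \
                -e 's/^raid[0-9]* //' \
                -e 's/\[[0-9]*\]//g')
  mdadm --stop "/dev/$md"
  for dev in $members; do
      mdadm --zero-superblock "/dev/$dev"
  done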
