
mdadm error - superfluous RAID member



Hi,

I'm trying to re-use an older server, installing squeeze (6.0.5). I'm using software RAID and LVM on the machine (details below). But I must be doing something wrong at the disk setup stage of the installer, as when it boots I see an error flash up briefly:

 error: superfluous RAID member (5 found)

It appears that the initramfs then gets loaded, RAID detection fails, and the boot then looks for the LVM volume group, which it can't find (as the volume group lives on the RAID device). I see this output:

 Loading, please wait...
 mdadm: No devices listed in conf file were found.
  Volume group "vgbiff" not found
  Skipping volume group vgbiff
  Unable to find LVM volume vgbiff/lvroot
  <same messages appear but for lvswap>
 Gave up waiting for root device <snip>
...

It then drops me into the BusyBox shell, at an (initramfs) prompt.

I can then assemble the RAID simply by running:

 (initramfs) mdadm --assemble --scan
 mdadm: /dev/md/0 has been started with 5 drives and 1 spare.

and then activate the volume group, using:

  (initramfs) vgchange -a y
  2 logical volume(s) in volume group "vgbiff" now active

Exiting the BusyBox shell then boots the system normally.
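
For reference, everything can be sanity-checked from that same shell before exiting - /proc/mdstat for the array, and the array's UUID against the copy of mdadm.conf baked into the initramfs:

 (initramfs) cat /proc/mdstat                  # md0 should show raid5, 5 active drives + 1 spare
 (initramfs) mdadm --detail /dev/md0 | grep -i uuid
 (initramfs) grep ARRAY /etc/mdadm/mdadm.conf  # the copy inside the initramfs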

The basic configuration is:
- Xeon (64-bit capable) w/4GB RAM
- PCI SCSI controller
- 6 x 73GB SCSI drives

During install, on each drive I created a 500MB primary partition (with /dev/sda1 being used for /boot) and then a second partition for Linux software RAID (partition type set to fd).

On /dev/md0 I then created an LVM physical volume, and set up the volume group to contain two logical volumes - one for swap, and one for /. /dev/md0 comprises 5 drives running in RAID5, plus one hot spare.
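
For clarity, the end result should be the moral equivalent of this sketch (I used the installer's partitioner rather than these exact commands, and the LV sizes here are only illustrative):

 # RAID5 across five of the second partitions, with the sixth as a hot spare
 mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 \
     /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
 # LVM stacked on top of the array
 pvcreate /dev/md0
 vgcreate vgbiff /dev/md0
 lvcreate -n lvroot -L 250G vgbiff
 lvcreate -n lvswap -L 8G vgbiff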

During installation, I took pains to wipe all the drives and create all partitions anew.

Once booted, I checked /etc/default/mdadm: INITRDSTART='all' and AUTOSTART=true are both set. I also set VERBOSE=true to get more output when creating a new initramfs. I checked the contents of /etc/mdadm/mdadm.conf, which look fine.
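
If it were stale, my understanding is that it could be regenerated from the running arrays with Debian's helper script - something like the following, where the UUID shown is only a placeholder:

 /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
 # expected to yield an ARRAY line along these lines:
 # ARRAY /dev/md0 metadata=1.2 spares=1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

followed by "update-initramfs -u" to bake it back into the initial ramdisk.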

I then issued "update-initramfs -vu", and saw the following:

 I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
 I: mdadm: will start all available MD arrays from the initial ramdisk.
 I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.

and the last output before cpio builds the initial ramdisk is

 Calling hook dmsetup

- so, to my limited knowledge, this suggests the device mapper is incorporated into the initramfs as well.
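
To double-check that the mdadm and LVM bits actually landed in the image, the initramfs contents can be listed directly (the path assumes the running kernel):

 # list the image contents; mdadm, its conf file and the lvm tools should all appear
 zcat /boot/initrd.img-$(uname -r) | cpio -t | grep -E 'mdadm|lvm'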

When I take a peek into /boot/grub/grub.cfg I see:

 insmod raid
 insmod raid5rec
 insmod mdraid
 insmod lvm

in the 00_header section.


I'm running low on ideas now. Reinstalling GRUB doesn't help, and running "update-grub" simply dumps out many more of those error messages:

 error: superfluous RAID member (5 found).
 <repeats 17 times>

So this does point to GRUB being at fault somewhere, rather than the initrd.
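
Since update-grub is just a wrapper around grub-mkconfig, which in turn calls grub-probe, the same error should be reproducible in isolation with something like:

 grub-probe --target=device /       # the device grub resolves / to
 grub-probe --target=abstraction /  # the abstraction modules (raid, lvm) grub thinks it needs

- which might at least narrow down whether it's grub-probe that is miscounting the RAID members.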

Have I missed something blindingly obvious?


Thanks again,
Steve

--
Steve Dowe

Warp Universal Limited
http://warp2.me/sd

