
Re: Squeeze assembles one RAID array at boot, but not the other



Hendrik Boom wrote:
> I have two RAID arrays on my Debian squeeze system.  The old one, which 
> still works, and has worked for years, is on a pair of partitions on two 
> 750GB disks.  The new one is not recognized at boot.

Does "at boot time" mean at initrd/initramfs time?
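
A hedged way to check: lsinitramfs can list the contents of an initrd
image, so something like

  lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm

should show whether mdadm and its config file made it into the initrd
at all.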

> boot is *not* on any of these RAIDs; my system boots properly. 

Good.

> The new one, which I built today, resides on similar (but larger) 
> partitions on two 3TB disks.  I partitioned these drives today, using 
> gparted for gpt partitioning, then created a RAID1 from two 2.3TB 
> partitions on these disks, set up LVM2 on the RAID drive, created an LVM 
> partition, put an ext4 file system on it and filled it with lots of 
> data.  The partition definitely exists. 
> 
> But it is not recognized at boot.  The dmesg output tells me all about 
> finding the old RAID, but it doesn't even notice the new one, not even to 
> complain about it.

Did you update /etc/mdadm/mdadm.conf with an entry for the new array
that you just built?

See the output of --detail --scan and edit it into that file.

  mdadm --detail --scan
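
For example, to append the scanned entries directly (as root; this
emits a line for every array, so delete any duplicates of arrays that
are already listed):

  mdadm --detail --scan >> /etc/mdadm/mdadm.conf

The new entry should look something like this, with your array's
actual device name and UUID in place of these placeholders:

  ARRAY /dev/md1 metadata=1.2 UUID=01234567:89abcdef:01234567:89abcdef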

Did you rebuild the initrd images for the booting kernel after having
done this?

Example:

  dpkg-reconfigure linux-image-3.2.0-4-amd64
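
Alternatively, update-initramfs can rebuild the images for every
installed kernel in one step:

  update-initramfs -u -k all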

> Any ideas where to look?  Or how to work around the problem?

At one time in the past, Debian (and some other distros too) would
look at the partition type, see 0xFD as an autodetect RAID partition,
and automatically assemble it.  This was reported as a bug: if someone
trying to recover from a disk problem attached a random disk to a
system, then at boot time the init scripts would try to assemble it
automatically.  That was undesirable.

Due to that complaint the system was changed so that RAID arrays must
be explicitly listed in the mdadm.conf file.  And since an array
holding the root partition must be assembled at early boot time, that
file is copied into the initrd, so that if the root partition is on
RAID it can be assembled early enough.

I know that RHEL/CentOS at least moved from autodetecting 0xFD to an
explicitly assembled system too, for the same reasons.  But they do it
by specifying the UUIDs on the kernel command line from grub.  It
makes for some very long command lines.  I like the Debian choice
better.

In summary:

In Debian, after creating a new RAID, add the new array's entry to
/etc/mdadm/mdadm.conf and then run dpkg-reconfigure linux-image-$(uname -r).

In CentOS, after creating a new RAID, edit the grub config and add the
new rd_MD_UUID values to the grub boot command line.  Or use rd_NO_DM.
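
For illustration only (the kernel version, root device, and UUID here
are placeholders, not taken from any real system), such a grub stanza
might look like:

  kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg-root \
      rd_MD_UUID=01234567:89abcdef:01234567:89abcdef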

Bob
