
Re: Problem with Raid Array persistence across reboots.



Chandler, Alan wrote:
>  
> I created a raid array with mdadm, thus
>  
> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]4
>  
> and then turned /dev/md0 into a LVM physical volume, volume group and
> some logical volumes.
>  
> This worked great until I rebooted, at which point the start-up scripts
> failed to recreate the raid array, and I got into tricky problems with
> duplicate LVM PVs with the same UUID. [and ironically, since I used raid
> to avoid it, some data loss - although fortunately I DO have backups]
>  
> Two questions
>  
> 1) In the Debian world, how do you make raid arrays persistent across
> reboots?
>  
> [It appears that Debian does not use raidtools and /etc/raidtab as the
> Linux RAID HOWTO says]
>  
> 2) If I do manage to create the array, what stops vgscan during LVM
> startup from picking up three physical volumes (/dev/md0, /dev/sda4 and
> /dev/sdb4) with the same UUID, and makes it find only /dev/md0?

Greetings Alan:

You don't mention which version of Debian you're using - the RAID tools
have varied a lot over the last few releases - but assuming Sarge:

The configuration is stored in /etc/mdadm/mdadm.conf and looks like:

DEVICE partitions
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=62297b2a:13d5cbb3:b889437a:6095a0d0
   devices=/dev/sda6,/dev/sdb6
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=27192fb9:88d9191f:bbbf6c42:8656233f
   devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=b483771d:b60355eb:afe973c0:92db52e2
   devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=aa772038:8b2a1989:339b0f8f:f93b96b5
   devices=/dev/sda1,/dev/sdb1

by default (my arrays were created by the installation program, not by
hand).  Each ARRAY entry is a single line ending in its UUID, with the
devices= continuation line indented beneath it.
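
If you built your array by hand, you don't have to type those UUIDs in
yourself: mdadm can scan the running arrays and print matching ARRAY
lines.  A minimal sketch (do review the output by hand before rebooting
on it - duplicate or stale ARRAY entries will confuse mdadm):

   # Start a fresh config, then append one ARRAY line per running array.
   echo 'DEVICE partitions' > /etc/mdadm/mdadm.conf
   mdadm --detail --scan >> /etc/mdadm/mdadm.conf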

To answer your questions:

1) If you have a configuration file in /etc/mdadm, it should be mostly
automagic, provided that all of the required drivers are either built
into your kernel or included in your initrd.  Even without the
configuration file, mdadm will usually do a pretty good job of
assembling arrays by reading the superblocks.  The biggest problem here
is the drivers not being loaded early enough.
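
You can rehearse what the boot scripts attempt from a running system
(assuming your raid1 personality is built as a module rather than into
the kernel):

   # Load the raid1 personality, then assemble every array listed in
   # /etc/mdadm/mdadm.conf (or found by scanning the superblocks).
   modprobe raid1
   mdadm --assemble --scan

If that works by hand but the array is missing after a reboot, the md
drivers are probably absent from your initrd.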

2) More magic.  LVM2 looks at the superblocks and ignores the MD
component devices by default.  That behavior is configurable in the
/etc/lvm/lvm.conf file with the md_component_detection directive.
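
The relevant stanza looks roughly like this (it should already be the
default on Sarge's lvm2, but verify against your own lvm.conf):

   devices {
       # Skip any device that carries an MD superblock, so /dev/sda4
       # and /dev/sdb4 are ignored and only /dev/md0 is scanned as a PV.
       md_component_detection = 1
   }

If your lvm2 is too old to know that option, a filter in the same
devices section - e.g. filter = [ "a|/dev/md.*|", "r|.*|" ] - can
restrict scanning to the md devices instead.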

Congratulations on the backup.  We've had arrays go sideways during
upgrades, but we've always managed to put them back together without too
much damage.

Good Luck.

-Scott


