
rebooting, non-root raid, udev



Hello,

How is the boot process of RAID (using a Debian supplied kernel that
doesn't have RAID autodetect compiled in) meant to work with udev?

What seems to happen (at least on a recent testing/sarge-based
system):

1. The initrd script starts RAID for swap and /. No problems here; the
initrd script does the right thing (after I renamed the devfs names in
/etc/raidtab to non-devfs names, that is).

2. The kernel boots and transfers control to userland.

3. udev, I believe (not checked), starts early in the boot process. It
   creates /dev/md entries for the already-activated RAID partitions
   (swap and /), but nothing else, since RAID hasn't been initialised
   for the other devices yet.

4. /etc/init.d/raid2 attempts to initialise the other RAID partitions
   but fails to do so because the /dev/md* entries do not exist.
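To confirm that only the arrays started by the initrd got device
entries, one can compare the active arrays in /proc/mdstat against what
exists under /dev. A small sketch (the `active_arrays` helper and the
sample mdstat contents are illustrative, not from the real system):

```shell
#!/bin/sh
# active_arrays: print the names (md0, md1, ...) of arrays listed as
# active in /proc/mdstat-style input on stdin.
active_arrays() {
        awk '$1 ~ /^md[0-9]+$/ && $3 == "active" { print $1 }'
}

# Illustrative /proc/mdstat contents -- only the arrays the initrd
# started; on a real system use: active_arrays < /proc/mdstat
active_arrays <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      1951744 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      497856 blocks [2/2] [UU]
unused devices: <none>
EOF
```

Any array name printed here but missing from /dev/md* would indicate
udev removed or never created that node.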

As a hacked solution, I have added the following to the very start of
my /etc/init.d/raid2 (not tested yet in an automatic boot, but it
worked when I ran it manually):

# Create the missing md device nodes: block devices, major number 9,
# minor number = array number, plus a compatibility /dev/mdN symlink.
for i in 12 14 20 30 32
do
        mknod /dev/md/$i b 9 $i
        ln -s md/$i /dev/md$i
done

I think this will solve the problem.
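A variation of the hack above would derive the minor numbers from the
device names instead of hard-coding them. This is only a sketch (the
`print_md_nodes` helper is hypothetical): it prints the mknod/ln
commands rather than running them, so they can be reviewed before being
run as root or piped to sh:

```shell
#!/bin/sh
# print_md_nodes: given md device names, print the mknod/ln commands
# that would create their device nodes.  md devices are block devices
# with major number 9; the minor number selects the array.
print_md_nodes() {
        for dev in "$@"
        do
                # strip everything up to the trailing digits:
                # /dev/md12 -> 12, /dev/md/12 -> 12
                i=${dev##*[!0-9]}
                echo "mknod /dev/md/$i b 9 $i"
                echo "ln -s md/$i /dev/md$i"
        done
}

# Example with the array names used in the hack above:
print_md_nodes /dev/md12 /dev/md14 /dev/md20 /dev/md30 /dev/md32
```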

Now that I know what happens in practice, what is meant to happen in
theory?

I haven't observed anything like this behaviour with devfs, so I
suspect this is udev specific.
-- 
Brian May <bam@debian.org>


