Joey L wrote:
> I think I did everything and this is a standard configuration - I did
> not do anything too crazy!
> Again - all this is software RAID --- the /boot is RAID1 and the other
> volumes are software RAID5. The RAID5 has LVM filesystems.
> Here is fdisk -l:
> thor:/home/mjh# fdisk -l
> [...snip long partition list]
> Disk /dev/sda: 500.1 GB, 500107862016 bytes
> 255 heads, 63 sectors/track, 60801 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *           1          36      289138+  fd  Linux raid
> /dev/sda2              37       60801  488094862+  fd  Linux raid
> On the mdadm --detail --scan --verbose:
> thor:/home/mjh# mdadm --detail --scan --verbose
> ARRAY /dev/md0 level=raid1 num-devices=2 spares=3
> ARRAY /dev/md1 level=raid5 num-devices=4 spares=1

Ok, so all partitions are of "fd" type, and you sure have plenty of spares on the first array!
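As a side note, if you want to see exactly which members ended up as spares on md0, the usual suspects (run under the old, working kernel) would show it:

cat /proc/mdstat
mdadm --detail /dev/md0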
I have to say that even when not overtired ;-) I am short of ideas about
what could be going wrong here.
You could check that the md-raid* drivers are compiled as modules (m) in the new kernel:

egrep -i '(raid|_md_)' /boot/config-2.6.29
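For reference, on a stock Debian kernel I would expect that egrep to turn up lines roughly like the ones below (option names from memory, so double-check against your own config):

CONFIG_BLK_DEV_MD=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID456=m

If RAID1 or RAID456 shows up as "is not set" in the new config, that alone would explain the failed assembly.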
Check the initramfs scripts as well: you should see at least "mdadm" and
"lvm2" in there.
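One way to verify, assuming a Debian-style gzip-compressed cpio initrd (adjust the image name to match your new kernel):

zcat /boot/initrd.img-2.6.29 | cpio -it | egrep -i '(mdadm|lvm)'

If mdadm or the lvm bits are missing from the listing, regenerating the image with update-initramfs -u -k <your kernel version> would be the next thing I'd try.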
Outside of this I don't know. If it boots fine with the old kernel then
the superblocks on the array members are fine, so it must be a difference
in kernel config or initrd.
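If the old kernel's config is still in /boot, a quick diff might point at the culprit; the old file name below is just a placeholder, use whatever version you actually have:

diff /boot/config-<old-version> /boot/config-2.6.29 | egrep -i '(raid|_md_|lvm)'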
Sorry, I can't think of anything else right now. Can you spot anything
else in dmesg a bit more verbose than "failed to assemble all arrays"?
A controller initialization problem, maybe?
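Something along these lines, run from the initramfs shell when it drops you there (or against the old kernel's boot for comparison), might surface more detail; the patterns are only a suggestion:

dmesg | egrep -i '(md:|raid|ata|scsi)'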