Joey L wrote:
> I think I did everything right and this is a standard configuration - I
> did not do anything too crazy!
> Again - all this is software RAID --- /boot is RAID1 and the other
> volumes are software RAID5. The RAID5 has LVM filesystems.
>
> here is fdisk -l :
>
> thor:/home/mjh# fdisk -l
>
> Disk /dev/sda: 500.1 GB, 500107862016 bytes
> 255 heads, 63 sectors/track, 60801 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot Start End Blocks Id System
> /dev/sda1   *           1          36      289138+  fd  Linux raid autodetect
> /dev/sda2              37       60801   488094862+  fd  Linux raid autodetect
>
[...snip long partitions list]
>
>
> On the mdadm --detail --scan --verbose :
>
>
> thor:/home/mjh# mdadm --detail --scan --verbose
> ARRAY /dev/md0 level=raid1 num-devices=2 spares=3
> UUID=8a435040:c6f27178:02026e74:21deb7ac
> devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
> ARRAY /dev/md1 level=raid5 num-devices=4 spares=1
> UUID=f60a4f26:891a29c2:8dbe0712:bd7a69ac
> devices=/dev/sda2,/dev/sdb2,/dev/sdc2,/dev/sdd2,/dev/sde2
>
>
OK, so all partitions are of type "fd", and you certainly have plenty of
spares on the first array!
I have to say that even when not overtired ;-) I am short of ideas about
what could be going wrong here.
You could check whether the md/raid drivers are compiled as modules (m)
or built in (y) in the new kernel:
egrep -i '(raid|_md_)' /boot/config-2.6.29
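For reference, that grep would produce lines like the ones below (a sketch; the exact option names and the config filename depend on your kernel version):

```shell
# Minimal sketch: distinguish built-in (=y) from modular (=m) RAID support.
# The config filename is an example; use the kernel that fails to boot.
egrep -i '(raid|_md_)' /boot/config-2.6.29
# Typical lines of interest:
#   CONFIG_MD_RAID1=m     <- module: the initramfs must load raid1.ko
#   CONFIG_MD_RAID456=m   <- module: needed for the RAID5 array
# If these are =y the kernel can assemble arrays without initramfs modules;
# if a needed personality is absent, the arrays cannot be assembled at all.
```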
Then check for the initramfs scripts:
ls /usr/share/initramfs-tools/scripts/local-top/
You should see at least "mdadm" and "lvm2" there.
Beyond that, I don't know. If it boots fine with the old kernel, then
the superblocks on the array members are fine, so it must be a
difference in the kernel config or the initrd.
Sorry, I can't think of anything else right now. Can you spot anything
in dmesg a bit more verbose than "failed to assemble all arrays"?
A controller initialization problem, perhaps?
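For example, filtering the kernel log for md-related lines, and trying a manual assembly from the initramfs emergency shell, would narrow it down (a sketch, assuming you can reach a shell on the failing boot):

```shell
# Pull the md/raid and disk-detection lines out of the kernel log:
dmesg | egrep -i '(md[0-9:]|raid|sd[a-e])'
# From the initramfs emergency shell, try assembling by hand to get
# a per-array error message instead of the generic failure:
mdadm --assemble --scan --verbose
```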