RE: starting second md device at boot time
>I wonder if this is an order problem? It seems that your config file is set
>up properly for the first array, and I'd guess it's OK for the second
>array, especially if you can do "mdadm -As /dev/md1" (after a reboot) and
>it works. If you can, that leaves the only difference as the file system.
>Did you add the XFS module to the initial ram disk? It probably needs to
>be in there, because /share is probably mounted the same time that / is
>mounted, so you can't count on it loading the XFS driver until after it
>tries to mount the file systems.
I can start the array with mdadm -As without a problem. I have been looking through the dmesg log and can see that XFS fails to mount because the superblock (SB) read failed. I'm guessing this means it can't see the file system because the array didn't start?
>If, on the other hand, you can't start the array without specifying the
>partitions involved, then see below. Your config file may have a problem.
>OK, but I wonder if your problem will go away if XFS is in the initrd.
I will try to create a new initrd including XFS (if I can remember how!).
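For reference, a sketch of how the XFS module might get added to the initrd on a Debian box; the exact file and command depend on which initrd tool the system uses, so treat the paths below as assumptions rather than exact instructions:

```shell
# With initramfs-tools: list xfs in the modules file,
# then regenerate the initrd for the running kernel.
echo xfs >> /etc/initramfs-tools/modules
update-initramfs -u

# With the older mkinitrd tool, the module list lives in
# /etc/mkinitrd/modules instead:
echo xfs >> /etc/mkinitrd/modules
mkinitrd -o /boot/initrd.img-$(uname -r) $(uname -r)
```

Either way, make sure the boot loader entry points at the regenerated image.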
By the way, I added the devices line to the mdadm.conf file, but it hasn't helped. I'm starting to wonder what effect the superblocks have on the device starting, but I haven't been able to find out much beyond the --zero-superblock command, and I do get this when I try to run it:
file-srvdeb:/home/rich# mdadm --zero-superblock /dev/md1
mdadm: /dev/md1 does not appear to have an MD superblock.
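For what it's worth, that error is expected: the MD superblock lives on the component partitions, not on the assembled /dev/md1 device, so mdadm finds nothing there to zero. A non-destructive way to look at the superblocks is to examine the components directly (the device names here are placeholders, assuming the array is built from /dev/hda1 and /dev/hdb1):

```shell
# Print the MD superblock of each component partition
# (read-only; does not modify anything).
mdadm --examine /dev/hda1
mdadm --examine /dev/hdb1

# --zero-superblock, by contrast, erases that superblock and
# should only ever be run on a component partition that is no
# longer part of an array you care about.
```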
>Hmm, it seems that your UUIDs output by mdadm after the raid is running do
>not match the ones you posted from the config file snippet. That is
>definitely a problem. I think you might need to change the values in the
>config file to match the values in the actual array.
Sorry, this was my fault. The UUIDs do match; I think I may have re-created the array since then and sent you files from either side of the creation.
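For completeness, a minimal mdadm.conf of the shape being discussed; the UUIDs and device names below are placeholders for illustration, not values from the actual arrays:

```
DEVICE /dev/hd*[0-9] /dev/sd*[0-9]
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 UUID=11111111:11111111:11111111:11111111
```

With a DEVICE line like this, "mdadm -As" scans each listed partition for an MD superblock and assembles any whose UUID matches an ARRAY line, which is why the config-file UUIDs have to match the ones reported by mdadm --examine.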