
Bug#601782: installation report - beta1 installer fails with kernel error - drivers/md/md.c:6192, invalid opcode



On 10/29/2010 12:01 PM, Lennart Sorensen wrote:
> On Fri, Oct 29, 2010 at 11:40:19AM -0400, Clyde E. Kunkel wrote:
>> <Description of the install, in prose, and any thoughts, comments
>>       and ideas you had during the initial install.>
>>
>> hardware is an asus p5e-wspro with intel bios raid enabled and 4 of 5
>> sata drives making up a raid 10 set.  The set is partitioned with P1,
>> an NTFS partition containing win 7, and P2, a PV containing a VG used
>> for LVs for various distros including fedora 14, ubuntu, and suse.
> 
> So you say you have enabled the intel bios raid, but your logs say you
> are using md raid.  I don't think both of those can be true.  dm raid
> might be able to work with the bios raid, but that isn't very stable
> nor supported by the installer (unless you turn on a special boot option).
> 
> I have one machine set up with intel bios raid and am going to reinstall
> it without it soon and use md raid instead.  The intel raid support
> in dm raid is so incomplete that if you have a degraded raid, it can't
> assemble the raid.  It can't rebuild, and it can't really do anything
> at all useful if any drive ever fails.  It may be handy when dual booting,
> but otherwise it is a complete nightmare.
> 

Hi, thanks for the response.

I did not choose to use mdadm; the debian installer provided it and I
used it.  At tty2 during the install attempt I simply ran
mdadm --detail /dev/md* in order to provide as much info as possible.
The results of mdadm --detail do not show a degraded array.  Also, I
notice that the partitioner correctly displays the md and the vgs/lvs
before the kernel bugchecks.
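
For reference, this is roughly what I ran from the tty2 shell; the
exact device names will differ from system to system, so treat them as
examples only:

    cat /proc/mdstat           # kernel md state
    mdadm --detail /dev/md*    # per-array detail, as quoted in the report
    mdadm --examine --scan     # scans superblocks and lists the arrays found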

I do have ubuntu installed on the same configuration and it uses dmraid,
AFAICT.  I also have suse and fedora installed and they both use mdadm
and I have not seen any raid degradation.
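
In case it helps anyone reproduce this, a quick way to tell which layer
is handling a bios raid set (assuming both tools are available on the
system in question) is something like:

    cat /proc/mdstat      # md arrays, if any
    dmraid -s             # raid sets discovered by dmraid
    dmsetup table         # device-mapper mappings currently in use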

I am also aware that the consensus seems to be not to use bios raid,
since you can achieve the same result with mdadm.  In fact, on my other
test machine, that is the way I have configured a raid 10 set serving
as a pv for multiple LV installations.  That said, since there are many
mobos with bios raid, I think it prudent to test that configuration to
ensure the software is robust.
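
Roughly, that setup was along these lines (the device, vg, and lv names
here are only illustrative, not the exact ones on that machine):

    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # raid 10 over 4 partitions
    pvcreate /dev/md0                             # make the md device a pv
    vgcreate vg_test /dev/md0                     # vg on top of that pv
    lvcreate -L 20G -n lv_root vg_test            # one lv per distro install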

If dmraid is what debian is using for bios raid, perhaps a switch to
mdadm would be prudent.
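
As I understand it, mdadm 3.x can recognize the Intel bios raid (imsm)
metadata directly; only a sketch, but along the lines of:

    mdadm --examine --scan     # should report an imsm container if present
    mdadm --assemble --scan    # assemble the container and the arrays in it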

Regards,
OldFart


