
Re: booting from raid1 fails after upgrading from kernel 3.2 to 3.10-3 amd64



e.waelde wrote:

> Hello,
> 
> my main workstation runs its root-filesystem on lvm
> on crypt_luks on raid1 (software raid). Everything
> works flawlessly with kernel-image-3.2.0-4-amd64
> 
> However, all later kernels that I tried fail to boot,
> e.g. kernel-image-3.10-3-amd64
> + grub2 starts the kernel
> + the kernel starts querying all hardware.
>   It somehow fails to assemble the raid, it seems. At least
>   I do not see any raid-related messages (even after adding "verbose
>   debug" to the kernel argument list).
> 
> I inspected the contents of the initial ramdisks (3.2 and 3.10) and did
> not find anything sufficiently different in conf, etc, scripts.
> 
> Can anyone confirm this setup is working on amd64 with kernel 3.10, say?
> 
> Any pointers on how to better debug this? I once managed to get a shell
> in the initramfs stage. I could load raid1, assemble the raid manually,
> luksOpen the encrypted partition, start lvm ... but then I did something
> which locked the system (unfortunately I cannot remember how I got
> there). Unfortunately all further attempts to drop into a shell in
> initramfs have failed for me.
> 
> FWIW: I was able to reproduce this problem by installing wheezy on two
> empty disks, then upgrading to unstable and trying to boot the newer kernel.
> So I suspect I missed something during the upgrade ...
> 
> cpu: AMD Phenom(tm) 9550 Quad-Core Processor
> disks: connected through SATA
> OS: Debian unstable
> 
> 
> Any ideas on how to proceed?
> 
> Erich
> 
> 

Hi, no one has answered for a while, so I will try to help.

I had similar issues, but let's establish some common ground first.

I assume you have a plain /boot partition, plus lvm on a LUKS-encrypted raid1
holding your other filesystems. This is the recommended setup.
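
To make sure we are talking about the same layout, something like this prints
the whole block-device stack (the column selection is just a suggestion):

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

You should see the partitions, then raid1, then crypt, then lvm, then your
filesystems.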

So after installing a new kernel (and its initramfs), there are a few places
where it can go wrong.

1. The initramfs is not recreated, or not recreated properly.
-> Check your /etc/initramfs-tools config files.
-> Make sure the needed modules (md, dm-crypt, etc.) are included in the initramfs.
(In my setup the raid modules are compiled into the kernel.)

/etc/initramfs-tools/modules
# List of modules that you want to include in your initramfs.
#
# Syntax:  module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
dm-mod
loop

-> Boot with a working kernel and recreate the initramfs file for the 3.10
kernel.
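
Something along these lines should do it (a sketch only; the module names and
the 3.10-3-amd64 version string are what I would expect here, substitute
whatever dpkg -l 'linux-image-*' and /boot show on your machine):

# add the raid/crypt modules if they are not built into your kernel
echo raid1 >> /etc/initramfs-tools/modules
echo dm_crypt >> /etc/initramfs-tools/modules

# rebuild the initramfs for the new kernel
update-initramfs -u -k 3.10-3-amd64

# verify the modules actually made it into the image
lsinitramfs /boot/initrd.img-3.10-3-amd64 | grep -E 'raid1|dm-crypt'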

2. It can fail because of the switch from /dev/sd* device names to UUIDs.
-> Check the grub.cfg or menu.lst files in /boot/grub.
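
A quick way to cross-check (just a sketch):

# UUIDs as the kernel and initramfs will see them
blkid
# what grub actually passes as root= on the kernel command line
grep 'root=' /boot/grub/grub.cfg

The two should agree; if grub still references /dev/sdXn, fix that (or re-run
update-grub) before rebooting.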

3. I was frequently getting entries like
root            (hd0,msdos1)

-> Changing it to (hd0,1), or whatever value matches your disk drive and
partition, solves it.
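
For example, in the boot entry (press 'e' at the grub menu; grub2 syntax is
assumed here, and the values are just the ones from my machine):

set root='(hd0,msdos1)'     # before
set root='(hd0,1)'          # after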



To debug initramfs issues, edit the kernel command line in grub (press 'e' on
the boot entry) so that it executes /bin/sh instead of init:

linux   /vmlinuz-3.10.9eko2 root=UUID=d48838a6-4c46-452a-xxxx-1fa624eb1c6e 
ro init=/bin/sh

From that shell you will usually load md-mod and friends, assemble the raid,
luksOpen the crypted device, activate the lvm, mount the root partition and
then init the system.

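For the assembly part, something like this (a sketch; the device names
/dev/md0, /dev/sda1, /dev/sdb1 and the mapping name md0_crypt are assumptions,
use whatever your setup has):

modprobe raid1
modprobe dm_crypt
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
cryptsetup luksOpen /dev/md0 md0_crypt
vgchange -ay                 # activate the lvm logical volumes
mkdir -p /new

Then mount the root volume and hand control over to init:
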
# mount the root lv read-only (adjust the fs type and mapper name to your setup)
mount -t ext3 -o ro /dev/mapper/root /new
cd /new
# chroot into the real root and start init; ${CMDLINE} should hold the kernel
# arguments you want to pass on (it can be left empty)
exec usr/sbin/chroot . /bin/sh <<- EOF >dev/console 2>&1
exec /sbin/init ${CMDLINE}
EOF

If this fails, it is better to press CTRL+ALT+DEL to reboot than to type exit.

I hope this helps


