
Bug#791794: RAID device not active during boot



The problem might be related to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=789152. In my case, however, everything seems to be fine as long as all hard disks within the RAID are working. The problem appears only if, during boot, one or more disks of the RAID device have a problem.

The problem might be related to the fact that jessie comes with a new init system which handles failing "auto" mounts during boot more strictly. If an "auto" mount fails to mount, systemd will drop to an emergency shell rather than continue the boot - see the release notes (section 5.6.1): https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
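
If I read that section correctly, one way to keep systemd from entering emergency mode for mounts that are not strictly needed for booting is to mark them "nofail" in /etc/fstab (this obviously does not help for the root filesystem itself). A sketch of such an entry - the UUID and mount point are only placeholders:

# non-essential data filesystem: do not drop to the emergency shell if it
# cannot be mounted, and only wait 30 seconds for the device to appear
UUID=<uuid-of-the-filesystem>  /data  ext4  defaults,nofail,x-systemd.device-timeout=30  0  2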

For example:
If you have installed your system to a RAID1 device and the system is hit by a power failure which at the same time damages one of the hard disks of this RAID1 device, your system will drop to an emergency shell during boot rather than boot from the remaining hard disk(s). I found that during boot the RAID device is (for some reason) no longer active and therefore not available under /dev/disk/by-uuid, which causes the drop to the emergency shell.
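
This can be confirmed from the emergency shell with something like the following (just a sketch; /dev/md0 and the exact output are assumptions, adjust to your array):

cat /proc/mdstat                # the RAID device shows up as inactive (or not at all)
ls -l /dev/disk/by-uuid/        # no symlink for the filesystem on the RAID device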

A quick fix (to boot the system) is to re-activate the RAID device (e.g. /dev/md0) from the emergency shell ...

mdadm --stop /dev/md0
mdadm --assemble /dev/md0

... and to exit the shell.
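
Note that, as far as I know, mdadm --assemble /dev/md0 (without listing the member devices) relies on the array being described in /etc/mdadm/mdadm.conf; mdadm --assemble --scan would try to assemble all arrays from that file. If the assembled array still refuses to start because a member disk is missing, it can be forced to run in degraded mode (again assuming /dev/md0):

mdadm --run /dev/md0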

Nevertheless, it would be nice if the system booted automatically from the degraded array (as it is known to do under wheezy), in order to be able to use e.g. a spare disk for data synchronization.

