
Re: [SOLVED] Re: Raid1 with disk fails not boot



On 14/08/15 03:25 AM, Jose Legido wrote:
On Thu, Aug 13, 2015 at 7:59 PM, Gary Dale <garydale@torfree.net> wrote:
On 13/08/15 06:01 AM, Jose Legido wrote:
Hello!
I have a software RAID1 with 2 disks. When I remove one disk, the
system does not boot on its own; I have to intervene manually:

Loading, please wait...
Gave up waiting for root device. Common problems:
   - Boot args (cat /proc/cmdline)
     - Check rootdelay: (did the system wait long enough?)
     - Check root= (did the system wait for the right device?)
   - Missing modules (cat /proc/modules: ls /dev)
ALERT! /dev/disk/by-uuid/c511cf66-d987-477e-96fe-a5fc350d1bB4 does not
exist.
Dropping to a shell!
modprobe: module ehci-orion not found in modules.dep

BusyBox v1.22.1 (Debian 1:1.22.0-9+deb8u1) built-in shell (ash)
Enter 'help' for a list of built-in commands.

/bin/sh: can't access tty; job control turned off
(initramfs)

I look at the RAID:
(initramfs) cat /proc/mdstat

md0: inactive sda2[0](S)
          8090624 blocks super 1.2
unused devices: <none>

(initramfs) mdadm --detail /dev/md0
/dev/md0:
    Version: 1.2
    Raid Level: raid0
    Total Devices: 1
    Persistence: Superblock is persistent

    State: inactive

    Name: debian:0
    UUID: 688aa204:88cf0db8:4a94610d:a3e3dc03
    Events: 108

    Number  Major  Minor  RaidDevice
        -       8      2       -       /dev/sda2


If I restart the RAID manually, the server runs:

(initramfs) mdadm --stop /dev/md0
mdadm: stopped /dev/md0

(initramfs) mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 1 drive (out of 2).
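For reference, the stop-and-reassemble step above can often be collapsed into one command: mdadm's `--run` option starts an inactive array even though members are missing. A minimal sketch (the device name `/dev/md0` is taken from the thread; the guard makes it safe to run on a machine without the array):

```shell
# Start an inactive, degraded md array in one step.
# --run tells mdadm to start the array even with missing members.
MD_DEV=${MD_DEV:-/dev/md0}   # device name assumed from the thread
if command -v mdadm >/dev/null 2>&1 && [ -b "$MD_DEV" ]; then
    mdadm --run "$MD_DEV"
else
    # On a machine without the array, just show the command that would run.
    echo "would run: mdadm --run $MD_DEV"
fi
```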

The server boots:

# cat /proc/mdstat
Personalities: [raid1]
md0: active raid1 sda[0]
           8090624 blocks super 1.2 [2/1] [U_]
unused devices: <none>


# mdadm --detail /dev/md0
/dev/md0:

Version: 1.2
Creation Time: Wed Aug 12 10:11:58 2015
Raid Level: raid1
Array Size: 8030524 (7.72 GiB 8.28 GB)
Used Dev Size: 8030524 (7.72 GiB 8.28 GB)
Raid Devices: 2
Total Devices: 1
Persistence: Superblock is persistent

Update Time: Thu Aug 13 05:17:41 2015
State: clean, degraded
Active Devices: 1
Working Devices: 1
Failed Devices: 0
Spare Devices: 0

Name: debian:0 (local to host debian)
UUID: 688aa204:88cf0db8:4a94610d:a3e3dc03
Events: 154

Number  Major  Minor  RaidDevice  State
     0      8      2           0  active sync   /dev/sda2
     2      0      0           2  removed

I restart the server, add the disk back, and rebuild the RAID:
# mdadm /dev/md0 -a /dev/sdb2

And it works again.
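While the mirror rebuilds after the `-a` (re-add), `/proc/mdstat` shows the recovery progress. A read-only sketch for checking it (safe to run anywhere; on a machine without md arrays it just says so):

```shell
# Check rebuild/recovery progress of md arrays (read-only).
# While resyncing, /proc/mdstat shows a line like:
#   [=>...................]  recovery = 12.3% (...) finish=...min
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md arrays on this machine"
fi
```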

How can I make it boot without manual intervention?

Thanks!

A quick search with Google comes up with
http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot
which seems to be exactly what you want.
Thank Gary.
I had read that post before asking, but I thought it was for Ubuntu and
did not apply to Debian. I tried it and it did not work.
I read the post a second time, applied it correctly, and now it works. Thanks a lot!!!
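For anyone finding this thread later: the fix in the linked post amounts to telling the initramfs to go ahead and start degraded arrays. A sketch of the idea only (the `BOOT_DEGRADED` knob is the Ubuntu-style mechanism and the paths are assumptions, not from this thread; follow the linked answer for the exact Debian 8 steps). The sketch writes to a scratch file so it is safe to run without root:

```shell
# Sketch only: on a real system the target file would be
# /etc/initramfs-tools/conf.d/mdadm, followed by:  update-initramfs -u
# (both need root). Here we write a scratch copy instead.
conf=${CONF:-./mdadm.conf.example}   # hypothetical scratch path
cat > "$conf" <<'EOF'
# Allow the initramfs to start md arrays with missing members
BOOT_DEGRADED=true
EOF
grep 'BOOT_DEGRADED' "$conf"
```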

Maybe it is a bug? How can I report it to Debian?

Thanks!
reportbug mdadm

This is the normal way to report bugs with Debian. Run reportbug <package name> as a normal user.

