
How to recreate a dmraid RAID array with mdadm (was: no subject)



> On Sun, 14 Nov 2010 06:36:00 +1100 <neilb@suse.de> wrote:
>> cat /proc/mdstat (showing what mdadm shows/discovers)
>>
>> Personalities :
>> md127 : inactive sda[1](S) sdb[0](S)
>> 4514 blocks super external:imsm
>>
>> unused devices:
>
> As imsm can have several arrays described by one set of metadata, mdadm
> creates an inactive array just like this one, which holds the set of
> devices, and then should create other arrays made from different regions
> of those devices.
> It looks like mdadm hasn't done that for you. You can ask it to with:
>
> mdadm -I /dev/md/imsm0
>
> That should create the real raid1 array in /dev/md/something.
>
> NeilBrown
>

Thanks for this information; I feel like I am getting closer to getting this working properly. After running the command above (mdadm -I /dev/md/imsm0), the real RAID1 array did appear under /dev/md/:

ls -al /dev/md
total 0
drwxr-xr-x  2 root root   80 Nov 14 00:53 .
drwxr-xr-x 21 root root 3480 Nov 14 00:53 ..
lrwxrwxrwx  1 root root    8 Nov 14 00:50 imsm0 -> ../md127
lrwxrwxrwx  1 root root    8 Nov 14 00:53 OneTB-RAID1-PV -> ../md126
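(For reference, the container and the newly assembled member volume can also be inspected directly; a quick sketch, device names as above:)

# details of the member array that mdadm -I just assembled
mdadm --detail /dev/md126

# per-disk imsm metadata, showing the volume(s) defined in the container
mdadm --examine /dev/sda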

---------------

And the kernel messages:

[ 4652.315650] md: bind<sdb>
[ 4652.315866] md: bind<sda>
[ 4652.341862] raid1: md126 is not clean -- starting background reconstruction
[ 4652.341958] raid1: raid set md126 active with 2 out of 2 mirrors
[ 4652.342025] md126: detected capacity change from 0 to 1000202043392
[ 4652.342400]  md126: p1
[ 4652.528448] md: md126 switched to read-write mode.
[ 4652.529387] md: resync of RAID array md126
[ 4652.529424] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 4652.529464] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 4652.529525] md: using 128k window, over a total of 976759940 blocks.
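
(The background resync kicked off above can be watched while it runs, e.g.:)

# refresh the resync progress every few seconds
watch -n 5 cat /proc/mdstat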
 
---------------

fdisk -ul /dev/md/OneTB-RAID1-PV 

Disk /dev/md/OneTB-RAID1-PV: 1000.2 GB, 1000202043392 bytes
255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

                 Device Boot      Start         End      Blocks   Id  System
/dev/md/OneTB-RAID1-PV1              63  1953503999   976751968+  8e  Linux LVM

---------------

pvscan 

  PV /dev/sdc7      VG XENSTORE-VG      lvm2 [46.56 GiB / 0    free]
  PV /dev/md126p1   VG OneTB-RAID1-VG   lvm2 [931.50 GiB / 0    free]
  Total: 2 [978.06 GiB] / in use: 2 [978.06 GiB] / in no VG: 0 [0   ]

---------------

pvdisplay 

 --- Physical volume ---
  PV Name               /dev/md126p1
  VG Name               OneTB-RAID1-VG
  PV Size               931.50 GiB / not usable 3.34 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238464
  Free PE               0
  Allocated PE          238464
  PV UUID               hvxXR3-tV9B-CMBW-nZn2-N2zH-N1l6-sC9m9i

----------------

vgscan 

  Reading all physical volumes.  This may take a while...
  Found volume group "XENSTORE-VG" using metadata type lvm2
  Found volume group "OneTB-RAID1-VG" using metadata type lvm2

-------------

vgdisplay

--- Volume group ---
  VG Name               OneTB-RAID1-VG
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.50 GiB
  PE Size               4.00 MiB
  Total PE              238464
  Alloc PE / Size       238464 / 931.50 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nCBsU2-VpgR-EcZj-lA15-oJGL-rYOw-YxXiC8

--------------------

vgchange -a y OneTB-RAID1-VG

  1 logical volume(s) in volume group "OneTB-RAID1-VG" now active

--------------------

lvdisplay 

--- Logical volume ---
  LV Name                /dev/OneTB-RAID1-VG/OneTB-RAID1-LV
  VG Name                OneTB-RAID1-VG
  LV UUID                R3TYWb-PJo1-Xzbm-vJwu-YpgP-ohZW-Vf1kHJ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                931.50 GiB
  Current LE             238464
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

------------------------

fdisk -ul /dev/OneTB-RAID1-VG/OneTB-RAID1-LV 

Disk /dev/OneTB-RAID1-VG/OneTB-RAID1-LV: 1000.2 GB, 1000190509056 bytes
255 heads, 63 sectors/track, 121599 cylinders, total 1953497088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbda8e40b

                             Device Boot      Start         End      Blocks   Id  System
/dev/OneTB-RAID1-VG/OneTB-RAID1-LV1              63  1953487934   976743936   83  Linux

-----------------------

mount -t ext4 /dev/OneTB-RAID1-VG/OneTB-RAID1-LV /mnt
mount
/dev/sdc5 on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sdc1 on /boot type ext2 (rw)
xenfs on /proc/xen type xenfs (rw)
/dev/mapper/OneTB--RAID1--VG-OneTB--RAID1--LV on /mnt type ext4 (rw)

-----------------

ls /mnt (and files are visible)

-------------------

Also, while the array is running (after manually running the command above), the error when updating the initramfs for the installed kernels is gone:

update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
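
(To double-check that a freshly built initramfs actually carries the mdadm pieces, its contents can be listed as well; a sketch, using one of the kernels above:)

# list mdadm-related files inside the new initramfs
zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -t 2>/dev/null | grep mdadm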


-----------------

But the remaining issue is that mdadm does not start the real RAID1 array on reboot, and the initramfs errors come right back, unfortunately (verbosity enabled):

1) update-initramfs -u -k all

update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.


2) dpkg-reconfigure --priority=low mdadm [leaving all defaults]

Stopping MD monitoring service: mdadm --monitor.
Generating array device nodes... done.
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
Starting MD monitoring service: mdadm --monitor.
Generating udev events for MD arrays...done.


3) update-initramfs -u -k all [again]

update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
-----------------

ls -al /dev/md/
total 0
drwxr-xr-x  2 root root   60 Nov 14 01:22 .
drwxr-xr-x 21 root root 3440 Nov 14 01:23 ..
lrwxrwxrwx  1 root root    8 Nov 14 01:23 imsm0 -> ../md127
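
(Running the incremental assembly by hand at this point brings the member array right back, so the metadata itself seems fine; it is only the boot-time assembly that is missing:)

# manual recovery after each reboot
mdadm -I /dev/md/imsm0
ls -al /dev/md/    # OneTB-RAID1-PV -> ../md126 reappears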

-----------------


How does one fix the problem of the array not starting at boot?

The files/configuration I have now:

find /etc -type f | grep mdadm
./logcheck/ignore.d.server/mdadm
./logcheck/violations.d/mdadm
./default/mdadm
./init.d/mdadm
./init.d/mdadm-raid
./cron.daily/mdadm
./cron.d/mdadm
./mdadm/mdadm.conf

find /etc/rc?.d/ | grep mdadm
/etc/rc0.d/K01mdadm
/etc/rc0.d/K10mdadm-raid
/etc/rc1.d/K01mdadm
/etc/rc2.d/S02mdadm
/etc/rc3.d/S02mdadm
/etc/rc4.d/S02mdadm
/etc/rc5.d/S02mdadm
/etc/rc6.d/K01mdadm
/etc/rc6.d/K10mdadm-raid
/etc/rcS.d/S03mdadm-raid


cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1
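
(For what it is worth, these two ARRAY lines match what mdadm derives from the on-disk metadata; they can be regenerated with the scan below, so the file contents themselves look correct:)

# regenerate the ARRAY lines from the superblocks
mdadm --examine --scan
# ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
# ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a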

--------------------


Again: how does one fix the problem of the array not starting at boot?
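
In the meantime, a stopgap I am considering (untested, and the script name is my own invention) is to re-run the incremental assembly from an early boot script, before LVM scans for physical volumes:

#!/bin/sh
# /etc/init.d/local-imsm-assemble -- hypothetical stopgap, untested
# Re-run incremental assembly on the imsm container so that the member
# RAID1 volume (/dev/md/OneTB-RAID1-PV) exists before LVM looks for PVs.
mdadm -I /dev/md/imsm0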



Thanks.
 

-M