
RE: How to recreate a dmraid RAID array with mdadm



> On Wed, 17 Nov 2010 14:15:14 +1100 <neilb@suse.de> wrote:
>
> This looks wrong. mdadm should be looking for the container as listed in
> mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
>
> Can you:
>
> mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
>
> so I can compare the uuids?
>

Sure.

# definitions of existing MD arrays (so you don't have to scroll down :P)


ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383

ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

mdadm -E /dev/sda /dev/sdb

/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 601eee02
         Family : 601eee02
     Generation : 00001187
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
       Checksum : 2f91ce06 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : STF604MH0J34LB
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OneTB-RAID1-PV]:
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
     RAID Level : 1
        Members : 2
          Slots : [UU]
      This Slot : 0
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : STF604MH0PN2YB
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 601eee02
         Family : 601eee02
     Generation : 00001187
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
       Checksum : 2f91ce06 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : STF604MH0PN2YB
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OneTB-RAID1-PV]:
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
     RAID Level : 1
        Members : 2
          Slots : [UU]
      This Slot : 1
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : STF604MH0J34LB
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

----------------------------------
cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1
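For what it's worth, the comparison Neil asked for can be done mechanically: pull every UUID-shaped token out of both outputs and de-duplicate. This is only a sketch; `examine.txt` and `conf.txt` are hypothetical stand-ins for saved copies of the `mdadm -E` output and mdadm.conf above, and it assumes a grep with `-E`/`-o` support (GNU or BSD).

```shell
# Stand-in files holding the relevant lines from the output above.
cat > examine.txt <<'EOF'
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
EOF
cat > conf.txt <<'EOF'
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
EOF

# Extract every UUID-looking token (four colon-separated 8-hex-digit
# groups) from both files and collapse duplicates. If the superblocks
# and mdadm.conf agree, only the container UUID and the member UUID
# should remain.
grep -hEo '[0-9a-f]{8}(:[0-9a-f]{8}){3}' examine.txt conf.txt | sort -u
# → 084b969a:0808f5b8:6c784fb7:62659383
#   ae4a1598:72267ed7:3b34867b:9c56497a
```

Two unique lines means the on-disk metadata and the config file match, which seems to be the case here; the question is why assembly fails anyway.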


-M
