RAID5 (mdadm) array hosed after grow operation (there are two of us)
I'm hoping somebody can give me some pointers that might help me recover
a lot of data. This is a home system with no backups but a lot of photos;
yes, I know the admin rule, backup, backup, backup, but I ran out of
backup space (not a good excuse).
I saw a few months back that somebody did the same thing I did,
although I'm hoping my situation might be a little more recoverable.
My MD RAID5 set was getting dangerously short on space, so I purchased
an additional drive, the same size as the existing drives, to add to the
set.
I did a SATA hot plug to add the drive and issued a SCSI bus rescan to
make it show up.
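For reference, the rescan was along these lines (the host number here is
just a placeholder, I don't remember which one it actually was):

  # rescan the SATA/SCSI host so the hot-plugged disk shows up
  echo "- - -" > /sys/class/scsi_host/host2/scan
  # check the kernel log to confirm the new disk came up as /dev/sde
  dmesg | tail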
I then ran "mdadm --add /dev/md0 /dev/sde" followed by "mdadm --grow
/dev/md0 --raid-devices=4". Lastly, I updated /etc/mdadm.conf to
incorporate the new drive (changed ARRAY /dev/md0
devices=/dev/sdb,/dev/sdc,/dev/sdd to ARRAY /dev/md0
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde).
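So, reconstructed from memory, the whole grow sequence was roughly:

  # add the new disk as a spare, then reshape the array onto it
  mdadm --add /dev/md0 /dev/sde
  mdadm --grow /dev/md0 --raid-devices=4
  # I kept an eye on the reshape with
  cat /proc/mdstat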
The next morning when I woke up, mdadm had worked its magic and I had a
bigger RAID5 set. I use whole drives, so I don't partition them first;
I'm hoping this isn't my first mistake.
I also run LVM2 on top of my RAID5 set, so I then issued a pvresize
command, which seemed to work with no problems. Lastly, I performed a
vgextend, if memory serves correctly, although I don't think it grew
anything since the pvresize had already taken care of that.
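Again from memory, the LVM side looked roughly like this (as I understand
it, vgextend only matters when adding a new PV, which I wasn't, so I
believe it changed nothing):

  # grow the PV to match the bigger array, then sanity-check
  pvresize /dev/md0
  pvdisplay /dev/md0
  vgdisplay vg_raid5   # free PE count had gone up at this point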
This was all on a stock 2.6.27.7 kernel downloaded from kernel.org, with
no patches.
Today I went to reboot, and the first thing I found was that one of the
drives in the MD set was marked as failed. However, a quick check with
smartmontools showed it to be fine, so I added it back into the RAID5
set (mdadm -a /dev/md0 /dev/sde).
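For completeness, the check and the re-add went something like:

  # SMART looked clean on the supposedly failed disk
  smartctl -a /dev/sde
  # the array showed sde as failed/removed, so I re-added it
  mdadm --detail /dev/md0
  mdadm -a /dev/md0 /dev/sde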
Now I cannot recover my LVM2 volume group: every command I issue returns
"Incorrect metadata area header checksum" somewhere in its output.
I've tried to perform a vgcfgrestore of the last backed-up config set,
but it refuses to apply it.
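The restore attempts so far were along these lines (using the backup LVM
keeps under /etc/lvm/backup; the exact invocations are from memory):

  vgcfgrestore vg_raid5
  vgcfgrestore -f /etc/lvm/backup/vg_raid5 vg_raid5
  # both refuse to apply the backup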
If anyone can make any suggestions as to how I might begin to recover
some of my data I would be extremely grateful.
Thank you ever so much
Seri
Some command outputs follow in case they help:
-------------------------------------------------------------------------------------------------------
mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Jul 16 20:39:56 2008
Raid Level : raid5
Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 20 19:54:18 2009
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 31% complete
UUID : 39e0de83:f3a113aa:e7aaf5bb:f7cc79a2
Events : 0.330686
    Number   Major   Minor   RaidDevice   State
       0       8       32        0        active sync        /dev/sdc
       1       8       16        1        active sync        /dev/sdb
       2       8       48        2        active sync        /dev/sdd
       4       8       64        3        spare rebuilding   /dev/sde
-------------------------------------------------------------------------------------------------------
vgdisplay -v
Finding all volume groups
Incorrect metadata area header checksum
Finding volume group "vg_raid5"
Incorrect metadata area header checksum
Incorrect metadata area header checksum
WARNING: Volume group "vg_raid5" inconsistent
--- Volume group ---
VG Name vg_raid5
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 14
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.36 TB
PE Size 4.00 MB
Total PE 357704
Alloc PE / Size 289669 / 1.10 TB
Free PE / Size 68035 / 265.76 GB
VG UUID SBmEUc-cYR1-ee2H-gH3K-1jE9-rQrw-3spchH
--- Logical volume ---
LV Name /dev/vg_raid5/vmware
VG Name vg_raid5
LV UUID f1a43y-S3nj-Pim1-E0vE-m3E7-v2R2-2pNmSe
LV Write Access read/write
LV Status NOT available
LV Size 1.10 TB
Current LE 288389
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Name /dev/vg_raid5/store
VG Name vg_raid5
LV UUID nec9VW-bWKz-HfZ7-8Wm3-kDhM-7lBu-2fbjbf
LV Write Access read/write
LV Status NOT available
LV Size 5.00 GB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
--- Physical volumes ---
PV Name /dev/md0
PV UUID lG8t3z-VzVU-JI6C-w8q0-6NnE-a2xH-WEDfDr
PV Status allocatable
Total PE / Free PE 357704 / 68035
-------------------------------------------------------------------------------------------------------