
Re: RAID5 (mdadm) array hosed after grow operation (there are two of us)



Alex Samad wrote:
On Mon, Apr 20, 2009 at 08:26:21PM +0100, Seri wrote:
Hoping somebody might be able to provide me with some pointers that
may help me recover a lot of data: a home system with no backups
but a lot of photos. Yes, I know the admin rule (backup, backup, backup),
but I ran out of backup space (not a good excuse).

Not sure about the LVM side of things (fix the RAID bit first,
hopefully).

I would guess that you didn't update the initrd, so the mdadm.conf
in your initrd is the old one describing the 3-disk RAID 5 instead of the
4-disk one. Fix that first, then on reboot append init=/bin/bash to the
kernel boot options.
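
On a Debian-style system, fixing that up would look something like the
following (paths and flags are the usual defaults; adjust for your setup):

    # print the current array definitions; use the output to replace the
    # stale ARRAY line in /etc/mdadm/mdadm.conf
    mdadm --detail --scan
    # rebuild the initramfs so it carries the corrected mdadm.conf
    update-initramfs -u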

Booting with init=/bin/bash will drop you out of the boot process before
everything else happens; you should have root mounted. Check your md
devices first and make sure they have come up okay.
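
A quick check from that shell might be something like this (the md device
name is just an example):

    # show which arrays are assembled and whether any are degraded
    cat /proc/mdstat
    # detailed state of a specific array
    mdadm --detail /dev/md0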

As for the LVM: you might be lucky. If LVM hasn't started because of the
error, then you might not have lost anything.
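
To see whether LVM has activated anything yet, something like this should
tell you (volume group and LV names are whatever you created):

    # list physical volumes and the volume groups they belong to
    pvs
    # lvscan reports each logical volume as ACTIVE or inactive
    lvscan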

I just got badly bitten by this. I had root on LVM on md (RAID 1). After one of the component drives died, LVM came back up directly on top of the surviving component drive during boot from the initrd, making it impossible to rebuild the RAID array: the component drive with all the data was already mounted.
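
If you catch it from a rescue environment before anything is mounted, the recovery is roughly: deactivate the volume group so the surviving component is no longer in use, reassemble the degraded array, and add the replacement drive. The device and VG names below are made up purely for illustration:

    # release the surviving component drive from LVM
    vgchange -an myvg
    # assemble the degraded RAID 1 from the good component and start it
    mdadm --assemble /dev/md0 /dev/sdb1 --run
    # add the replacement drive and let it resync
    mdadm /dev/md0 --add /dev/sdc1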

I ended up hosing my OS, but luckily not the data. I booted from a live disk, copied the data off to backup, then rebuilt everything from scratch.

Learned my lesson, though: there's no real reason to have root on LVM. It's now on a 3-disk RAID 1.
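
For reference, creating a root mirror like that is a one-liner; the partition names here are just placeholders:

    # three-way mirror, so root survives two drive failures
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1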

--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra


