
Re: Mdadm -- Restoring an array



On Tue, Oct 17, 2006 at 02:21:19PM -0400, Hal Vaughan wrote:
> On Tuesday 17 October 2006 11:15, michael wrote:
> > On Tue, 17 Oct 2006 03:07:01 -0400, Hal Vaughan wrote
> >
> > > Does anyone have experience rebuilding a mdadm RAID when the config
> > > info has been wiped out?  (I wouldn't think that would matter,
> > > since the mdadm config files that should have held the RAID info
> > > always seemed to be empty on my systems.)
 

I'm about to do a fresh install on a new computer, and I was trying to
get a handle on LVM, RAID, etc.  Not that long ago I got good answers,
including how to do the restore.  The thread was 'LVM root?' and
it was on the debian-amd64 list.  You can search the lists from the
Debian website and find the answer there.  Lennart Sorensen told me how
to set it up and how trivial it is to fix things if one drive fails.

Basically, with one drive failed the system still works with a
degraded array.  The setup I'll be doing has two 80 GB Seagate SATA
Barracuda 7200 disks, both partitioned the same, with corresponding
partitions in raid1.  The first partition is for /boot; the second
partition is the physical volume for VG0, which is then 'partitioned'
by LVs for /, /usr, /var, /home.
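For reference, here is a minimal sketch of building that layout.  The
device names, md numbers, and LV sizes are my assumptions for
illustration, not something from Lennart's post; adjust them to your
own disks:

```shell
# Mirror the two /boot partitions and the two LVM partitions
# (assumed to be sda1/sdb1 and sda2/sdb2 on both disks).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Put LVM on the second mirror: md1 becomes the physical volume for VG0.
pvcreate /dev/md1
vgcreate VG0 /dev/md1

# Carve the volume group into logical volumes (sizes are illustrative).
lvcreate -L 8G  -n root VG0
lvcreate -L 8G  -n usr  VG0
lvcreate -L 8G  -n var  VG0
lvcreate -L 40G -n home VG0
```

These commands operate on real disks, so treat them as a recipe to
adapt, not something to paste blindly.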

Here's the relevant section on how a restore works that I copied from
Lennart's post.  Read the whole thread to see my torturous learning
curve on this.

Good Luck,

Doug.

--- Lennart Sorensen's post:


> Can you give me either a URL or a thumbnail sketch of how to deal with a
> disk failure if I set it up as you suggest?

If a disk fails, mdadm will send an email about it (you can see it in
/proc/mdstat too).  You then shut down at a convenient time, replace
the broken disk, and boot up again.  Copy the partition table from the
working disk to the new disk (making the still-working disk the first
disk makes this easier) using something like dd if=/dev/sda
of=/dev/sdb bs=512 count=1, then reread the partition table with
hdparm -z /dev/sdb.  Finally, ask mdadm to add the new partitions on
the new sdb: mdadm --add /dev/md0 /dev/sdb1; mdadm --add /dev/md1
/dev/sdb2
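As a quick way to spot the degraded state mentioned above: in
/proc/mdstat a missing member shows up as an underscore in the status
brackets, e.g. [U_] instead of [UU].  This little check uses a sample
mdstat string so it can run anywhere; on a real system you would read
/proc/mdstat directly:

```shell
# Sample /proc/mdstat content for a two-disk raid1 with one member missing.
mdstat='md0 : active raid1 sda1[0]
      192640 blocks [2/1] [U_]'

# An underscore inside the [..] status field means a degraded array.
if printf '%s\n' "$mdstat" | grep -q '\[U*_[U_]*\]'; then
    echo "degraded array detected"
fi
```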

Then it will resync the mirror, and when done it will be all back to
normal.
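The steps above can be collected into one sketch.  Everything here
assumes the failed disk was sdb, the surviving disk is sda, and there
are exactly two mirrored partitions, as in the example; adjust device
and md names to your own setup:

```shell
# 1. After replacing the broken disk and booting from the surviving one,
#    copy the MBR (partition table plus boot code, first 512 bytes) across.
dd if=/dev/sda of=/dev/sdb bs=512 count=1

# 2. Make the kernel reread the new disk's partition table.
hdparm -z /dev/sdb

# 3. Add the fresh partitions back into the mirrors.
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2

# 4. Watch the resync progress until the arrays are clean again.
cat /proc/mdstat
```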



