
Re: Mdadm -- Restoring an array



On Tuesday 17 October 2006 11:15, michael wrote:
> On Tue, 17 Oct 2006 03:07:01 -0400, Hal Vaughan wrote
>
> > I have a server with one drive that has the boot and system on it
> > and a RAID5 device managed by mdadm.  The RAID is made up of 3 hard
> > drives with a 4th spare also hooked up.
> >
> > The system drive crashed and I restored it.  The problem is in the
> > past, when I've tried to restore a RAID, I've had trouble with it.
> >  The man page for mdadm makes it look like a RAID can be
> > reassembled with just a --assemble option given on the command
> > line, but it keeps asking for more information.  I thought there
> > was a scan mode, to tell mdadm to scan local drives and re-assemble
> > an existing RAID.  I've tried different options previously to
> > restore an mdadm RAID, but had trouble.
> >
> > There is no data on this drive that can't be reconstructed, but to
> > do so would be a bit of a pain and take time (and the backup system
> > for this RAID was still experimental).
> >
> > Does anyone have experience rebuilding a mdadm RAID when the config
> > info has been wiped out?  (I wouldn't think that would matter,
> > since the mdadm config files that should have held the RAID info
> > always seemed to be empty on my systems.)
>
> It can be a little tricky sometimes with SW RAID when a drive dies,
> especially with SATA disks, but the mdadm program is quite smart.
> I've always used a .conf file even if one wasn't created.
> # mdadm --detail --scan |grep ARRAY > /etc/mdadm/mdadm.conf
> Then edited the file and added
> DEVICE partitions
> DEVICE /dev/md*

They're all good ol' IDE ATA drives.

The problem here is that any .conf files I had would be gone at this 
point, since it was / (which includes /boot) that crashed.

I finally found one page that suggested:

mdadm --examine --scan {list of devices}

I had it scan all 4 drives used in the RAID, including the spare.  It 
reported one drive as being part of the RAID, but didn't list the others 
at all.  I'm not sure if that's enough information for it to rebuild the 
RAID so that I could just add the extra drive to it afterwards.
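
One thing I may try first, just to see what's actually on each disk, is 
running --examine against each member individually (the device names 
below are only placeholders for my IDE drives):

mdadm --examine /dev/hda1
mdadm --examine /dev/hdb1
mdadm --examine /dev/hdc1

From what I've read, that only reads and prints the RAID superblock on 
each partition, so it should be safe, and if the UUIDs and RAID levels 
all agree I'll know the superblocks survived the crash.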

I tried your line:

mdadm --detail --scan |grep ARRAY

but that doesn't work because the device, at this point, does not exist.  
All that info was wiped when the system drive died.

I was thinking of trying to use

mdadm --assemble --scan {list of devices}

since I've seen a few references to that, even in the man page.  It's 
unclear, though, whether that is exactly what I need at this point, and 
I don't want to do anything until I'm *sure* it's the right step, since 
doing the wrong thing could wipe out what's still there.
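
If I end up going the explicit route instead, my understanding from the 
man page is that I can name the array and its members directly (again, 
the device names are just placeholders):

mdadm --assemble /dev/md0 /dev/hda1 /dev/hdb1 /dev/hdc1

As far as I can tell, --assemble only reads the existing superblocks to 
put the array back together; it's --create that would write new ones, 
which is the destructive mistake I'm trying to avoid.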

...
> Fortunately, I haven't had to rebuild an array yet without a conf
> file, but as you mentioned, it should be possible, as the RAID info is
> stored in the disk's MBR (or somewhere like that).

In the future, I'll build my own .conf files just to be sure.  In the 
long run, I'm just going to find a RAID controller that does hardware 
RAID5, preferably one that's hot-swappable, and rebuild the RAID on 
that.
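
For the record, what I'm picturing for that conf file is something along 
these lines (the UUID would come from mdadm --detail --scan once the 
array is actually running again):

DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=...

That way, even if / dies again, I can recreate the file from my notes 
instead of guessing.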

> I also like to run / from a RAID array.  /boot is a mirror, so the
> system stays bootable even if a drive fails.

Are you saying / is on a RAID array and you also have a separate, 
mirrored /boot partition?  That's a cool way to set it up.  I've got 
some extra drives and might be able to work out something similar.  
Without the /boot mirror, would Linux be able to boot from the RAID 
directly?  I didn't think that was possible.
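
If I try it here, I'm guessing the /boot mirror would be built roughly 
like this (device names are placeholders, and this is just my reading of 
the man page, not something I've actually done yet):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /boot

My understanding is that the reason /boot works on a RAID1 mirror is 
that each half of the mirror looks like an ordinary partition to the 
boot loader, so GRUB or LILO can read a kernel off either drive before 
the RAID code is even running.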

Hal


