On Monday 20 August 2007, martin f krafft wrote:
> also sprach Hal Vaughan <firstname.lastname@example.org> [2007.08.20.2114 +0200]:
> > It did on the first failure. Then another failed and I turned the
> > machine off. When I got 2 more drives, I put them in and it
> > rebuilt the array using 3 of the drives with one as a spare. Then
> > when it failed this time, it had never started rebuilding the
> > spare.
> This situation *should* be recoverable. Contact me off-list if you'd
> be willing to let me log in as root and have a look.
I may end up deciding on that. Right now I'm considering replacing the 250GB drives that have seemingly failed with new 320s, wiping the install, installing Etch, and then rebuilding the entire array.
Any suggestions or warnings from others so I can make sure this doesn't happen again are appreciated. Remember, the two drives I've already removed, which mdadm had flagged as bad, have tested out fine. I suspect it's more an issue with this system losing power and the RAID not being unmounted cleanly, but I'd think it should be able to handle that.
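For what it's worth, here's roughly what I was planning to run on the rebuilt array before trusting it again. This is just a sketch; it assumes the array is /dev/md0 and the members are /dev/hde1 through /dev/hdh1, so adjust the device names for your setup:

```shell
# Show the array's overall state and each member's role
# (assumes /dev/md0; use your actual md device)
mdadm --detail /dev/md0

# Compare the event counters and state recorded in each member's
# superblock; a member that is badly out of date will stand out here
mdadm --examine /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 | grep -E 'Events|State'

# Ask md to do a full consistency check now, so mismatches show up
# on my schedule rather than during the next degraded rebuild
echo check > /sys/block/md0/md/sync_action

# Watch the check progress
cat /proc/mdstat
```

Running the check while the machine is on a UPS would also rule out another power loss mid-scrub.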
> > I've noticed, though, that on one system I had originally defined
> > the raid using /dev/hde1, hdf1, and so on. When I tried to rebuild
> > it with /dev/hde, hdf, and so on, it would not rebuild.
> Sure, partitions have different offsets, so the superblock could not
> be found.
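That would explain what I saw, then. If I understand it right, something like this should demonstrate it (device names are from my box; the exact output wording may differ):

```shell
# The md superblock was written relative to the partition, so mdadm
# finds it when examining the partition...
mdadm --examine /dev/hde1

# ...but not when examining the whole disk, because the same bytes sit
# at a different offset relative to /dev/hde
mdadm --examine /dev/hde
# expected: a "No md superblock detected" style error
```

So when reassembling, the member names have to match whatever was used at creation time.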
> > > Have you inspected the smartctl output and checked for SMART
> > > errors?
> > I looked at the logs. Is this a different output and where would
> > I find it?
> Are these ATA disks? If so, run smartctl -l error should be pretty
[root@archive:root]$ smartctl -l error
bash: smartctl: command not found
Is there a problem with that, or could that be part of the issue?
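In case it helps anyone else following along, I gather the tool comes from the smartmontools package on Debian, and that smartctl wants a device argument after -l error. Something like the following, with the device name adjusted to the drive in question:

```shell
# Install the package that provides smartctl (as root)
apt-get install smartmontools

# Dump the drive's SMART error log; -l error needs a device argument
smartctl -l error /dev/hde

# Quick overall health self-assessment for the same drive
smartctl -H /dev/hde
```

If the package isn't installed, that would at least explain "command not found", though not the array failures themselves.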