
Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots



On 16/06/2010 15:50, Steven wrote:
> 
> On Wed, June 16, 2010 15:47, Michal wrote:
>>
>> One way is to label the disks themselves so you simply do;
>>
>> cat /proc/mdstat, which might say /dev/sd3 is down. Open the case, look
>> for the disk labelled /dev/sde and replace it. If you have LEDs like
>> servers have (probably not), they can be a fiddle to get working, but
>> it's possible.
>>
> No LEDs for the drives; it already has them for every PCI slot,
> so it looks like a Christmas tree :)
> 
> I think you meant /dev/sde instead of sd3, right? If not, please correct me.
> If I'm not mistaken, mdadm will report the broken drive,
> then I have to look for the drive that corresponds to the 4th SATA slot
> on the motherboard.
> That's part of my issue, can I be sure that the drive connected to port 4
> is /dev/sde?
> It's not a problem for the other 2 drives, as they differ in capacity,
> but these 4 are exactly the same size.
> 
> Also, how accurate is mdadm in identifying the failed drive?
> As there are only 2 drives in the array, there is only 1 copy of the
> data to compare against.
> 
> It also seems my last message was sent twice, sorry about that.
> 

Sorry, I really didn't explain myself properly:

Yes, I meant /dev/sde, and by label I meant: get a label machine (or
something similar) and put a physical label on the drive, like a sticker
with the text "/dev/sde".
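
To be sure the sticker goes on the right drive, you can match the device
name to the drive's serial number before you open the case. A minimal
sketch, assuming the drive in question is /dev/sde (the by-id and by-path
names will differ on your hardware, smartctl comes from the smartmontools
package, and all of this wants root):

    # The by-id symlinks embed the model and serial number that is
    # printed on the drive's own factory label:
    ls -l /dev/disk/by-id/ | grep sde

    # The by-path symlinks show which controller port the drive hangs
    # off, so you can check whether port 4 really is /dev/sde:
    ls -l /dev/disk/by-path/ | grep sde

    # Or read the serial number directly off the drive:
    smartctl -i /dev/sde | grep -i serial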

I did this on one machine: I built my RAID1 array across two drives,
disconnected one drive, booted back up, checked mdstat to see which one
was now missing, and labelled that one; then I labelled the second one.
It's not a brilliant method, I'll admit, but it works perfectly well.
I tested it three times (reconnecting the drive, rebuilding the array,
disconnecting the other drive, and so on) to really make sure I had
labelled them correctly.
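
For reference, that test cycle looks roughly like this in mdadm terms.
This is only a sketch: it assumes the array is /dev/md0 and the member
partition is /dev/sde1, so adjust both to match your setup:

    cat /proc/mdstat                 # a degraded mirror shows [U_] and
                                     # any failed member marked with (F)
    mdadm --manage /dev/md0 --remove /dev/sde1   # drop the failed member
    mdadm --manage /dev/md0 --add /dev/sde1      # re-add it; rebuild starts
    watch cat /proc/mdstat           # follow the resync progress

mdadm --detail /dev/md0 will also tell you which member it considers
faulty, which answers the accuracy question as far as mdadm's own view
goes.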

