
Re: Talking about RAID - disks with same id



On 11/08/17 23:17, deloptes wrote:
> David Christensen wrote:
>> While trouble-shooting PEBKAC issues is important to me, I have found
>> that my attempts at trouble-shooting GNU/Linux issues are usually an
>> exercise in futility.  The best I can hope for is finding a way to
>> reproduce the issue and filing a bug report.  But as for fixing an
>> issue, my best bet is fresh software and known-good hardware.
>
> Hi David, and thanks for sharing your experience.  However, it does not
> answer my question.
>
> I personally have never had problems fixing issues, and I must admit the
> community is often more helpful than some commercial companies.  Since
> Etch I have never had to reinstall my servers; upgrades have worked more
> or less well.  Of course, performing an upgrade on a test machine first
> is a must.

> What I want to know is if this:
>
> # blkid /dev/sdf1
> /dev/sdf1: UUID="5427071b-25c8-fff8-476d-ff8c9852b714"
> TYPE="linux_raid_member" PARTUUID="13e17ac7-01"
> # blkid /dev/sdg1
> /dev/sdg1: UUID="5427071b-25c8-fff8-476d-ff8c9852b714"
> TYPE="linux_raid_member" PARTUUID="13e17ac7-01"
>
> has some effect

Answering that question definitively would mean reviewing the source code of all the software and firmware on your computer for anything that is affected, directly or indirectly, by UUIDs or PARTUUIDs.


You should probably start with the source code for whatever RAID technology you are using. (What RAID technology are you using?)
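
For example, if it turns out to be Linux md RAID (the TYPE="linux_raid_member" in your blkid output suggests it might be), a first look could be something like the following.  This is a minimal sketch, not a definitive procedure; the device names come from your output, and the grep pattern is just one way to pull out the relevant fields (which vary by metadata version):

# cat /proc/mdstat
# mdadm --examine /dev/sdf1 /dev/sdg1 | grep -E 'Array UUID|Device UUID|Device Role'

If I recall correctly, md members of the same array are expected to share the array UUID, which is what blkid reports for them; but confirm that against the documentation for your metadata version.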


Alternatively, look for a FAQ.


STFW, ideally using any warnings or error messages you are seeing as search terms.


> and I should replace one of the disks.  I think the Seagate is >10y old.

Take a look at:

# smartctl --xall /dev/sdg
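
If the full report is overwhelming at first, one shortcut is to filter for the attributes most often cited as failure predictors.  The attribute names below are the common ATA ones; your drives may label them differently:

# smartctl -A /dev/sdg | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'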


If you learn smartctl well enough, capture reports on a schedule (weekly?), and look for trends, you might be able to predict failure. STFW for information on this approach.
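
Here is a minimal sketch of that approach, assuming Debian's cron layout; the device list and the /var/log/smart output path are my assumptions, not conventions:

#!/bin/sh
# /etc/cron.weekly/smart-report -- capture weekly SMART reports for trending.
# Device list and output directory are assumptions; adjust for your system,
# and remember to make this file executable (chmod +x).
mkdir -p /var/log/smart
for dev in sdf sdg; do
    smartctl --xall /dev/$dev > /var/log/smart/$dev-$(date +%Y%m%d).txt
done

Note that smartd (also part of smartmontools) can monitor attributes and send alerts automatically; a plain cron job like the above just keeps the raw reports around for diffing later.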


Download the bootable CD image of Seagate Seatools and run it:

    https://www.seagate.com/support/downloads/seatools/


The group consensus seems to be:

1.  When you hear the "click of death", failure is imminent.

2.  When you put HDDs on the shelf for long periods, they often fail
shortly after being returned to service (e.g. within a day).

3.  Hard disk drives last the longest if you leave them in a computer
and powered up, even if not in use.

4.  All drives fail eventually.  Plan on it and be prepared.


I would estimate a dozen of my HDDs have failed by #1 over the years, and another dozen by #2.  I've got a dozen or more on the shelf that could end up as #2.


David

