> You have to be pretty unlucky to get a bad drive (or controller) - but it happens. However, I've seen problems like these come up many times, and usually it's because the power supply simply cannot meet the needs of everything connected to it.

I was starting to suspect the PS too, but wasn't sure how to go about monitoring it. Fortunately, I've got a Kill-A-Watt, so I can see how much power the PS is drawing from the wall. I've also got one of those ATX power-supply testers, but I don't know what kind of load it places on the PS, so I'm not sure it will tell me much about whether the PS can actually deliver enough juice for all of the drives.
Still, I'm mindful of David Agans' debugging advice: "Make it Fail". I'd like a smoking gun. If it *is* the PS, then I'd like to actually see the voltages drop when the system is under load. Any suggestions on how to go about that? Got any suggestions for one of those front-panel LCD dealies, or should I just go with software, such as lm-sensors?
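For the "see the voltages drop" part, one approach is to just poll lm-sensors on an interval while the disks are being hammered, and eyeball the rails for dips. A rough sketch, assuming lm-sensors is installed and `sensors-detect` has been run (rail label names vary a lot by motherboard, so the grep pattern is only a guess):

```shell
#!/bin/sh
# Sample the voltage rails every INTERVAL seconds, SAMPLES times.
# In another terminal, put the drives under load (e.g. large dd reads
# from each disk, or a RAID resync) and watch for the rails sagging.
INTERVAL=1
SAMPLES=3
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    date
    # Rail labels like "+12V", "+5V", "+3.3V" are chip-dependent;
    # adjust the pattern to match what `sensors` reports on your board.
    sensors | grep -Ei '(\+12|\+5|\+3\.3)V' || echo "no voltage lines reported"
    i=$((i + 1))
    sleep "$INTERVAL"
done
```

Logging the samples to a file with timestamps makes it easy to line up any dips against the disk errors in the kernel log afterwards.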
> As for the disks, I'd suggest testing them individually rather than trying to test them in a RAID, or even while connected to the RAID controller.

Just to clarify, the controller isn't doing the RAID; it's the Linux md driver(s). So I want to test the drives both *in* and *out* of the RAID, just to make sure it's not some kooky problem with the RAID layer.
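For the out-of-RAID test, a read-only pass over each member disk on its own is a reasonable starting point. A minimal sketch - the device name here is a placeholder, substitute the real members, and stop the md array first so it isn't competing for I/O:

```shell
#!/bin/sh
# Check each member disk individually, outside the md array.
# DEVICES is a hypothetical example list - replace with your real disks.
# Both commands below are read-only, so they are safe on live data.
DEVICES="${DEVICES:-/dev/disk-under-test}"
for dev in $DEVICES; do
    [ -b "$dev" ] || { echo "skipping $dev (not a block device)"; continue; }
    echo "=== $dev ==="
    smartctl -H "$dev"       # SMART overall health assessment
    badblocks -sv "$dev"     # read-only surface scan (slow on big disks)
done
```

If a drive only misbehaves when it's back in the array, that points at the md layer (or the shared controller/PS load) rather than the disk itself.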