
Re: RAID5 problem.



-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Alex Samad wrote:
> On Sun, Jun 22, 2008 at 12:28:03PM -0400, Matt Gracie wrote:
> 
>> [snip]
> 
>> The problem is that when I tried, using "mdadm /dev/md0 --add
>> /dev/sdd1", the rebuild would kick off and then fail after a short time,
>> marking all four drives as faulty.
>>
>> So I rebooted, running the RAID in degraded mode, fdisked and mkfsed
>> /dev/sdd1 again, and tried again. Same result.
>>
>> I replaced the firewire PCI card. Same result.
>>
>>
>> [snip]
>>
>> Does anyone have any suggestions for cleaning up my degraded RAID set so
>> that I can put another drive in the pool? All of my data seems to be
>> okay (although you'd better believe I've backed the irreplaceables up
>> again to be sure), but I really don't want to deal with another disk
>> failure.
> 
> sounds like you have done all the things I would have done.
>
> can you post a cat /proc/mdstat


mogwai:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[0] sdc1[2] sda1[1]
      732587712 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

unused devices: <none>

>
> and a
>
> mdadm --query --detail /dev/md0

mogwai:~#  mdadm --query --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Mar 30 14:12:22 2008
     Raid Level : raid5
     Array Size : 732587712 (698.65 GiB 750.17 GB)
  Used Dev Size : 244195904 (232.88 GiB 250.06 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jun 23 18:10:12 2008
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 36c5a427:cb5a2218:4a8af7ff:69db778e
         Events : 0.13586

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8        1        1      active sync   /dev/sda1
       2       8       33        2      active sync   /dev/sdc1
       3       0        0        3      removed
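
For reference, the sequence I've been using is roughly the following; the
--zero-superblock line is a guess at an extra step that might help, not
something I've actually tried yet:

mogwai:~# fdisk /dev/sdd                      # recreate the single partition, type fd (Linux raid autodetect)
mogwai:~# mdadm --zero-superblock /dev/sdd1   # clear any stale md metadata (untried guess)
mogwai:~# mdadm /dev/md0 --add /dev/sdd1      # rebuild kicks off...
mogwai:~# watch cat /proc/mdstat              # ...and then all four drives get marked faulty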




The problem (where all the drives get marked faulty) seems to be
triggered by heavy activity on the FireWire bus: when I tried to back up
some data from the degraded RAID set onto another FireWire drive, the
array failed and every member was marked faulty. Backing up the same
directory onto an internal IDE drive, however, caused no problems.
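
If logs would help, my plan for reproducing it is something along these
lines: heavy sustained reads from the array in one terminal, with the
kernel log open in another (the log path and read size are just my guesses
at what's useful):

mogwai:~# dd if=/dev/md0 of=/dev/null bs=1M count=8192   # ~8 GB of sustained reads over the FireWire bus
mogwai:~# tail -f /var/log/kern.log                      # watch for ieee1394/sbp2 errors as drives drop out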

So I think this is somehow specific to the interaction between software
RAID and the FireWire subsystem. For the record:

mogwai:~# uname -a
Linux mogwai 2.6.25-2-686 #1 SMP Tue May 27 15:38:35 UTC 2008 i686 GNU/Linux
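
I haven't checked which FireWire stack the drives are attached through; if
it matters, something like this should show which modules are loaded (the
module names are from memory):

mogwai:~# lsmod | egrep 'ieee1394|ohci1394|sbp2|firewire'   # old ieee1394/sbp2 stack vs. newer firewire_* stack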


- --Matt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkhgalQACgkQ6wBqCH7pnjIhUACfeUOgAKKG/J7ugwHWfmf4vqaU
HfQAoN4apPeQhDx9Uj7Dm7MizALFlGc8
=GfBQ
-----END PGP SIGNATURE-----

