
Debian 7.1 on QNAP TS-412 mdadm always reassembles without drive in 4th bay




Hello everyone,

After successfully installing Debian on a QNAP TS-410, I am now
running into trouble doing the same on a TS-412 with four 1 TB drives
(Hitachi HDS721010DLE630).

I followed the steps given at
http://www.cyrius.com/debian/kirkwood/qnap/ts-41x/install/
and chose the following scheme for the hard drives:
/dev/sda1  2048  1953525167   976761560   fd  Linux raid autodetect
/dev/sdb1  2048  1953525167   976761560   fd  Linux raid autodetect
/dev/sdc1  2048  1953525167   976761560   fd  Linux raid autodetect
/dev/sdd1  2048  1953525167   976761560   fd  Linux raid autodetect

I created a RAID5 across these 4 disks.
The resulting /dev/md0 device is used as a physical volume for lvm2,
added to a volume group and used for three logical volumes:
root (8GB)
swap (1GB)
data (the rest)
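
For reference, the layout above corresponds to something like the
following commands (a sketch only -- the debian-installer partitioner
did this for me, and the volume group name "vg0" is an assumption,
not necessarily what d-i used):

```shell
# RAID5 across the four partitions listed above
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# LVM on top of the array: one PV, one VG, three LVs
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 8G -n root vg0
lvcreate -L 1G -n swap vg0
lvcreate -l 100%FREE -n data vg0
```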

Installation went fine and the debian-installer finished without any
warning or alert, but after a reboot the array is always degraded.
It comes up degraded even if I let the installer finish syncing it
completely before rebooting at the end of the installation.

Every time I reboot, I see something like this in dmesg output:
-- snip --
[   13.785639] xor: using function: arm4regs (1091.600 MB/sec)
[   13.785697] ata4: exception Emask 0x10 SAct 0x0 SErr 0x4010000 action 0xe frozen
[   13.785724] ata4: edma_err_cause=00000010 pp_flags=00000000, dev connect
[   13.785749] ata4: SError: { PHYRdyChg DevExch }
[   13.785786] ata4: hard resetting link
[   13.795349] md: raid6 personality registered for level 6
[   13.795382] md: raid5 personality registered for level 5
[   13.795399] md: raid4 personality registered for level 4
[   13.796343] bio: create slab <bio-1> at 1
[   13.796404] md/raid:md0: device sdb1 operational as raid disk 1
[   13.796426] md/raid:md0: device sda1 operational as raid disk 3
[   13.796446] md/raid:md0: device sdc1 operational as raid disk 2
[   13.797540] md/raid:md0: allocated 4218kB
[   13.797664] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
-- snip --
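
After a boot like this, the array state and the superblock on the
missing member can be inspected with mdadm. A sketch of the commands
(that the absent member is /dev/sdd1 is an assumption based on the
layout above):

```shell
# Overall array state as the kernel sees it
cat /proc/mdstat
mdadm --detail /dev/md0

# Superblock on the partition from bay 4
mdadm --examine /dev/sdd1

# If the disk is present and healthy, it can be re-added,
# after which the array resyncs:
mdadm /dev/md0 --re-add /dev/sdd1
```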

This happens no matter which drive I put into which bay: it is always
bay 4 that fails. I cannot reproduce this on my other QNAP, which is
a TS-410, though, not a TS-412.
I have also tried hard drives of other makes and models, but they all
behave the same way.

Now, since everything works fine whenever I switch back to the QNAP
firmware, this does not look like a straightforward hardware fault.
However, I just don't know how to start debugging this.

If anyone could give me some advice as to what I am missing or what
else I could try, please let me know.

Best,

Martin
