
Re: urgent RFH: please help test mdadm

On Tue, Feb 3, 2009 at 22:46, martin f krafft <madduck@debian.org> wrote:
> I found a bit of time to package up mdadm, which fixes two
> RC bugs, but I cannot test it. I've built unofficial packages for
> i386 and amd64 and put them at
>  http://debian.madduck.net/repo/pool/main/m/mdadm/
> so please try them out if you can, otherwise I won't be able to
> upload them soon, which might delay the lenny release.
> mdadm ( unstable; urgency=low
>  * New upstream release, created for Debian lenny:
>    - fixes assembly of arrays that are being reshaped (closes: #512475)
>    - this bug was also responsible for other assembly problems
>      (closes: #498505, #499643, #496334)
>    Again, many thanks to Neil Brown for being such an awesome upstream.

Hi Martin,

I've installed your unofficial x86-64 package, and tried to reproduce
the failure in #496334.
I think I reproduced the circumstances quite well (4 out of 6 devices
kicked out from a raid10 array). [*]
mdadm --assemble --force no longer segfaults and reassembles the
array, so #496334 is fixed.
The reassembled array has the 4 drives as spares even though they
never were spares, so the array is not functional (it would need at
least 3 active devices, but has only 2).
In the end I had to recreate the array manually to make those "spare"
drives active again. I didn't lose any data, but I'm not too keen to
try reproducing this bug a third time.
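For the record, "recreating the array manually" amounted to re-running
mdadm --create over the same devices with --assume-clean, which
rewrites the superblocks without resyncing; the level, layout, chunk
size and device list below are placeholders, not the values I actually
used, and getting any of them wrong destroys the data:

```shell
# Re-create the array in place without touching the on-disk data.
# --assume-clean skips the initial resync; this is only safe if the
# devices, their order, the RAID level, layout and chunk size all
# match the original array exactly (placeholders below).
mdadm --create /dev/mdX --assume-clean --level=10 --raid-devices=6 \
      --layout=n2 --chunk=64 /dev/sd[a-f]1
```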
Anyway, mdadm was no longer segfaulting, so I could recover the array,
unlike with the buggy version, where the segfault left me unable to do
much.
[*] Here is how I made the array fail, DON'T TRY THIS:
# for i in /sys/class/scsi_host/*; do echo min_power >"$i"/link_power_management_policy; done
# sync
# echo max_power >/sys/class/scsi_host/host2/link_power_management_policy
# sync
# dmesg ... see how the drives get kicked out, and all your
filesystems unmounted
... reboot ...
# mdadm --assemble --force /dev/mdX
... array should be back again ...

Best regards,
