Bug#624343: Possible workaround?
I've hit this bug in a different scenario - I have one SATA disk and
one external USB disk in a root-on-RAID1+LVM setup. During boot the
USB device often doesn't settle before the md device gets assembled,
with the net result that the system boots degraded, with the USB disk
missing. No big deal, I figured - I could always re-add the USB disk
later and let it re-sync as required. Until I saw the discussion on
this bug report, though, I'd assumed this was only a performance
warning and not a potential data loss scenario.
If I've understood this correctly, one possible workaround (for the
time being) would be a boot parameter that lets you artificially
limit max_hw_sectors? In this case it seems forcing all md devices
down from 248 to 240 sectors would probably avoid the potential data
loss without large performance degradation or big intrusive changes.
Is that sane?
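
In case it helps, here's roughly what I had in mind as a stop-gap,
expressed as a udev rule rather than a boot parameter. This is an
untested sketch - the filename, the sd* match, and the choice of 120
(i.e. 240 512-byte sectors) are my assumptions:

    # /etc/udev/rules.d/90-limit-max-sectors.rules (hypothetical, untested)
    # Cap max_sectors_kb at 120 KB (= 240 sectors) on whole sd* disks,
    # keeping requests below the suspect 248-sector size.
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*[!0-9]", ATTR{queue/max_sectors_kb}="120"

max_hw_sectors itself is read-only through sysfs, but max_sectors_kb
caps the request size the block layer will actually issue, which I
assume would have the same effect here.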