Bug#461644: linux-image-2.6.18-5-xen-686: Exporting an lvm-on-md LV to Xen as a disk results in kernel errors and corrupt filesystems
Package: linux-image-2.6.18-5-xen-686
Version: 2.6.18.dfsg.1-17
Severity: normal
I have several machines using LVM on (md) raid10 on 4 SATA disks. When
I create an LV and export it to a Xen domU as a whole disk, as soon as
that domU tries to partition or write to the disk I get many of these
errors in dom0:
Jan 19 04:42:47 corona kernel: raid10_make_request bug: can't convert block across chunks or bigger than 64k 309585274 4
Jan 19 04:42:47 corona last message repeated 2 times
Jan 19 04:42:47 corona kernel: raid10_make_request bug: can't convert block across chunks or bigger than 64k 305922559 4
Jan 19 04:42:47 corona last message repeated 3 times
Jan 19 04:42:47 corona kernel: raid10_make_request bug: can't convert block across chunks or bigger than 64k 309585274 4
Jan 19 04:42:48 corona last message repeated 2 times
Continued attempts to use the disk in the domU result in I/O errors and
the partition being remounted read-only.
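Assuming the two trailing numbers in the log message are the request's start sector and its size in KiB (which is how the printk in 2.6.18's drivers/md/raid10.c appears to format them), the failing requests really do straddle a 64 KiB chunk boundary. A quick sketch of the check md performs:

```python
# Sketch of the boundary check behind the raid10_make_request message,
# assuming the log's trailing numbers are start sector and size in KiB.

CHUNK_SECTORS = 64 * 1024 // 512  # 64 KiB chunk = 128 sectors

def crosses_chunk(sector, size_kib):
    """True if a request starting at `sector` spanning `size_kib` KiB
    crosses a chunk boundary -- the condition raid10 rejects."""
    offset = sector & (CHUNK_SECTORS - 1)  # position within the chunk
    sectors = size_kib * 2                 # 1 KiB = 2 sectors
    return offset + sectors > CHUNK_SECTORS

# The sectors from the log above; both requests cross a boundary:
for sector in (309585274, 305922559):
    print(sector, crosses_chunk(sector, 4))
```

For example, 309585274 mod 128 is 122, and 122 + 8 sectors = 130 > 128, so the request cannot be mapped within one chunk.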
This does not occur when the LV is exported as a block device. It
happens on all of my machines which run Etch and LVM on software RAID.
It does not happen on my machines which run Etch and LVM on hardware
RAID. All of my software RAID boxes use raid10, so I haven't had a
chance to try it with other RAID levels.
Also please note that although the system information below says I am
running 2.6.18-4-xen-686, that is because I rebooted into it to check
whether the problem had been introduced in -5. It happens in both -4
and -5.
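For reference, the kind of domU disk stanza that triggers this (VG and LV names here are hypothetical; the point is the phy: export of the whole LV as a disk rather than as a partition):

```python
# Hypothetical /etc/xen/domu.cfg fragment (xm config files use Python
# syntax).  Exporting the whole LV as xvda is what triggers the errors:
disk = ['phy:/dev/vg0/domu-disk,xvda,w']
```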
/etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST corona
# instruct the monitoring daemon where to send mail alerts
MAILADDR root@example.com
# definitions of existing MD arrays
ARRAY /dev/md1 level=raid10 num-devices=4 UUID=6eb313b7:a4cb7ef0:511ed56b:bbe6f809
ARRAY /dev/md2 level=raid1 num-devices=4 UUID=d4712241:f57750d0:a05dcc72:f711af28
ARRAY /dev/md3 level=raid10 num-devices=4 UUID=bfc032df:afb6b003:57c24ef0:12cc371f
ARRAY /dev/md5 level=raid10 num-devices=4 UUID=75de1b2e:db3428ca:a3d746a5:86201486
# This file was auto-generated on Wed, 04 Apr 2007 02:25:06 +0000
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
My single LVM PV is on /dev/md5:
$ sudo mdadm -D /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Wed Apr  4 00:07:31 2007
     Raid Level : raid10
     Array Size : 618727168 (590.06 GiB 633.58 GB)
    Device Size : 309363584 (295.03 GiB 316.79 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Sun Jan 20 04:37:28 2008
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : 75de1b2e:db3428ca:a3d746a5:86201486
         Events : 0.453

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       3       8       53        3      active sync   /dev/sdd5
-- System Information:
Debian Release: 4.0
APT prefers stable
APT policy: (500, 'stable')
Architecture: i386 (i686)
Shell: /bin/sh linked to /bin/bash
Kernel: Linux 2.6.18-4-xen-686
Locale: LANG=en_GB.UTF-8, LC_CTYPE=en_GB.UTF-8 (charmap=UTF-8)
Versions of packages linux-image-2.6.18-5-xen-686 depends on:
ii initramfs-tools 0.85h tools for generating an initramfs
ii linux-modules-2.6.18-5- 2.6.18.dfsg.1-17 Linux 2.6.18 modules on i686
Versions of packages linux-image-2.6.18-5-xen-686 recommends:
ii libc6-xen 2.3.6.ds1-13etch4 GNU C Library: Shared libraries [X
-- no debconf information