
Bug#461644: marked as done (linux-image-2.6.18-5-xen-686: Exporting an lvm-on-md LV to Xen as a disk results in kernel errors and corrupt filesystems)



Your message dated Mon, 15 Feb 2010 20:18:14 +0100
with message-id <20100215191814.GN9624@baikonur.stro.at>
and subject line Re: Xen || vserver troubles
has caused the Debian Bug report #461644,
regarding linux-image-2.6.18-5-xen-686: Exporting an lvm-on-md LV to Xen as a disk results in kernel errors and corrupt filesystems
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
461644: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=461644
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: linux-image-2.6.18-5-xen-686
Version: 2.6.18.dfsg.1-17
Severity: normal

I have several machines using LVM on (md) raid10 on 4 SATA disks.  When
I create an LV and export it to a Xen domU as a whole disk, as soon as
that domU tries to partition or write to the disk I get many of these
errors in dom0:

Jan 19 04:42:47 corona kernel: raid10_make_request bug: can't convert block across chunks or bigger than 64k 309585274 4
Jan 19 04:42:47 corona last message repeated 2 times
Jan 19 04:42:47 corona kernel: raid10_make_request bug: can't convert block across chunks or bigger than 64k 305922559 4
Jan 19 04:42:47 corona last message repeated 3 times
Jan 19 04:42:47 corona kernel: raid10_make_request bug: can't convert block across chunks or bigger than 64k 309585274 4
Jan 19 04:42:48 corona last message repeated 2 times

Continued attempts to use the disk in the domU result in I/O errors and
the partition being remounted read-only.

This does not occur when the LV is exported as a block device.  It
happens on all of my machines which run Etch and LVM on software RAID,
and it does not happen on my machines which run Etch and LVM on hardware
RAID.  All of my software RAID boxes use raid10, so I haven't had a
chance to try it with other RAID levels.

Also please note that although the system information below says I am
running 2.6.18-4-xen-686, that is because I rebooted into it to see if
the problem had been introduced in -5.  It happens in both -4 and -5.

/etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST corona

# instruct the monitoring daemon where to send mail alerts
MAILADDR root@example.com

# definitions of existing MD arrays
ARRAY /dev/md1 level=raid10 num-devices=4 UUID=6eb313b7:a4cb7ef0:511ed56b:bbe6f809
ARRAY /dev/md2 level=raid1 num-devices=4 UUID=d4712241:f57750d0:a05dcc72:f711af28
ARRAY /dev/md3 level=raid10 num-devices=4 UUID=bfc032df:afb6b003:57c24ef0:12cc371f
ARRAY /dev/md5 level=raid10 num-devices=4 UUID=75de1b2e:db3428ca:a3d746a5:86201486

# This file was auto-generated on Wed, 04 Apr 2007 02:25:06 +0000
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $

My single LVM PV is on /dev/md5:

$ sudo mdadm -D /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Wed Apr  4 00:07:31 2007
     Raid Level : raid10
     Array Size : 618727168 (590.06 GiB 633.58 GB)
    Device Size : 309363584 (295.03 GiB 316.79 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Sun Jan 20 04:37:28 2008
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : 75de1b2e:db3428ca:a3d746a5:86201486
         Events : 0.453

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       3       8       53        3      active sync   /dev/sdd5

-- System Information:
Debian Release: 4.0
  APT prefers stable
  APT policy: (500, 'stable')
Architecture: i386 (i686)
Shell:  /bin/sh linked to /bin/bash
Kernel: Linux 2.6.18-4-xen-686
Locale: LANG=en_GB.UTF-8, LC_CTYPE=en_GB.UTF-8 (charmap=UTF-8)

Versions of packages linux-image-2.6.18-5-xen-686 depends on:
ii  initramfs-tools         0.85h            tools for generating an initramfs
ii  linux-modules-2.6.18-5- 2.6.18.dfsg.1-17 Linux 2.6.18 modules on i686

Versions of packages linux-image-2.6.18-5-xen-686 recommends:
ii  libc6-xen              2.3.6.ds1-13etch4 GNU C Library: Shared libraries [X

-- no debconf information



--- End Message ---
--- Begin Message ---
The 2.6.18 Linux images from Etch are no longer supported, so I am closing
this bug report.  As both Xen and vserver stayed out of tree, it is very
unlikely that they have improved much since.

With modern hardware, kvm or lxc (Linux containers) are recommended.
If you still haven't upgraded to Lenny, please note that Etch no longer
has security support:
http://www.debian.org/News/2010/20100121


If you can reproduce this bug with the 2.6.32 Linux images from unstable,
please follow up from the affected box and the bug can be reopened:
reportbug -N <bugnr>

Thank you for your report.



--- End Message ---
