
Re: XFS + LVM + RAID10 = raid0_make_request bug



On Wed, Jul 13, 2005 at 10:09:31PM -0400, I wrote:
> I've seemingly hit a problem trying the combination of xfs on lvm on
> raid10 (software)
> 
> I've currently got 2 disks running (hda/hdc) and I've added two more
> (hdi/hdk).
> 
> My intention was to build a raid10 array. I made hdi1 a spare for md0,
> and hdi2 a spare for md1. I made hdi3 and hdk3 a raid0 md2. I then made
> md4 out of md2 and missing (md3 will be hda3 and hdc3). All fine.

Unfortunately, I had this backwards and had actually built a raid 0+1
array. I've since rebuilt it as raid0 (md5) over three raid1 arrays
(md2/md3/md4).
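
For reference, the rebuild boils down to something like this (the member
partitions below are placeholders for illustration, not necessarily the
exact ones I used):

	# three raid1 mirrors first (partition names assumed)
	mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdi3 /dev/hdk3
	mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3
	mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/hdi4 /dev/hdk4

	# then a raid0 stripe across the mirrors: raid1+0, not raid0+1
	mdadm --create /dev/md5 --level=0 --raid-devices=3 \
		/dev/md2 /dev/md3 /dev/md4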

> /proc/mdstat looks good, md4 is in degraded mode. The next step was
> supposed to be make a fs on the lvm on raid10, copy what I can from the
> corrupt old lvm partition, and then reformat hda3/hdc3 as raid devices
> and raid 0 them (md3) and then add them into the raid1 (md4). First
> problem, I had to reset the md detection flag in lvm.conf (lvm2). I
> eventually was able to pvcreate, vgcreate, and lvcreate. When I went to
> mkfs.xfs the new logical volume, I ran into:
> 
> raid0_make_request bug: can't convert block across chunks or bigger than 64k
> 
> 
> Google turns up that error from the 2.5.x -> 2.6.x kernel transition,
> back in 2003. It seems to have been a problem with xfs on top of
> software raid0.
> 
> http://www.ussg.iu.edu/hypermail/linux/kernel/0310.2/0982.html
> and
> http://linux-xfs.sgi.com/projects/xfs/mail_archive/200202/msg00472.html
> 
> and perhaps fixed here:
> http://marc.theaimsgroup.com/?l=linux-raid&m=106661294929434
> 
After reading some more, a few things clicked and I worked it out:

	# use the old PE size of 32m instead of the default 4m
	vgcreate -s 32m vg0 /dev/md5

	# specify a smaller log stripe unit for a raid device
	# (see man mkfs.xfs)
	mkfs.xfs -l su=32k /dev/vg0/video
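
For what it's worth, you can sanity-check the resulting geometry before
writing anything: mkfs.xfs -N prints the filesystem parameters without
actually creating the filesystem.

	# dry run: print geometry (including log sunit), write nothing
	mkfs.xfs -N -l su=32k /dev/vg0/video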

I also had a difficult time convincing LVM to use raid devices. In the
end, I had to change the devices section of /etc/lvm/lvm.conf as follows:
    # By default, LVM2 will ignore devices used as components of
    # software RAID (md) devices by looking for md superblocks.
    # 1 enables; 0 disables.
    md_component_detection = 1

and the key to avoiding the duplicate PV messages:
    # Exclude the cdrom and all disk partitions, only raid devices here
    filter = [ "r|/dev/cdrom|", "r|/dev/hd.*|" ]
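
Putting it all together, the working sequence looked something like this
(the LV size here is illustrative; the names are the ones I used):

	pvcreate /dev/md5
	vgcreate -s 32m vg0 /dev/md5
	lvcreate -L 400G -n video vg0	# size made up for the example
	mkfs.xfs -l su=32k /dev/vg0/video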

From there, things went smoothly:
pvdisplay
  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               vg0
  PV Size               692.84 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              22171
  Free PE               8987
  Allocated PE          13184
  PV UUID               WjSblK-6c2d-w5ch-q4PP-K7Sd-rt14-Apa5M9



