
XFS + LVM + RAID10 = raid0_make_request bug



I've seemingly hit a problem with the combination of XFS on LVM on
software RAID10.

I've currently got 2 disks running (hda/hdc) and I've added two more
(hdi/hdk).

The running disks are configured as a 2G ext3 root on RAID1
(hda1/hdc1 = md0), 1G swap on RAID1 (hda2/hdc2 = md1), and the rest as
LVM partitions (hda3/hdc3). The LVM contains /usr, /var, and three
other mounts, only one of which crossed into the /dev/hdc3 PV.

I had one disk fail, hdc, and I've gotten the replacement. The RAID1
partitions were simple to rebuild, but of course LVM doesn't like disk
failures. I used dd_rescue to make a copy of hdc3, and that seems to
have worked. I get some XFS errors and xfs_repair fails, but I'll deal
with the corrupt files later.
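
Roughly what the recovery looked like (from memory; the copy target and
the VG/LV names below are placeholders, not the real ones):

    # copy the old PV off the failing disk, skipping unreadable blocks
    dd_rescue /dev/hdc3 /dev/target3
    # then try to repair the filesystems on the logical volumes
    xfs_repair /dev/vg0/usr     # this is the step that fails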

I added the two other disks (same size) and partitioned them the same,
except for the third partition on each, which I set to the RAID
partition type.
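
Something along these lines would reproduce the partitioning (a sketch;
I may have done it by hand in fdisk instead):

    # copy hda's partition table to the new disks, then change
    # partition 3 to type fd (Linux raid autodetect)
    sfdisk -d /dev/hda | sfdisk /dev/hdi
    sfdisk -d /dev/hda | sfdisk /dev/hdk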

My intention was to build a RAID10 array. I made hdi1 a spare for md0
and hdi2 a spare for md1. I made hdi3 and hdk3 a RAID0, md2. I then
made md4 out of md2 and "missing" (md3 will eventually be a RAID0 of
hda3 and hdc3). All fine: /proc/mdstat looks good and md4 is in
degraded mode. The next step was supposed to be to make a filesystem on
the LVM on the RAID10, copy what I can from the corrupt old LVM
partition, then reformat hda3/hdc3 as RAID partitions, RAID0 them
(md3), and add that into the RAID1 (md4). First problem: I had to reset
the md detection flag in lvm.conf (LVM2). I was eventually able to
pvcreate, vgcreate, and lvcreate. When I went to mkfs.xfs the new
logical volume, I ran into:

raid0_make_request bug: can't convert block across chunks or bigger than 64k
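
For reference, the sequence leading up to that was approximately the
following (again from memory, so exact flags and sizes are approximate;
vg0 and the LV name are placeholders):

    # spares for the existing RAID1 arrays
    mdadm /dev/md0 --add /dev/hdi1
    mdadm /dev/md1 --add /dev/hdi2

    # RAID0 across the new disks' third partitions
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/hdi3 /dev/hdk3

    # RAID1 over that RAID0, second half missing for now
    mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/md2 missing

    # planned for later, once hda3/hdc3 are freed up:
    # mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/hda3 /dev/hdc3
    # mdadm /dev/md4 --add /dev/md3

    # LVM on top (after relaxing the md detection flag in lvm.conf)
    pvcreate /dev/md4
    vgcreate vg0 /dev/md4
    lvcreate -L 10G -n test vg0
    mkfs.xfs /dev/vg0/test      # <- fails with the raid0_make_request error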


Google reveals that error from the kernel 2.5.x -> 2.6.x transition,
back in 2003. It seems to have been a problem with XFS on top of
software RAID0.

http://www.ussg.iu.edu/hypermail/linux/kernel/0310.2/0982.html
and
http://linux-xfs.sgi.com/projects/xfs/mail_archive/200202/msg00472.html

and perhaps fixed here:
http://marc.theaimsgroup.com/?l=linux-raid&m=106661294929434


I'm running a mostly sarge system (2.4.27-k7 kernel). I tried different
PE size arguments to vgcreate, the default 4M and 32M, with no change.
I'm just not sure what to try next. I came across some links that
suggested not using XFS, but I'd like to know whether that is the only
solution, or whether I made some mistake.
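
For completeness, the PE size attempts were along these lines (vg0
again a placeholder name):

    vgcreate vg0 /dev/md4            # default 4M extents
    vgcreate -s 32M vg0 /dev/md4     # 32M extents; same error from mkfs.xfs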


Thanks

(Most of this is from memory, so if exact logs/status/commands are
needed, I can provide them.)



