
Re: Strange LVM on RAID Behaviour with Sarge



Lucas Barbuto wrote:
> ...
> I get this error at the top of dmesg (repeated many times):
>
>> devfs_mk_dir: invalid argument.<4>devfs_mk_dev: could not append to parent for /disc

I saw the same error message on a Sarge installation with a 2.6.8-10 kernel, but not with the 2.4.27 kernel.

Recompiling the 2.6.8 kernel without devfs makes the problem go away, i.e. .config reads:
	# CONFIG_DEVFS_FS is not set
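
You can confirm whether the kernel you're booting was built with devfs by checking the config it ships with (the path here assumes the stock Debian kernel-image package for your kernel version):

    grep DEVFS /boot/config-2.6.8-1-386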

I also rebuilt initrd.gz (with lvm2create_initrd.sh, see http://www.poochiereds.net/svn/lvm2/lvm2create_initrd) because the error seemed to relate to devfs initialisation at that stage of boot.
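
The exact invocation depends on the version of the script you grab, so check its usage text first, but it's roughly this (kernel version here is just an example):

    ./lvm2create_initrd.sh 2.6.8-1-386
    # then point your boot loader (grub or lilo) at the new initrd
    # image the script reports writing under /boot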

Leni.

Lucas Barbuto wrote:
Hi All,

Can someone explain this strange LVM2 on RAID-1 behaviour?

I've recently done a fresh install of Sarge using a recent
debian-installer (more recent than RC2).  I've got two 80GB SATA
drives, which I've partitioned as follows:

   8     0  117220824 sda
   8     1      32098 sda1
   8     2   39062047 sda2
   8     3   39062047 sda3
   8     4   39062047 sda4
   8    16  117220824 sdb
   8    17      32098 sdb1
   8    18   39062047 sdb2
   8    19   39062047 sdb3
   8    20   39062047 sdb4

Each pair of matching partitions is mirrored with software RAID-1
(sda1 + sdb1 = md0, and so on).  /dev/md0 (small) is for /boot.
/dev/md1 and /dev/md2 are combined into an LVM2 volume group (vg0) and
hold my other system partitions, including the root partition.
Another volume group (vg1) fills /dev/md3; I plan to use it for
backups, or just as some space I can use to grow my other partitions
as needed.
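
For reference, the equivalent layout built by hand would look roughly like this (the installer did all of this for me, so this is a sketch of the structure rather than the exact commands it ran):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
    pvcreate /dev/md1 /dev/md2 /dev/md3
    vgcreate vg0 /dev/md1 /dev/md2
    vgcreate vg1 /dev/md3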

Everything installed fine with the 2.6 kernel (2.6.8-1-386), booting
off RAID-1 with all partitions except /boot in logical volumes.

I get this error at the top of dmesg (repeated many times):

devfs_mk_dir: invalid argument.<4>devfs_mk_dev: could not append to parent for /disc

I don't know if that means anything.  Yesterday I added another IDE
device to the system; it showed up as /dev/hdd as expected, and I
formatted it and copied some backup files onto it.  Then I noticed that
/dev/md3 was running in degraded mode, missing /dev/sda4, and I was
unable to hot-add the device back in:

lucas@saturn:~$ sudo mdadm /dev/md3 -a /dev/sda4
mdadm: hot add failed for /dev/sda4: Invalid argument

And I get this message on the console:

md: trying to hot-add unknown-block(8,4) to md3 ...
md: could not lock unknown-block(8,4).
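
A couple of checks that might show what has /dev/sda4 open (just a sketch; the device numbers assume the partition list above):

    cat /proc/mdstat    # confirm md3 really is missing sda4
    dmsetup deps        # a device-mapper mapping depending on (8, 4)
                        # would mean LVM is holding the raw partition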

I did a lot of googling but didn't turn up much, except the suggestion
that something else might be holding the device open.  In the past I'd
had an experience where the RAID hadn't started properly and LVM2 had
started using the raw RAID member devices as its physical volumes.
That didn't really make sense here, but I thought I'd try it: I deleted
my logical volume from vg1 and removed vg1, and magically mdadm let me
hot-add /dev/sda4 back in and happily started syncing it up.  So I
recreated the volume group and a logical volume, formatted and mounted
it, and everything seemed to work fine... but after a reboot /dev/md3
was back in degraded mode... and there was gnashing of teeth.
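
If LVM really is grabbing the raw partition before md3 assembles, I'd expect something like this to show vg1's physical volume as /dev/sda4 rather than /dev/md3 (an untested guess):

    pvs -o pv_name,vg_name
    mdadm --detail /dev/md3    # should show the array degraded,
                               # with the sda4 slot missing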

So now I am stuck.  Anyone?  I thought I understood enough about RAID-1
and LVM2, but perhaps not.  Is LVM interfering by starting up before
the md devices are ready?  This only happens with md3; the others
are working fine.
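
If it is an ordering problem, one thing I'm tempted to try is restricting LVM's device scan to the md devices in /etc/lvm/lvm.conf, so it can never latch onto a raw member partition.  Something like this (untested; the filter presumably also has to end up in the initrd, and the paths will differ on a devfs system):

    # in the devices { } section of /etc/lvm/lvm.conf
    filter = [ "a|^/dev/md|", "r|.*|" ]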

Thanks,

--
Lucas



