
Re: [Evms-devel] Re: RFP: EVMS



Matt, 

I've been looking through your info trying to figure out the problem you're 
having. The evms-rediscover.txt messages indicate you have six SCSI disks on 
your system, which EVMS identifies as sda through sdf. sda is 4.2 GB, sdb and 
sdc are 8.4 GB, and sdd, sde, and sdf are all 4.0 GB.

Then the segment manager (in the kernel) finds 3 partitions on sda, 1 
partition on sdb, 1 on sdc, 2 on sdd, and 1 on sde. It does not find any 
partitions on sdf. Based on your vg_concat.txt messages, it looks like sdf 
should have at least 1 partition, which is used as an LVM PV. Since the 
segment manager in the kernel doesn't find that partition, the LVM region 
manager doesn't have access to it to add to the VG. Hence all of the weird 
kernel messages.
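(A quick way to cross-check this, assuming the standard /proc and /dev/evms 
locations, is to compare what the normal kernel partition code sees with what 
EVMS exposes:

    cat /proc/partitions
    ls -lR /dev/evms

If the partition on sdf shows up in /proc/partitions but has no corresponding 
node under /dev/evms, that points at the in-kernel segment manager rather than 
the disk itself.)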

However, you said the engine finds everything correctly when you run 
evms_vgscan. Try running the GUI again and see if the user-space segment 
manager is finding all of the segments for sdf. If it is, then there seems to 
be an inconsistency between segment discovery in the kernel and in 
user-space. If we can track that down and fix it, it should solve your problem.

One thing that would help with debugging is a copy of your engine log. Rerun 
evms_vgscan with the "-d" option, and then send me a copy of 
/var/log/evmsEngine.log. That should tell us (in excruciating detail) what 
user-space is doing.
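For example (compressing the log first is optional, it just tends to get big):

    evms_vgscan -d
    gzip -9 /var/log/evmsEngine.log

and then mail me the resulting evmsEngine.log.gz.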

Unfortunately, the segment manager kernel messages are not as helpful as they 
could be. Tomorrow I will try to clean up some of the kernel messages so we 
can get a more accurate idea of what is happening on your system.

I'm not sure what to tell you about the modules problem, or why the kernel 
faulted when you ran the user-space tools. Currently, if you want to
build EVMS as modules, you will need an init ramdisk that contains all of the 
required EVMS modules, and all necessary device driver modules.  This is 
necessary because EVMS discovery runs in the kernel before the root fs is 
mounted. If you are interested in more details about EVMS kernel discovery, 
try browsing our archives from a couple weeks ago. Andreas and I were 
discussing the current method, and possible future changes.
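(Rough sketch only, since the details vary by distro and the module names below 
are placeholders rather than anything official. On a Debian-style setup you 
would list the modules in /etc/mkinitrd/modules and rebuild the ramdisk, e.g.:

    # /etc/mkinitrd/modules -- names below are examples only
    aic7xxx        # whichever low-level SCSI driver you actually use
    evms           # EVMS core, plus whatever EVMS plugin modules you built

    mkinitrd -o /boot/initrd-2.4.17-evms.img 2.4.17

and then point your boot loader's initrd line at the new image.)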

If you are inclined, it would be interesting to see whether you have the same 
problems when you build EVMS and the SCSI drivers into the kernel instead of 
as modules.
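(That would mean answering 'y' instead of 'm' for the SCSI pieces and the EVMS 
options in your kernel config. The EVMS option name below is from memory, so 
check your Config.in for the exact spelling:

    CONFIG_SCSI=y
    CONFIG_BLK_DEV_SD=y
    CONFIG_SCSI_AIC7XXX=y    # or your low-level SCSI driver
    CONFIG_EVMS=y            # plus the EVMS feature/plugin options you need

With everything built in, boot-time discovery no longer depends on the initrd 
having the right modules.)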

-Kevin



On Monday 31 December 2001 16:15, Matt Zimmerman wrote:
> OK, I now have a functional 2.4.17 with LVM and EVMS.  My setup has a few
> quirks so far:
>
> - I build most things as loadable modules, including one of my SCSI
> drivers, LVM, and the bits of EVMS that support it (I have the MD module
> disabled for now).  Consequently, EVMS doesn't find any devices when it
> starts up. I tried evms_vgscan, and got badness:
>
> Dec 30 20:22:58 mizar Unable to handle kernel paging request at virtual address d89142c0
> Dec 30 20:22:58 mizar printing eip:
> Dec 30 20:22:58 mizar c01650e8
> Dec 30 20:22:58 mizar *pde = 01636067
> Dec 30 20:22:58 mizar *pte = 00000000
> Dec 30 20:22:58 mizar Oops: 0000
> Dec 30 20:22:58 mizar CPU:    0
> Dec 30 20:22:58 mizar EIP:    0010:[<c01650e8>]    Tainted: P
> Dec 30 20:22:58 mizar EFLAGS: 00010286
> Dec 30 20:22:58 mizar eax: 00000004   ebx: d2427900   ecx: 00000000   edx: d89142c0
> Dec 30 20:22:58 mizar esi: c682ff38   edi: c682ff68   ebp: 00000000   esp: c682ff1c
> Dec 30 20:22:58 mizar ds: 0018   es: 0018   ss: 0018
> Dec 30 20:22:58 mizar Process evms_vgscan (pid: 7153, stackpage=c682f000)
> Dec 30 20:22:58 mizar Stack: c682ff38 bffff92c c01667f8 c682ff38 c682ff5c bffff92c c682ff68 00000000
> Dec 30 20:22:58 mizar c016380e c682ff5c c00c3f82 00000000 bffff920 00000000 bffff920 00000000
> Dec 30 20:22:58 mizar 00000000 00000000 40015a38 c0164d06 bffff920 d1254ac0 bffff920 c00c3f82
> Dec 30 20:22:58 mizar Call Trace: [<c01667f8>] [<c016380e>] [<c0164d06>] [<c01355c8>] [<c013ba07>]
> Dec 30 20:22:58 mizar [<c0106c4b>]
>
>   I got things to work by using the little evms_rediscover program from the
> uml tarball on sourceforge, after which evms_vgscan seems happy.
>
> - It only seems to find one of my LVM volume groups (see attached
>   evms-rediscover.text and vg_concat.text).
>
> - Within that group, it apparently doesn't recognize one of my PEs
>   (#4, scsi/host1/bus0/target5/lun0/part1), causing lots of messages about
>   incomplete LE maps, though LVM is happy with everything.  I think it's
>   being referred to as sdf, but I've been using devfs names on this system
>   through several hardware reconfigurations, so I'm not sure where it falls
>   in the order now.
>
> - After running evms_rediscover, when I run evms_vgscan, it sees both of
>   them:
>
> mizar# evms_vgscan
> evms_vgscan -- reading all physical volumes (this may take a while...)
> evms_vgscan -- Found active volume group "lvm/vg_concat"
> evms_vgscan -- Found active volume group "lvm/vg_raid1"
>
>   but there is still only one under /dev/evms/lvm (I'm using DevFS):
>
> mizar:[~] ls -R /dev/evms
> /dev/evms:
> block_device  lvm  sda1  sda2  sda3  sdd2
>
> /dev/evms/lvm:
> vg_concat
>
> /dev/evms/lvm/vg_concat:
> backup  debian-cvs  hercules  music  netcache  tmp
> mizar:[~]
>
>   Both volume groups are visible from the GUI, though, oddly enough.


