Bug#615074: installation-reports: LVM partitions unusable during lenny->squeeze upgrade
On 26.02.11 Ferenc Wagner (firstname.lastname@example.org) wrote:
> Hilmar Preusse <email@example.com> writes:
> > On 25.02.11 Ferenc Wagner (firstname.lastname@example.org) wrote:
> >> Hilmar Preusse <email@example.com> writes:
> > I'm pretty sure there are none. I can provide the lvm config
> > before and after the upgrade if this is helpful.
> Only if you got a conffile conflict during the LVM upgrade.
> Otherwise you must be using the default filter, which allows hda
> and sda alike.
No, there were none. I never touched the lvm config manually.
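For reference, and assuming the stock squeeze lvm2 package, the default device filter in /etc/lvm/lvm.conf accepts every block device path, so hda and sda names pass alike. A sketch of the relevant section:

```
devices {
    # Default filter: accept ("a") any device path, so both /dev/hda*
    # and /dev/sda* are scanned for PV labels.
    filter = [ "a/.*/" ]
}
```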
> >> I wonder why it didn't happen in your case... Wasn't this a
> >> problem with missing /dev/mapper/* nodes instead?
> > However right after the first reboot all view commands
> > (vgdisplay, pvdisplay etc.) complained that some PVs with
> > specific UUIDs could not be accessed/listed(?).
> This suggests that *some* PVs must have been available, as the VG was
> recognized. Is it possible that after the first reboot you somehow
> temporarily lost some PVs (but not all)?
I don't really remember exactly, but I *believe* that pvdisplay
reported *all* PVs to be unavailable/not accessible. And yes,
vgdisplay too reported the PVs/VGs(?) as unavailable/not
accessible (all of them, IIRC; the screen output is lost).
I have just upgraded another system from lenny to squeeze, where even
/ is on LVM, and had no problems. I can't really tell what is specific
to my system, and I'm not sure we can find out what was going on here.
I simply suggest adding another point to the release notes (right
between 4.4.5. and 4.4.6.): people should check whether their
LVM-based file systems can be mounted after the reboot. If not, the
LVM software should be upgraded manually at this point and all
missing LVM-based file systems mounted manually.
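Such a check after the first reboot could be sketched as follows. The
commands are standard lvm2 tools; the manual upgrade step and the idea
of activating everything with vgchange are assumptions for
illustration, not tested on an affected system:

```shell
# After the first reboot, verify as root that all PVs and VGs are visible:
pvs     # every physical volume should be listed without "missing" warnings
vgs     # each volume group should show the expected number of PVs

# If PVs are reported missing, upgrade the LVM userland first:
apt-get install lvm2

# Then rescan, activate the volume groups, and mount what is missing:
vgscan
vgchange -ay
mount -a
```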
I know the suggestion comes late; maybe this subsection could be
introduced in a point release of the release notes.