
Re: Upgrading from Lenny to Squeeze



Marc Shapiro wrote:
> I am not sure of the timing, but I think it was right after
> upgrading the kernel that my reboot started having issues.  After it
> mounts the root partition, which it does without problems, it
> complains that it can not stat the swap partition, which is on LVM.
> If I just press <ENTER> to continue, it goes on just fine and
> initializes the swap and mounts all of my other partitions which are
> also on LVM in the same volume group.

For debugging I would comment out the swap line in /etc/fstab.  Just
for debugging; put it back later.  Then the boot will not try to
activate swap.  After booting, add the swap line back into /etc/fstab
and activate it manually with 'swapon -a' so you can see any error
messages directly.  If things are really in a messed-up state, such as
the swap header getting mangled so that the partition is no longer
identified as swap, you could recreate it with 'mkswap'.  But a
mangled header would be worrying in itself: you would want to know
what caused it and whether anything else got mangled too.
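As a sketch, the sequence above could look like this.  The volume
group and device names here are invented, and the fstab edit is shown
against a throwaway copy rather than the real file:

```shell
# Work on a throwaway copy of fstab; device names below are invented.
f=/tmp/fstab-sample
printf '%s\n' \
  '/dev/mapper/vg0-root /    ext3 defaults 0 1' \
  '/dev/mapper/vg0-swap none swap sw       0 0' > "$f"

# 1. Comment out the swap line so boot no longer touches it.
sed -i 's|^/dev/mapper/vg0-swap|#&|' "$f"
grep swap "$f"

# 2. After a clean boot, restore the line and activate swap by hand so
#    any error message is printed directly:
#      swapon -a
# 3. Only if the header really is mangled (this destroys the old header):
#      mkswap /dev/mapper/vg0-swap
```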

> Right after mounting all of the partitions that it should, it then
> tries to mount all of my removable drives from /etc/fstab.  All of
> those are marked noauto, so they should not be getting
> mounted. Since there is nothing there to mount, I get dropped into a
> shell to fix things.  Since none of those drives SHOULD be mounted I
> just Ctl-D to get out and the boot process continues on to the end
> successfully.

For debugging I would be inclined to save those entries off elsewhere
and remove them from the file, so that you are not depending upon the
noauto flag.  It should work either way, and I have never had that
problem, but something is not happy.  Simplify things to converge on
the cause.

You mentioned lvm.  I worry that perhaps your upgrade from lvm to
lvm2 didn't go smoothly.  It has always worked smoothly and
trouble-free for me, so I have no debugging hints there, but I would
look very closely at that subsystem to make sure it is in a happy
state.  I worry that it might be causing trouble.
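A few read-only queries can show whether lvm2 is in a sane state.
These are standard lvm2 commands; the volume group name is whatever
yours happens to be:

```shell
# Read-only queries; none of these change anything on disk.
#   pvs            # are the physical volumes seen?
#   vgs            # is the volume group present and complete?
#   lvs            # are all logical volumes, including swap, listed?
#   ls -l /dev/mapper/
# If the group is somehow not active, activating it by hand will
# surface any errors directly:
#   vgchange -ay
```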

Since the lvm drivers are needed in the initrd, I would make sure
that the initrd is rebuilt, possibly again, after making sure that
lvm is happy.  You can do this with dpkg-reconfigure like this:

  # dpkg-reconfigure linux-image-2.6.32-5-686

That should rebuild the initrd for that kernel image.  That should
always be safe.
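Alternatively, the initrd can be rebuilt directly with
update-initramfs, which is what ends up doing the work underneath.
The kernel version below matches the one above; adjust to yours:

```shell
# Rebuild the initrd for a specific kernel version:
#   update-initramfs -u -k 2.6.32-5-686
# Then confirm that the lvm bits actually made it into the image:
#   lsinitramfs /boot/initrd.img-2.6.32-5-686 | grep lvm
```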

Note that if you are using mdadm raid, there is a change in Squeeze:
partitions marked for raid autodetection are no longer automatically
assembled into a raid.  Apparently for the good reason that people
trying to recover a corrupted set of disks didn't want the system
automatically assembling them and probably mangling a crashed disk
system further.  So arrays now have to be explicitly listed in the
mdadm.conf file, and that file must also be current in the initrd.
Rebuilding the initrd as above will update it from the current
mdadm.conf contents.
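If you are on mdadm, the usual recipe is to regenerate the config
from the currently running arrays and then refresh the initrd.  The
device name and UUID in the example line are made up:

```shell
# Regenerate mdadm.conf from the currently assembled arrays:
#   /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
# (or inspect the scan output by hand first:  mdadm --detail --scan)
# Then rebuild the initrd so the copy inside it matches:
#   update-initramfs -u
# Afterwards the file should list each array explicitly, e.g.:
#   ARRAY /dev/md0 metadata=0.90 UUID=...
```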

> I got lots of warnings about lines in /var/lib/dpkg/status having an
> invalid character in it.

There have been some complaints about status file problems on this
mailing list recently.  Here is one recent thread:

  http://lists.debian.org/debian-user/2011/10/msg01569.html
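To find the offending lines yourself, a locale-pinned grep works.
Shown here against a small sample file standing in for
/var/lib/dpkg/status so nothing real is touched:

```shell
# Sample stand-in for /var/lib/dpkg/status with a stray non-ASCII byte.
printf 'Package: foo\nMaintainer: J\303\274rgen\n' > /tmp/status-sample

# In the C locale, any byte outside printable ASCII matches this class;
# grep -n prints the line number and content of each bad line.
LC_ALL=C grep -n '[^[:print:][:space:]]' /tmp/status-sample
```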

Bob
