
Re: mounting LVM partitions fails after etch upgrade



> yes, / on md0 does get fsck'd cleanly, whether in single-user or
> 'normal' boot. I can get into a root shell w/o any filesystem-related
> errors.

Good.  Now if only Debian's single-user mode didn't start all kinds of
extras that need /usr and /var....
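If you ever need a truly minimal shell, one trick (just a sketch,
assuming grub; lilo takes the same parameter at its boot prompt) is to
skip the init scripts entirely and mount things by hand:

    # append to the kernel line at the boot loader prompt:
    #   init=/bin/sh
    # the shell you get has / mounted read-only, so:
    mount -o remount,rw /
    mount -t proc proc /proc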

> the problem is all the other mounts, which reside on LVM on md1. fsck
> tells me that there are hundreds of inodes with thousands of illegal
> blocks.
>
> I never had any problems related to fs corruption, and I don't see how
> a simple system upgrade could cause this. So I'm still thinking that
> something with the raid or lvm setup is screwed, but I don't know what
> or why.
>
> As you can probably tell, I have never dealt with fixing a broken fs,
> but I'm afraid that running e2fsck would completely screw my data.
> What I primarily want is not a fs w/o errors but to rescue as much
> data as possible...

You mean you don't have backups?  On which fs is the non-backed-up data?

I do have backups of the most important files on DVDs.  Everything else
was (partially, as space allowed...) backed up to other logical volumes,
but everything except / was in volumes on the same volume group on the
same disk array, making this pretty useless :(
I never even remotely imagined the possibility of all filesystems
becoming corrupt at once.  Mainly I'd like to get back my /home and
/home/vpopmail partitions.

> >Note that e2fsck can take several passes.  Also, you don't want the -a
> >option (which is the backward-compatible version of -p) which exists
> >with error code 4 if a problem would require human intervention, since
> >you are there to intervene and don't want it to exit.

If I remember previous posts in this thread correctly, this all started
after an upgrade: you got an fsck warning that said to run fsck
manually, and instead of following that advice you forced a normal
mount of the unclean filesystems.  That was probably not the smartest
of possible actions...

Any data that was going to be corrupted has probably already been
corrupted; e2fsck isn't going to make that worse.

Boot into single-user mode and ensure that the non-root filesystems are
completely unmounted.  Then run e2fsck -f as many times as it takes to
come back clean.  This gets the fs into a consistent state, but you may
have already lost data.
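Something like this, for example (the LV name is only a guess from your
description; substitute whatever your volume group actually calls it):

    umount /home                     # and any other mounted LVs
    e2fsck -f /dev/mapper/vg0-home
    echo $?                          # 0 means clean; re-run until it is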

If you had full backups of your data, at this point it's probably easier
to reinstall.  Remember that some of the lost data will be Debian's,
e.g. corrupted files in /usr/bin.  If it were me, I'd get the partition
that had my data fixed, back up the data, then do a clean install.
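Concretely, once e2fsck comes back clean, something along these lines
(device name and destination are assumptions, adjust for your setup):

    mount -o ro /dev/mapper/vg0-home /mnt
    tar czf /backup/home-backup.tar.gz -C /mnt .
    umount /mnt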

What I'll do is the following: I'll rip out one of the mirrors so that
I always have all the data in its current state no matter what happens.
Then I'll install a fresh disk, boot the system with some live-cd, copy
an image of the old disk to the new one, and see how much I can get out
of it with a tool like e2salvage.  From there I'll install a clean
system.
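If I've read the mdadm manpage right, pulling the mirror should go
something like this (md1 and sdb2 are guesses at my own layout, I'll
double-check the device names before touching anything):

    mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
    # then, booted from the live-cd, image the old disk onto the new one:
    dd if=/dev/sdb of=/dev/sdc bs=1M conv=noerror,sync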

I'll also have a look at alternative filesystems.

For your next install, you may want to review the list archives for
threads on the choice of filesystems.  Each (except perhaps reiser now)
has people who swear by it.  Personally, I swear by JFS after a bad
experience with reiserfs and an experience similar to yours with ext3,
where I _did_ follow the instruction to do a manual fsck; it still hosed
my data.  I had backups.

FS corruption is nasty, to put it politely.  You have my sympathies.
Good luck.

Thanks for all your help!

cheers,
- Dave.


