
Re: Cannot load Wheezy in a "virgin" desktop -- long



On Mon, Jan 06, 2014 at 06:25:24PM -0500, Ken Heard wrote:
> I apologize in advance for the length of this post.  Since however I do not
> know what information is necessary to determine why this installation
> failed I am including everything which I have the least suspicion may be
> contributing to the failure.

Yay!
 
> So I suppose the real questions at this point are the following.  What
> purpose does this file serve?   Is the invalidity of the DMAR referred to
> in the "WARNING" line above sufficient to cause the DE not to load?

Nope. You can ignore this.
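
The DMAR table is supplied by the BIOS; it is the ACPI table that
describes the IOMMU for DMA remapping, and when the kernel decides the
table is invalid it simply skips DMA remapping, which is harmless for
ordinary desktop use. If you want to see the whole complaint in context,
something like

  dmesg | grep -i -e dmar -e iommu

will show it -- but it has nothing to do with the desktop environment
failing to start.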


> 
> The other part of the dmesg output which concerns me follows.
> ---------------------------------------------------------------------------
> [    1.240960]  sdb: sdb1 sdb2
> [    1.241103] sd 1:0:0:0: [sdb] Attached SCSI disk
> [    1.260609]  sda: sda1 sda2
> [    1.260755] sd 0:0:0:0: [sda] Attached SCSI disk
> [    1.593645] md: md0 stopped.
> [    1.594503] md: bind<sdb1>
> [    1.594659] md: bind<sda1>
> [    1.595242] md: raid1 personality registered for level 1
> [    1.595394] bio: create slab <bio-1> at 1
> [    1.595484] md/raid1:md0: active with 2 out of 2 mirrors
> [    1.595541] md0: detected capacity change from 0 to 248315904
> [    1.596423]  md0: unknown partition table
> [    1.683228] Refined TSC clocksource calibration: 3392.144 MHz.
> [    1.683278] Switching to clocksource tsc
> [    1.797451] md: md1 stopped.
> [    1.797959] md: bind<sdb2>
> [    1.798118] md: bind<sda2>
> [    1.798620] md/raid1:md1: not clean -- starting background reconstruction
> [    1.798673] md/raid1:md1: active with 2 out of 2 mirrors
> [    1.798731] md1: detected capacity change from 0 to 1499865088000
> [    1.806447]  md1: unknown partition table
> [    1.999928] device-mapper: uevent: version 1.0.3
> [    2.000006] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised:
> dm-devel@redhat.com
> [    2.195467] EXT4-fs (dm-0): INFO: recovery required on readonly
> filesystem
> [    2.195518] EXT4-fs (dm-0): write access will be enabled during recovery
> [    2.263170] md: resync of RAID array md1
> [    2.263216] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
> [    2.263264] md: using maximum available idle IO bandwidth (but not more
> than 200000 KB/sec) for resync.
> [    2.263330] md: using 128k window, over a total of 1464712000k.
> [    2.277910] EXT4-fs (dm-0): recovery complete
> [    2.319337] EXT4-fs (dm-0): mounted filesystem with ordered data mode.
> Opts: (null)
> --------------------------------------------------------------------------
> The lines above which I do not understand are 1.596423 and 1.806447, both
> of which say that the system is not aware of partition tables for md0 and
> md1.  Both are RAID1 arrays; md0 contains only the /boot partition, which
> happens to be empty because the boot loader is in the MBR; and md1 is the
> only physical volume in LVM volume group TH1.  All the other partitions
> are logical volumes in that volume group.
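
For what it's worth, those two "unknown partition table" lines are
expected and harmless: md0 is (presumably) formatted with a filesystem
directly and md1 is an LVM physical volume, so neither array has a
partition table of its own, and the kernel prints that message for any
block device on which it cannot find one. If you want to confirm how the
arrays are being used, something like

  mdadm --detail /dev/md0
  mdadm --detail /dev/md1
  pvs

(run as root) should show the state of both arrays and that md1 is the
physical volume behind TH1.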
> 
> The following quote neither comes from the output of dmesg nor is part of
> syslog.  Instead it appears at the end of the information which scrolls by
> on the screen as part of the boot process.
> --------------------------------------------------------------------------
> [ ok ] setting up LVM Volume Groups ... done.
> [ .... ] Starting remaining crypto disks .... [info] TG1-swap_crypt
> (starting) ... TG1 -swap_crypt (started) ... TG1-swap_crypt (running) ...
> [info] TG1-tmp_crypt (starting) ...
> [  ok  mp_crypt (started ) ... done.  {sic}
> [ ok ] Activating lvm and md swap ... done.
> [....]  Checking file systems ... fsck from util-linux 2.20.1
> fsck.ext4: Unable to resolve 'UUID=a5fdb692-2b34-4e18-8fd5-c24dde957071'
> fsck.ext4: No such file or directory while trying to open
> /dev/mapper/TH1-ken
> Possibly non-existent device?
> fsck.ext4: No such file or directory while trying to open
> /dev/mapper/TH1-martin
> Possibly non-existent device?
> fsck.ext2: No such file or directory while trying to open
> /dev/mapper/TH1-tmp_crypt
> Possibly non-existent device?
> fsck.ext4: No such file or directory while trying to open
> /dev/mapper/TH1-var
> Possibly non-existent device?
> fsck died with exit status 8
> failed (code 8).  {code 8 means "an operational error"  -- my comment.}
> [....]  File system check failed.  A log is being saved in
> /var/log/fsck.checkfs if
> [FAIL] the location is writable.  Please repair the file system manually.
> ... failed!
> [....] A maintenance shell will now be started.  CONTROL-D will terminate
> this [warning] shell and resume system boot. ... (warning).
> Give root password for maintenance
> (or type Control-D to continue):
> ----------------------------------------------------------------------------
> I am reasonably certain that this failure is the main -- possibly the
> only -- reason why the boot process fails to complete and load the DE.
> I am also at a loss as to how to fix it.  The /etc/fstab file shows that
> those four partitions -- with file type ext4 -- are mounted in accordance
> with the partitions created during installation.  The output of the
> command blkid also correctly shows the same information.  In maintenance
> mode I was able to access all the "failed" mount points and write files
> to them.

Please show:

cat /etc/fstab

cat /etc/mdadm/mdadm.conf

pvdisplay

vgdisplay

lvdisplay

My theory is that you have leftover LVM configs that your
boot-time fsck is finding and complaining about.
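
If you want to poke at that from the maintenance shell, a rough sketch
(TH1 being your volume group as described above -- adjust to taste):

  ls -l /dev/mapper/        # which device-mapper nodes actually exist
  dmsetup ls                # the same, straight from device-mapper
  vgs; lvs                  # what LVM thinks the VG and LVs look like
  vgchange -ay TH1          # activate the volume group by hand

If the /dev/mapper/TH1-* nodes only show up after the vgchange, the
volume group is not being activated early enough for the boot-time fsck,
which would point at an initramfs or startup-ordering problem rather
than at the filesystems themselves.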

-dsr-

