
Re: Upgrade to 2.6.32-5-amd64 failing miserably



martin f krafft wrote:

> also sprach lrhorer <lrhorer@satx.rr.com> [2011.02.01.2123 +0100]:
>> far too quickly to be seen, but then an error pops up concerning
>> an address space collision of some PCI device. Then it shows three
>> errors for RAID devices md1, md2, and md3, saying they are already
>> in use.
> 
> This looks like a hardware problem, causing mdadm to fail to
> assemble.

        I guess I did not make myself clear.  The system boots perfectly, and
continues to do so, under 2.6.32-3-amd64.  Since the arrays assemble and
run without error under 2.6.32-3-amd64, there is no way enough members
of the arrays could be bad to prevent them from assembling under
2.6.32-5-amd64.  Not only that, but if the drives were bad, GRUB would
not even come up, since /boot is /dev/md1 and all three arrays are
assembled from the same two drives.  Bottom line: md1, md2, and md3 are
just fine.

>> Immediately thereafter the system shows errors concerning  the RAID
>> targets being already in use, after which point the system complains
>> it can't mount / (md2), /dev, /sys, or /proc (in that order) because
>> the sources do not exist (if /dev/md2 does not exist, how can it be
>> busy?). Thereafter, of course, it fails to find init, since / is not
>> mounted. It then tries to run BusyBox, but Busybox complains:
>> 
>> /bin/sh: can't access tty; job control turned off
> 
> This is normal. Don't worry about that.
> 
> Instead, try to assemble the arrays manually. And provide me with
> a lot more information, e.g.

        How could I assemble the arrays manually when the system won't boot? 
They don't need to be assembled manually under 2.6.32-3-amd64, because
they assemble and boot perfectly well automatically in that release.
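
        For what it is worth, if the BusyBox shell under -5 ever gave me a
usable prompt, I gather the manual assembly would be something along
these lines (member names taken from the working -3 boot shown below):

    mdadm --assemble --scan                        # everything in mdadm.conf
    mdadm --assemble /dev/md2 /dev/hda2 /dev/hdb2  # or one array spelled out

but there is no prompt to type it at.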

>   ls -l /dev/md* /dev/[sh]d*
brw-rw---- 1 root disk 3,   0 Feb  2 03:23 /dev/hda
brw-rw---- 1 root disk 3,   1 Feb  2 03:23 /dev/hda1
brw-rw---- 1 root disk 3,   2 Feb  2 03:23 /dev/hda2
brw-rw---- 1 root disk 3,   3 Feb  2 03:23 /dev/hda3
brw-rw---- 1 root disk 3,  64 Feb  2 03:23 /dev/hdb
brw-rw---- 1 root disk 3,  65 Feb  2 03:23 /dev/hdb1
brw-rw---- 1 root disk 3,  66 Feb  2 03:23 /dev/hdb2
brw-rw---- 1 root disk 3,  67 Feb  2 03:23 /dev/hdb3
brw-rw---- 1 root disk 9,   0 Feb  2 03:22 /dev/md0
brw-rw---- 1 root disk 9,   1 Feb  2 03:22 /dev/md1
brw-rw---- 1 root disk 9,   2 Feb  2 03:22 /dev/md2
brw-rw---- 1 root disk 9,   3 Feb  2 03:22 /dev/md3
brw-rw---- 1 root disk 8,   0 Feb  2 03:23 /dev/sda
brw-rw---- 1 root disk 8,  16 Feb  2 03:23 /dev/sdb
brw-rw---- 1 root disk 8,  32 Feb  2 03:23 /dev/sdc
brw-rw---- 1 root disk 8,  48 Feb  2 03:23 /dev/sdd
brw-rw---- 1 root disk 8,  64 Feb  2 03:23 /dev/sde
brw-rw---- 1 root disk 8,  80 Feb  2 03:23 /dev/sdf
brw-rw---- 1 root disk 8,  96 Feb  2 03:23 /dev/sdg
brw-rw---- 1 root disk 8, 112 Feb  2 03:23 /dev/sdh
brw-rw---- 1 root disk 8, 128 Feb  2 03:23 /dev/sdi
brw-rw---- 1 root disk 8, 144 Feb  2 03:23 /dev/sdj

        I don't see the point of this at all.  There is nothing to guarantee
the drive targets will be the same under the new kernel, or for that
matter from one boot to the next under the old one.
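
        (As I understand it, that is exactly why mdadm identifies arrays by
UUID rather than by device name; the mdadm.conf entries are presumably
along the lines of

    ARRAY /dev/md1 UUID=4cde286c:0687556a:4d9996dd:dd23e701

so whatever letters the drives come up as should not matter.)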

>   cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : inactive sdj[9](S)
      1465137560 blocks super 1.2

md3 : active (auto-read-only) raid1 hda3[0] hdb3[1]
      204796548 blocks super 1.2 [2/2] [UU]
      bitmap: 0/196 pages [0KB], 512KB chunk

md2 : active raid1 hda2[2] hdb2[1]
      277442414 blocks super 1.2 [2/2] [UU]
      bitmap: 5/265 pages [20KB], 512KB chunk

md1 : active raid1 hda1[0] hdb1[1]
      6144704 blocks [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

>   mdadm -Es
ARRAY /dev/md1 UUID=4cde286c:0687556a:4d9996dd:dd23e701
ARRAY /dev/md/2 metadata=1.2 UUID=d45ff663:9e53774c:6fcf9968:21692025
name=Backup:2
ARRAY /dev/md/3 metadata=1.2 UUID=51d22c47:10f58974:0b27ef04:5609d357
name=Backup:3
ARRAY /dev/md/0 metadata=1.2 UUID=431244d6:45d9635a:e88b3de5:92f30255
name=Backup:0

        The arrays are fine.  This is not an issue with any of the arrays
themselves.
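
        Just to put the hardware theory completely to bed, I can always kick
off a consistency check from the working -3 kernel; if I understand the
md sysfs interface correctly, something like

    echo check > /sys/block/md2/md/sync_action   # read-only check of md2
    cat /proc/mdstat                             # watch its progress
    cat /sys/block/md2/md/mismatch_cnt           # should be at or near 0

will read every sector of both members without touching the data.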

>   dmesg

        See next message.

> It would really help if you could also enable initramfs debugging
> (http://wiki.debian.org/InitramfsDebug) and provide us with the
> output file.

        Well, this may be getting a bit closer, but still no cigar.  Setting
the "break=premount" kernel parameter causes the kernel to halt booting
and load the BusyBox executable, but since there is still no access to
the tty, the system is once again locked at that point.  I need
something which will allow me to inspect things - or at least obtain an
output that can be saved - during the boot process.
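
        If I am reading that wiki page correctly, the next thing to try is
probably the "debug" option as well, so the initramfs scripts get
traced; something along these lines on the kernel line in GRUB (the
kernel path and root= are just my best guesses for this box):

    kernel /vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro debug=vc break=premount

A bare "debug" supposedly writes the trace to a file under
/dev/.initramfs/, which does me little good if the machine never comes
up, but "debug=vc" is supposed to dump it to the console, so I can at
least read - or photograph - the last thing it does before it wedges.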

