
Bug#316158: Installation Report - grub install weirdness



Package: installation-reports
Severity: normal

Debian-installer-version: Official Sarge 3.1r0a Business Card
uname -a: 2.6.8-2-686-smp
Date: 28 June 2005
Method: Business Card CD Install

Machine: Dell PowerEdge 2650
Processor: Intel Xeon 2.2 GHz
Memory: 1.25 GB
Root Device: SCSI - Dell PERC 3/Di running RAID5 (aacraid driver)
Root Size/partition table:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/volgroup-root
                      14679612    352104  14327508   3% /
/dev/sda1               248948     40400    208548  17% /boot

Output of lspci and lspci -n:

'lspci':
0000:00:00.0 Host bridge: ServerWorks CMIC-WS Host Bridge (GC-LE
chipset) (rev 13)
0000:00:00.1 Host bridge: ServerWorks CMIC-WS Host Bridge (GC-LE chipset)
0000:00:00.2 Host bridge: ServerWorks CMIC-LE
0000:00:04.0 ff00: Dell Embedded Remote Access or ERA/O
0000:00:04.1 ff00: Dell Remote Access Card III
0000:00:04.2 0c07: Dell Embedded Remote Access: BMC/SMIC device
0000:00:0e.0 VGA compatible controller: ATI Technologies Inc Rage XL
(rev 27)
0000:00:0f.0 Host bridge: ServerWorks CSB5 South Bridge (rev 93)
0000:00:0f.1 IDE interface: ServerWorks CSB5 IDE Controller (rev 93)
0000:00:0f.2 USB Controller: ServerWorks OSB4/CSB5 OHCI USB Controller
(rev 05)
0000:00:0f.3 ISA bridge: ServerWorks CSB5 LPC bridge
0000:00:10.0 Host bridge: ServerWorks CIOB-X2 PCI-X I/O Bridge (rev 03)
0000:00:10.2 Host bridge: ServerWorks CIOB-X2 PCI-X I/O Bridge (rev 03)
0000:00:11.0 Host bridge: ServerWorks CIOB-X2 PCI-X I/O Bridge (rev 03)
0000:00:11.2 Host bridge: ServerWorks CIOB-X2 PCI-X I/O Bridge (rev 03)
0000:03:06.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5701
Gigabit Ethernet (rev 15)
0000:03:08.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5701
Gigabit Ethernet (rev 15)
0000:04:08.0 PCI bridge: Intel Corp. 80303 I/O Processor PCI-to-PCI
Bridge (rev 01)
0000:04:08.1 RAID bus controller: Dell PowerEdge Expandable RAID
Controller 3/Di (rev 01)

'lspci -n':
0000:00:00.0 0600: 1166:0012 (rev 13)
0000:00:00.1 0600: 1166:0012
0000:00:00.2 0600: 1166:0000
0000:00:04.0 ff00: 1028:000c
0000:00:04.1 ff00: 1028:0008
0000:00:04.2 0c07: 1028:000d
0000:00:0e.0 0300: 1002:4752 (rev 27)
0000:00:0f.0 0600: 1166:0201 (rev 93)
0000:00:0f.1 0101: 1166:0212 (rev 93)
0000:00:0f.2 0c03: 1166:0220 (rev 05)
0000:00:0f.3 0601: 1166:0225
0000:00:10.0 0600: 1166:0101 (rev 03)
0000:00:10.2 0600: 1166:0101 (rev 03)
0000:00:11.0 0600: 1166:0101 (rev 03)
0000:00:11.2 0600: 1166:0101 (rev 03)
0000:03:06.0 0200: 14e4:1645 (rev 15)
0000:03:08.0 0200: 14e4:1645 (rev 15)
0000:04:08.0 0604: 8086:0309 (rev 01)
0000:04:08.1 0104: 1028:000a (rev 01)

Base System Installation Checklist:
[O] = OK, [E] = Error (please elaborate below), [ ] = didn't try it

Initial boot worked:    [O]
Configure network HW:   [O]
Config network:         [O]
Detect CD:              [O]
Load installer modules: [O]
Detect hard drives:     [O]
Partition hard drives:  [O]
Create file systems:    [O]
Mount partitions:       [O]
Install base system:    [O]
Install boot loader:    [E]
Reboot:                 [E]

Comments/Problems:

With this install I again ran into the issue where grub gives the
message: "The file /boot/grub/stage1 not read correctly." I've searched
before and found discussions suggesting that incorrectly set partition
types were the cause, but I verified that the partitioning was correct.
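
For reference, a rough sketch of the kind of check I mean, done from a
shell (the device name is taken from the df output above; the grub
names (hd0) for the first SCSI disk and (hd0,0) for the /boot partition
are assumptions based on this layout):

    # Check the partition table and types (83 = Linux, 8e = Linux LVM, 82 = swap)
    fdisk -l /dev/sda

    # Try the stage1 setup by hand from the grub shell; with a separate
    # /boot partition, stage1 lives at /grub/stage1 on that partition.
    grub
    grub> root (hd0,0)
    grub> find /grub/stage1
    grub> setup (hd0)
    grub> quit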

It doesn't seem to occur very often, but I have seen it a few times,
and others I've talked to have mentioned it as well.

In this particular setup, I created a /boot partition (256MB) at the
beginning of the drive, a swap partition (512MB) at the end, and used
the remaining space as a PV for LVM. In the VG, I created two logical
volumes, / and /var. From my basic testing, the only filesystems that
seem to have an effect are those on /boot and /. It also appears to be
independent of whether / is a native partition or an LVM LV.
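
For clarity, that layout corresponds roughly to the commands below. The
installer's partitioner did the actual work, so this is only a sketch;
the partition device names/numbering and the /var size are assumptions
(the VG name and the size of / are taken from the df output above):

    # /dev/sda1 (~256MB, start of disk)  -> /boot
    # /dev/sda2 (remaining space)        -> LVM physical volume
    # /dev/sda3 (~512MB, end of disk)    -> swap
    pvcreate /dev/sda2
    vgcreate volgroup /dev/sda2
    lvcreate -L 14G -n root volgroup    # becomes /
    lvcreate -L 4G  -n var  volgroup    # becomes /var (size is a guess)
    mkswap /dev/sda3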

I noticed that the filesystem on each volume seemed to affect the
success of the grub installation, so I did a number of test installs
with the following filesystems and results. Filesystems are indicated
as the pair (/boot fs, / fs), and all installs were done in expert mode:

Native partitions:
    1. (, reiser) - Success (no separate /boot - I assume all
filesystems will work in this configuration)

    2. (ext2, reiser)   - FAILED
    3. (ext2, ext2)     - Success
    4. (reiserfs, ext3) - Unknown (The installation hung on this test,
and I never repeated it)

root on LVM:
    5. (ext2, reiser)   - FAILED (After this test I assumed that LVM
was not a factor; that may not be correct, but I've treated it as such
for the remaining tests)
    6. (reiser, reiser) - Success
    7. (ext3, reiser)   - FAILED
    8. (ext3, ext3)     - FAILED
    9. (ext2, ext3)     - FAILED

    (The remaining tests used non-ext/reiser filesystems; I don't have
enough experience with grub on XFS/JFS to know whether the results are
useful)
   10. (XFS, JFS)      - FAILED
   11. (XFS, XFS)      - FAILED
   12. (JFS, JFS)      - FAILED

My initial wish was to use reiserfs on /var and /, and ext2 on /boot. I
found it interesting that (ext2, ext2) worked, while (ext3, ext3) did
not. I also tried combinations of reiser and ext2 to no avail.
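
In case it's useful for reproducing or working around this, rerunning
the boot loader step by hand from the installer's shell should also be
possible - only a sketch, assuming the new system is still mounted at
/target and the grub package was already installed into it:

    # From the installer's shell (e.g. the second console):
    mount -t proc proc /target/proc
    chroot /target grub-install /dev/sda
    umount /target/proc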

Any ideas what could be the cause of the grub installation failures?

Thanks,
Joel Johnson



