
Re: Further installation woes WAS: trouble installing RAID and LVM.



On Mon, Feb 13, 2006 at 11:15:25PM -0500, hendrik@topoi.pooq.com wrote:
> On Tue, Feb 14, 2006 at 01:13:01AM +0000, Peter Colton wrote:
> > On Monday 13 February 2006 20:29, hendrik@topoi.pooq.com wrote:
> > > On Sun, Feb 12, 2006 at 04:24:44AM +0000, Peter Colton wrote:
> > > > On Saturday 11 February 2006 14:09, hendrik@topoi.pooq.com wrote:
> > > > > While installing debian-testing-amd64-netinst (downloaded 2006 02 03)
> > > > > When I got to the point where I get to select "configure software RAID"
> > > > > I am told,
> > > > >                        [!!] Partition disks
> > > > > Before RAID can be configured, the changes have to be written
> > > > >  .....
> > > > > The partition tables of the following devices are changed:
> > > > >           RAID device #0
> > > > > Write the changes to the storage devices and configure RAID?
> > > > >
> > > > > I choose Yes, and am told,
> > > > >
> > > > > The kernel was unable to re-read the partition table on /dev/md/0
> > > > > (Invalid argument).  This means Linux won't know anything about the
> > > > > modifications you made until you reboot.  You should reboot your
> > > > > computer before doing anything with /dev/md/0.
> > > > >
> > > > > Well, rebooting restarts the install, which just gets me to the same
> > > > > point.  It does seem to recognise my RAID, by the way.  That had been
> > > > > set up ages ago.  It just doesn't seem to get past the above issues.
> > > > >
> > > > > If I ignore the lamentations and continue anyway, I can't get
> > > > > any further.  My next step is to configure LVM on that RAID drive.
> > > > > I can't get anywhere with that.  I have an existing LVM partition
> > > > > (111G) on the RAID drive from an earlier practice install, and I want it
> > > > > deleted.  But it refuses to do that.
> > > > >
> > > > > Somehow I suspect it is mishandling the RAID in some subtle way, and
> > > > > possibly finding LVM information on the constituent partitions instead
> > > > > of on the proper RAID device.  But I could be wrong.
> > > > >
> > > > > It's conceivable that this problem is AMD64 specific, but nothing I've
> > > > > seen so far suggests that -- otherwise I'd have posted this to an AMD
> > > > > mailing list.
> > > > >
> > > > > -- hendrik
> > > >
> > > > 	Hello hendrik,
> > > >
> > > > 	The link below should be of help to you. It's a howto for setting up
> > > > a RAID mirror, but it's how you start the install with the sarge installer
> > > > that should be of interest to you.
> > > >
> > > > http://nepotismia.com/debian/raidinstall/part1.html
> > > >
> > > > 	Start the install with the expert26 option and then pick the md module
> > > > for a RAID-enabled kernel.
> > >
> > > I'll have to do it again with the printouts of that page beside me to make
> > > sure, but to the best of my memory, I did install etch in expert mode, and
> > > I did ask for the md installer component.  I also asked for the lvm
> > > component.
> > >
> > > -- hendrik
> > >
> > > >                Regards
> > > >
> > > >                     peter colton
> > 
> > 	Hello hendrik,
> > 
> > 	I think you will need to install the lvmcfg module at the start of an expert 
> > install, and I would say also the md module. It's the RAID 1 method that I am 
> > used to, not LVM.
> 
> Yes.  That's what I did.  I do include the lvmcfg module.  I use raid1, 
> then I specify that the raid1 volume is to be used as a physical volume 
> for LVM.  But I suspect something is wrong with the way my hard disks 
> are set up, and it's interfering with the installation -- as if it is 
> reading inconsistent information from a previous LVM installation.
> 
> I have previously installed to partitions with the i386 sarge, and the 
> AMD64 sarge.  I followed instructions on the web -- well after 
> installation -- about setting up the RAID1 and the LVM.  They seemed 
> to work fine until I rebooted.  I suspect I did not do something 
> right, because when I tried changing the LVM setup it would complain 
> there were two logical volumes with the same name.  My guess is that 
> it had recognised the logical volumes *before* recognising RAID (in 
> fact, it may never have recognised the RAID), and, of course, both 
> hda3 and hdb3 (which I have now moved physically to hdc3) had 
> identical contents.
> 
> So now the two partitions making up the RAID may have different and 
> inconsistent LVM partitioning information.  Even if I delete the 
> RAID-related partitions, and then reconstruct them, the newly created 
> RAID seems to have LVM stuff already there.   But it doesn't appear to 
> be usable.
> 
> -- hendrik

Still no joy.  Carefully followed the instructions on that web page, and 
got nowhere -- at least, the same stuff happened as last time -- 
inability to reread the partition table, and later, inability to deal 
with logical volumes.  Yes, I included the installer modules for lvm 
and raid, and for good measure, also the one for mouse usage, and evms 
(just in case).

But I continued the install anyway.  I was installing the system to 
/dev/hda1, which was not part of a RAID or a logical volume.  It went
nicely.  I had it install grub, uneventfully.  When I had it install 
lilo to boot from /dev/fd0, however, it complained that that wasn't a 
hard disk.  Really, I think I should have been able to boot from a 
floppy.  So I had it install lilo booting from /dev/hda, which created 
an appropriate configuration file (which I would be able to edit later), 
and then had it install grub to /dev/hda, which worked.  I've discovered 
in the past it's good to have as many different ways to boot as 
possible.
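
(For reference, roughly the same thing can be done later from a root
shell on the installed system -- this is just a sketch using the device
names above, not necessarily what the installer itself runs:

    # grub (legacy): install to the MBR of the first disk
    grub-install /dev/hda

    # lilo: write the boot sector described by /etc/lilo.conf,
    # overriding the boot device so it goes onto the floppy
    lilo -b /dev/fd0
)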

The reboot into the newly installed system went smoothly.  But then came 
the time for package-selection.  Even choosing just a minimal system, it 
complained about:

* E: Unable to correct problems, you have held broken packages.
* E: Unable to correct dependencies, some packages cannot be installed
* E: Unable to resolve some dependencies
*
* Some packages had unmet dependencies.  This may mean that you have
* requested an impossible situation or if you are using the unstable
* distribution that some requested packages have not yet been created
* or moved out of Incoming.

* The following packages have unmet dependencies:
*   python-minimal: Depends: python2.3 (>= 2.3.5-1) which is a virtual package
*   reportbug: Depends: python2.3 which is a virtual package
*   locales: Conflicts: base-config but 2.76 is to be installed
*   python: Depends: python2.3 (>= 2.3.5-1) which is a virtual package
* tasksel: aptitude failed
* Press Enter to continue

I had specified manual package selection, but never got the opportunity 
to manually select any packages.  I got past this stage by removing the 
asterisk from beside "minimal system", thereby asking it to install 
*nothing*, not even a minimal system.
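
(If I'd had a shell at that point, something like the following might
have shown what was actually wrong -- just a sketch, using the package
names from the messages above:

    # which versions apt thinks are available or installable
    apt-cache policy python python2.3 python-minimal reportbug locales

    # what provides / depends on the troublesome package
    apt-cache showpkg python2.3

    # let apt try to straighten out the broken dependencies itself
    apt-get -f install
)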

Once I actually got to log in as root, I edited /etc/lilo.conf and 
got the system to boot properly from a floppy.  Now I could boot from 
floppy as well as from /dev/hda.
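
(Roughly, the relevant part of /etc/lilo.conf looks something like the
following; the paths and label here are only illustrative:

    boot=/dev/fd0        # write the boot sector onto the floppy
    root=/dev/hda1       # root filesystem of the installed system
    image=/vmlinuz
        label=Linux
        initrd=/initrd.img   # if a stock kernel with an initrd is used
        read-only

followed by running "lilo" to actually write the boot sector.)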

I then ran aptitude, intending to try to fix things up.  Now, however, 
the complaints were completely different.

I did a U, to upgrade to current versions of everything, and then 'g', 
and in the list of proposed actions, was told that aptitude wanted to 
delete
  base-config
  libapt-pkg-perl
  localization-config

Investigating, I was told
  base-config is empty, no longer used, can be deleted without harm
  libapt-pkg-perl was unavailable
  localization-config depended on libapt-pkg-perl, so it was broken.
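
(To see what is pulling on those packages, or to stop aptitude removing
them while things settle down, something along these lines should work --
again only a sketch:

    # what depends on the package aptitude wants to drop
    apt-cache rdepends libapt-pkg-perl

    # keep a package at its current version for now
    aptitude hold localization-config
    # or, equivalently, via dpkg
    echo localization-config hold | dpkg --set-selections
)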

Clearly a few things need to be improved before etch goes stable....

In a few days, I will try again, this time setting up a smaller RAID 
partition for immediate use, leaving the bulk of the hard disk free for 
later installation of RAID and LVM when I figure out what's been going 
so wrong ... or else learn how to set up a RAID drive under LVM 
in such a way that everything is recognised upon reboot.
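
Pieced together from the various howtos, the by-hand sequence I believe
should do that looks roughly like this (a sketch only -- the volume group
name and sizes are just examples, the devices are mine):

    # build the mirror from the two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3

    # record the array so it gets assembled at boot, before LVM scans
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u    # assuming an initramfs-tools initrd

    # put LVM on the array itself, not on the underlying partitions
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 10G -n data vg0

It may also help to tell LVM to ignore /dev/hda3 and /dev/hdc3 outright
(the "filter" setting in /etc/lvm/lvm.conf), so it cannot pick up stale
metadata from the raw partitions.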

I suspect I had trouble with RAID-nonrecognition in the past when I was 
still trying to install sarge...  That time I set up the RAID1 and 
LVM by hand.  I suspect that on reboot the RAID was not recognised, but 
the (now duplicate) LVM volumes were -- at least the first one was, and the 
second was considered invalid because of name duplications.  When I 
tried to reconfigure LVM, only one copy was, presumably, changed, 
setting the stage for trouble ever since.  Perhaps I should do a 
destructive bad-block check on /dev/hda3 and /dev/hdc3 just to clear 
everything out properly.  Would that do the trick, or is there still 
information elsewhere I need to clean out to be able to start over?
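
(My understanding -- and I'd welcome confirmation -- is that the old
metadata can also be cleared explicitly, which should be quicker than a
full bad-block pass.  A sketch, for partitions whose contents I've
already written off:

    # stop the array if it happens to be running
    mdadm --stop /dev/md0

    # erase the md superblock; it lives near the *end* of each partition,
    # so zeroing the start alone would not remove it
    mdadm --zero-superblock /dev/hda3
    mdadm --zero-superblock /dev/hdc3

    # erase the LVM label and metadata area at the start of each partition
    dd if=/dev/zero of=/dev/hda3 bs=512 count=2048
    dd if=/dev/zero of=/dev/hdc3 bs=512 count=2048
)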

By the way, what is the RAID partition I built on /dev/hda3 and 
/dev/hdc3 likely to be called?
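
(My guess is /dev/md0 -- or /dev/md/0 in the devfs-style naming the
installer message above used.  On a running system it should show up in:

    cat /proc/mdstat
    mdadm --detail --scan
)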

-- hendrik


