
Re: problem installing Lenny


Bernard wrote:
> Hi there,
> Since I was unable to recompile my old kernel 2.6.20 under Debian Sarge,
> I decided to install Lenny. Unable to find a way to just upgrade (there
> has been Etch in between), 

Why not just upgrade to etch (aka 'oldstable'), followed by an upgrade
to lenny?
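
A minimal sketch of that two-step path, assuming your sources.list uses a plain Debian mirror (the hostname here is illustrative; keep whatever mirror you already have):

```shell
# sources.list entry for the intermediate step (hypothetical mirror):
#   deb http://ftp.debian.org/debian/ etch main
# Then:
#   apt-get update
#   apt-get dist-upgrade
# After rebooting into etch, repeat with 'etch' replaced by 'lenny'.
```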

>                            I just saved directories and installed from
> the iso image 
> debian-502a-i386-netinst.iso
> I choose somewhat automated install. I left partitions like they were
> under Sarge, or, if I did change something at that point, I can't
> exactly remember what. I must say that the partitioning menu seemed very
> confusing, but I admit that I knew very little about my raid1 system.

The installation guide [1] contains a very informative section on partitioning.

[1] http://www.debian.org/releases/stable/i386/ch06s03.html.en#di-partition

> In any case, the Lenny install that I now get shows defaults.
> mdadm monitoring keeps sending mails at each boot:
> 'A DegradedArray event has been detected on md device /dev/md0
> P.S. The /proc/mdstat file currently contains the following :
> Personalities : [raid1]
> md1 : active raid1 sda2[0] sdb2[1]
>       67874550 blocks [2/2] [UU]
> md0 : active raid1 sda1[0]
>       497856 blocks [2/1] [U_]
> unused devices: <none>
> '

I guess that you just have to add sdb1 to your raid md0. Check the
output of 'mdadm --detail /dev/md0'.
From there, I'd read 'man mdadm' and either try to autoassemble with
'mdadm --assemble --scan' or add the missing member by hand ('--manage -a').
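
If sdb1 is indeed the missing half (check the --detail output first), re-adding it should kick off a resync. A sketch, guarded so it only acts when the devices actually exist; /dev/sdb1 is an assumption taken from your mdstat output, so verify it on your system before running anything:

```shell
#!/bin/sh
# Re-add the missing mirror half to md0.
# /dev/sdb1 is assumed from the quoted mdstat output -- verify first with:
#   mdadm --detail /dev/md0
status="skipped (md0 or sdb1 not present on this machine)"
if [ -b /dev/md0 ] && [ -b /dev/sdb1 ]; then
    mdadm /dev/md0 --add /dev/sdb1   # kernel starts resyncing immediately
    status="re-added; watch 'cat /proc/mdstat' for recovery progress"
fi
echo "$status"
```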

Take care. I assume that you have good backups.

> a 'df' does confirm the diagnostic above: if md1 is OK with two
> mirrored partitions sda2 and sdb2, md0 only has sda1, while there was
> sda1 and sdb1 on my former Sarge system.
> Since I detected this anomaly, I have re-installed once more, but I must
> have missed something on the partitioning menu, and I have not found
> what I should change there.

You did not provide enough information to be certain, but I *guess* that
you might have accidentally started the raid with one of the partitions
missing at one point in the past. That led to both partitions not being
in sync any more. Now mdadm can't continue to use both partitions at
once, unless you tell it to sync one of the drives to the state of the
other one.
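
You can see that state directly in the mdstat output you quoted: '[2/1] [U_]' means only one of two members is active. A quick sketch for spotting it (the sample text is inlined for illustration; on the real system you would read /proc/mdstat itself):

```shell
# Count degraded raid1 arrays in mdstat-style output.
# Sample inlined for illustration; on a real box use: cat /proc/mdstat
mdstat='Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      67874550 blocks [2/2] [UU]
md0 : active raid1 sda1[0]
      497856 blocks [2/1] [U_]'
# "[U_]" or "[_U]" marks an array with one mirror half missing
degraded=$(printf '%s\n' "$mdstat" | grep -c '\[U_\]\|\[_U\]')
echo "degraded arrays: $degraded"
```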

Good luck,

