
Bug#446323: marked as done (mdadm: recovery in infinite loop)



Your message dated Mon, 9 Feb 2009 20:22:19 +0100
with message-id <20090209192219.GB5811@piper.oerlikon.madduck.net>
and subject line Re: Bug#446323: mdadm: recovery in infinite loop
has caused the Debian Bug report #446323,
regarding mdadm: recovery in infinite loop
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
446323: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=446323
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: mdadm
Version: 2.5.6-9
Severity: normal

Hello,
I am trying to set up RAID1 on my PC.
I have two identical 270+ GB drives, each with 3 partitions:
 30 GB       hdb1 -> md0
250 GB       hdb2 -> md2
  4 GB swap  hdb5 -> md4

Initially my RAID had only one drive. I added the second one with
mdadm --add /dev/md2 /dev/hda1, then 2, then 4.
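
To be explicit, that means one --add per array; the exact device
pairing below is assumed from the partition layout above:

  mdadm --add /dev/md0 /dev/hda1
  mdadm --add /dev/md2 /dev/hda2
  mdadm --add /dev/md4 /dev/hda5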

Recovery then started for the arrays. It finished for md0 and md4, but
md2 is stuck in an infinite loop: it gets to about 15% and then starts
over.


hplinux:/home/lucas# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
md4 : active raid1 hda5[0] hdb5[1]
      1951744 blocks [2/2] [UU]

md2 : active raid1 hdb2[1] hda2[2]
      276438400 blocks [2/1] [_U]
      [>....................]  recovery =  2.8% (7898752/276438400) finish=2448.2min speed=1826K/sec

md0 : active raid1 hda1[0] hdb1[1]
      34178176 blocks [2/2] [UU]

unused devices: <none>


Oct 11 19:26:50 hplinux kernel: md: md2: sync done.
Oct 11 19:26:51 hplinux kernel: md: syncing RAID array md2
Oct 11 19:26:51 hplinux kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Oct 11 19:26:51 hplinux kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Oct 11 19:26:51 hplinux kernel: md: using 128k window, over a total of 276438400 blocks.
Oct 11 19:32:21 hplinux kernel: md: md2: sync done.
Oct 11 19:32:21 hplinux mdadm: RebuildFinished event detected on md device /dev/md2
Oct 11 19:32:21 hplinux kernel: md: syncing RAID array md2
Oct 11 19:32:21 hplinux kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
Oct 11 19:32:21 hplinux kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
Oct 11 19:32:21 hplinux kernel: md: using 128k window, over a total of 276438400 blocks.
Oct 11 19:32:21 hplinux mdadm: RebuildStarted event detected on md device /dev/md2
Oct 11 19:36:16 hplinux kernel: md: md2: sync done.
Oct 11 19:36:17 hplinux kernel: md: syncing RAID array md2

Why is this happening? I have checked both partitions for bad blocks
with e2fsck.
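
For reference, a read-only surface scan of a component partition can
also be done with badblocks directly; this is only an illustration,
not necessarily the exact check that was run here:

  badblocks -sv /dev/hda2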

Is there something I am missing?
The loop just goes on and on. It has been running for 72 hours, and
I don't know how much longer the drives will survive this 20 MB/s of
reading and writing.
Thanks,
lucas

-- Package-specific info:
--- mount output
/dev/md0 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /files type ext3 (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

--- mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Fri, 05 Oct 2007 21:54:54 -0500
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=8f8b90a7:642e9c4d:b1274a75:6a339511
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=6934dbf9:dfe9d7d1:b1274a75:6a339511
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=967e922e:2688de6a:b1274a75:6a339511

--- /proc/mdstat:
Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] 
md4 : active raid1 hda5[0] hdb5[1]
      1951744 blocks [2/2] [UU]
      
md2 : active raid1 hdb2[1] hda2[2]
      276438400 blocks [2/1] [_U]
      [>....................]  recovery =  2.4% (6757632/276438400) finish=2488.6min speed=1805K/sec
      
md0 : active raid1 hda1[0] hdb1[1]
      34178176 blocks [2/2] [UU]
      
unused devices: <none>

--- /proc/partitions:
major minor  #blocks  name

   3     0  312571224 hda
   3     1   34178256 hda1
   3     2  276438487 hda2
   3     3          1 hda3
   3     5    1951866 hda5
   3    64  312571224 hdb
   3    65   34178256 hdb1
   3    66  276438487 hdb2
   3    67          1 hdb3
   3    69    1951866 hdb5
   9     0   34178176 md0
   9     2  276438400 md2
   9     4    1951744 md4

--- initrd.img-2.6.18-4-486:
21377 blocks
etc/mdadm
etc/mdadm/mdadm.conf
scripts/local-top/mdadm
sbin/mdadm
lib/modules/2.6.18-4-486/kernel/drivers/md/multipath.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/raid0.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/raid1.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/raid10.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/md-mod.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/linear.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/raid456.ko
lib/modules/2.6.18-4-486/kernel/drivers/md/xor.ko

--- /proc/modules:
dm_snapshot 15644 0 - Live 0xe0c59000
dm_mirror 18000 0 - Live 0xe0c48000
dm_mod 48952 2 dm_snapshot,dm_mirror, Live 0xe0c63000
raid0 7808 0 - Live 0xe082e000
raid456 113168 0 - Live 0xe08c0000
xor 14344 1 raid456, Live 0xe0857000
raid1 19968 3 - Live 0xe0851000
md_mod 67860 5 raid0,raid456,raid1, Live 0xe0890000

--- volume detail:

--- /proc/cmdline
root=/dev/md0 ro 

--- grub:
kernel		/boot/vmlinuz-2.6.18-4-486 root=/dev/md0 ro 
kernel		/boot/vmlinuz-2.6.18-4-486 root=/dev/hda1 ro 
kernel		/boot/vmlinuz-2.6.18-4-486 root=/dev/hda1 ro single


-- System Information:
Debian Release: 4.0
  APT prefers stable
  APT policy: (500, 'stable')
Architecture: i386 (i686)
Shell:  /bin/sh linked to /bin/bash
Kernel: Linux 2.6.18-4-486
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)

Versions of packages mdadm depends on:
ii  debconf [debconf-2.0]       1.5.11       Debian configuration management sy
ii  libc6                       2.3.6.ds1-13 GNU C Library: Shared libraries
ii  lsb-base                    3.1-23.1     Linux Standard Base 3.1 init scrip
ii  makedev                     2.3.1-83     creates device files in /dev

Versions of packages mdadm recommends:
ii  module-init-tools             3.3-pre4-2 tools for managing Linux kernel mo
ii  postfix [mail-transport-agent 2.3.8-2+b1 A high-performance mail transport 

-- debconf information:
* mdadm/autostart: true
  mdadm/mail_to: root
  mdadm/initrdstart_msg_errmd:
* mdadm/initrdstart: all
  mdadm/initrdstart_msg_errconf:
  mdadm/initrdstart_notinconf: false
  mdadm/initrdstart_msg_errexist:
  mdadm/initrdstart_msg_intro:
  mdadm/autocheck: true
  mdadm/initrdstart_msg_errblock:
  mdadm/start_daemon: true



--- End Message ---
--- Begin Message ---
also sprach Lukasz Szybalski <szybalski@gmail.com> [2009.02.09.0213 +0100]:
> Since it seems that hdc has some issues, I know I can copy stuff
> manually from hdc2 to hda2:
> 
> cp -dpRx /files /mnt/hda2
> 
> How do I remove hdc2 from md2 (md2 will be empty then), then add hda2
> as the primary/first drive for the md2 partition, and then sync hdc2
> to it?

You don't need to copy any files. To remove hdc2:

  mdadm --set-faulty /dev/md2 /dev/hdc2
  mdadm --remove /dev/md2 /dev/hdc2

then swap /dev/hdc for a new drive, and re-add it:

  mdadm --add /dev/md2 /dev/hdc2

Warning: this will erase all data on /dev/hdc2.
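
The resulting rebuild can then be monitored with the usual commands,
for example:

  watch cat /proc/mdstat
  mdadm --detail /dev/md2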

Note that in your original mail, you did not have any /dev/hdc, so
I am not sure the above is correct. However, this seems to be
a support case, so please contact debian-user@lists.debian.org
instead of this bug report.

> At this point I think we can close this bug [...]

Done.

-- 
 .''`.   martin f. krafft <madduck@d.o>      Related projects:
: :'  :  proud Debian developer               http://debiansystem.info
`. `'`   http://people.debian.org/~madduck    http://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
now I lay me back to sleep.
the speaker's dull; the subject's deep.
if he should stop before I wake,
give me a nudge for goodness' sake.

Attachment: digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)


--- End Message ---
