
Problem Replacing LVM on RAID1 Disk



Hi All,

I have a problem replacing a failed disk in a RAID1 array that holds an LVM volume. In the past, when a disk failed, I dropped the offending disk from the array, replaced the disk, booted, rebuilt the filesystem on the new disk and re-synced the array. I've successfully used this method about four times. However, I recently upgraded from Etch to Lenny, and this week I got a degraded-array warning: a disk is failing.
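For reference, the procedure I've followed in the past looks roughly like this (a sketch only; the exact device names are from my setup below, with /dev/sdb as the failing disk, and may differ on other systems):

```shell
# Mark the failing disk's partitions as failed and remove them from
# both arrays (md0 holds /boot, md1 is the LVM physical volume):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# Power down, swap in the new disk, boot, then copy the partition
# table from the surviving disk to the replacement:
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new partitions back so the arrays re-sync:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# Watch the rebuild progress:
cat /proc/mdstat
```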

So I duly repeated those steps, but on booting with the new, unformatted disk installed, I get the following error:

"ALERT! /dev/mapper/vg00-lv01 does not exist ...
...Dropping to a shell"

For the moment, I've had to put the old, failing disk back in just to be able to boot and run the system. Has anyone seen this problem before? Does anyone know of a solution?

I've included the relevant disk/raid configuration at the end of this email. The device /dev/sdb is the one that is failing.

Thanks very much,


Matt

--
# cat /etc/fstab 
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/mapper/vg00-lv01 /               ext3    defaults,errors=remount-ro 0       1
/dev/md0        /boot           ext3    defaults        0       2
/dev/mapper/vg00-lv00 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lv01
                      442G   80G  340G  20% /
tmpfs                 1.7G     0  1.7G   0% /lib/init/rw
udev                   10M  720K  9.3M   8% /dev
tmpfs                 1.7G     0  1.7G   0% /dev/shm
/dev/md0              942M   46M  849M   6% /boot

# cat /proc/mdstat 
Personalities : [raid1] 
md1 : active raid1 sda2[0]
      478512000 blocks [2/1] [U_]
      
md0 : active raid1 sda1[0]
      979840 blocks [2/1] [U_]
      
# lvdisplay /dev/mapper/vg00-lv01
  --- Logical volume ---
  LV Name                /dev/vg00/lv01
  VG Name                vg00
  LV UUID                tvzjKH-hSpH-sDYk-YlWY-osUY-VxrA-ka2UCW
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                448.34 GB
  Current LE             114776
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

The next one is swap space:

# lvdisplay /dev/mapper/vg00-lv00
  --- Logical volume ---
  LV Name                /dev/vg00/lv00
  VG Name                vg00
  LV UUID                aosfiq-oBUr-70Xn-Y5OJ-lsSV-i59V-nTXJG6
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                8.00 GB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

# fdisk -l

Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         122      979933+  fd  Linux raid autodetect
/dev/sda2             123       59694   478512090   fd  Linux raid autodetect

Disk /dev/sdb: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         122      979933+  fd  Linux raid autodetect
/dev/sdb2             123       59694   478512090   fd  Linux raid autodetect


