LVM, mdadm and /etc/fstab
Currently running Debian sarge with a 2.6.12.2 kernel.
My boot disk is set up as RAID 1, mirrored across the two internal disks
(md0).
I also have an external JBOD rack with 8 disks, which I have set up as four
RAID 1 pairs (md1 - md4).
The RAID 1 md devices in this external JBOD are combined into one large LVM
logical volume (vg01/lvol1), so that I can stripe across all the disks for
I/O responsiveness and also get the size up to something usable.
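For reference, the volume was created along these lines (commands
reconstructed from the vgdisplay output below; the stripe size is an
assumption, and the extent count comes from Total PE):

```shell
# Make each RAID 1 mirror an LVM physical volume
pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4

# One volume group spanning all four mirrors
vgcreate vg01 /dev/md1 /dev/md2 /dev/md3 /dev/md4

# Stripe the logical volume across all four PVs (-i 4);
# the 64 KB stripe size (-I 64) is an assumption on my part
lvcreate -i 4 -I 64 -l 17356 -n lvol1 vg01

mkfs.ext3 /dev/vg01/lvol1
```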
Everything works fine until I reboot. If /dev/vg01/lvol1 is listed in
/etc/fstab, the boot process halts when it reaches the vg01 entry: it
complains that vg01/lvol1 is not available and drops me into the interactive
boot session. I then log in as root, run vgscan, and exit, and the boot
completes successfully.
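The manual recovery at the maintenance prompt is just:

```shell
# At the interactive boot shell: rescan for volume groups,
# activate vg01 (vgchange may be redundant if vgscan already
# activates it on this version), then resume the boot.
vgscan
vgchange -ay vg01
exit
```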
I have moved the /etc/init.d/lvm symlinks to the rc1.d level, in the hope
that vgscan would run before /etc/fstab is processed, but I get the same
result.
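The reordering I tried was done with update-rc.d, roughly as below (script
name "lvm" assumed; on some sarge installs it is "lvm2", and the S25
sequence number is a guess aimed at landing before checkfs/mountall in
rcS.d):

```shell
# Drop the existing runlevel links, then recreate them early in
# the boot sequence so the VGs are active before filesystems mount.
update-rc.d -f lvm remove
update-rc.d lvm start 25 S .
```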
If I remove the vg01/lvol1 entry from /etc/fstab, the system boots normally,
except of course that lvol1 is not mounted; I then mount it manually.
I am not quite certain where in the boot process to activate the VGs so that
the /etc/fstab entries are processed normally, without errors.
Any ideas???
-jpg
Here are the contents of my pertinent files.
/etc/fstab:
============================================================================
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
#/dev/sda1 / ext3 errors=remount-ro 0 1
/dev/md0 / ext3 defaults 0 0
/dev/md5 swap swap defaults,pri=1 0 0
/dev/md6 swap swap defaults,pri=1 0 0
proc /proc proc defaults 0 0
/dev/fd0 /floppy auto user,noauto 0 0
/dev/cdrom /cdrom iso9660 ro,user,noauto 0 0
/dev/vg01/lvol1 /striped_lvol ext3 rw,errors=remount-ro 0 2
none /proc/bus/usb usbfs defaults,noatime 0 0
============================================================================
/proc/mdstat
============================================================================
Personalities : [raid1]
md1 : active raid1 sde1[1] sda1[0]
17775808 blocks [2/2] [UU]
md2 : active raid1 sdf1[1] sdb1[0]
17775808 blocks [2/2] [UU]
md3 : active raid1 sdg1[1] sdc1[0]
17775808 blocks [2/2] [UU]
md4 : active raid1 sdh1[1] sdd1[0]
17775808 blocks [2/2] [UU]
md5 : active raid1 sdj2[1] sdi2[0]
2096384 blocks [2/2] [UU]
md6 : active raid1 sdj3[1] sdi3[0]
2096384 blocks [2/2] [UU]
md0 : active raid1 sdj1[1] sdi1[0]
31366784 blocks [2/2] [UU]
unused devices: <none>
============================================================================
/sbin/vgdisplay -v vg01
============================================================================
--- Volume group ---
VG Name vg01
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 2
Cur LV 1
Open LV 1
Max PV 8
Cur PV 4
Act PV 4
VG Size 67.80 GB
PE Size 4.00 MB
Total PE 17356
Alloc PE / Size 17356 / 67.80 GB
Free PE / Size 0 / 0
VG UUID XMzyCe-vNZI-CLsf-LYdB-sZ5y-Qj1J-grpGbh
--- Logical volume ---
LV Name /dev/vg01/lvol1
VG Name vg01
LV UUID wZ369I-1K0N-1CCW-J9Pi-LjNg-x9I9-oFfzQ2
LV Write Access read/write
LV Status available
# open 1
LV Size 67.80 GB
Current LE 17356
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Physical volumes ---
PV Name /dev/md1
PV UUID nIzZaf-b7lO-QT7n-IYyv-xjyf-P3dZ-qurTXj
PV Status allocatable
Total PE / Free PE 4339 / 0
PV Name /dev/md2
PV UUID 0OBdsG-1Km5-J4Hv-YGQG-Fh2R-ZrbT-lReL2P
PV Status allocatable
Total PE / Free PE 4339 / 0
PV Name /dev/md3
PV UUID yBWiyk-gdBw-HcDP-ZQWw-3wQU-0BUP-XHEKMC
PV Status allocatable
Total PE / Free PE 4339 / 0
PV Name /dev/md4
PV UUID 0AJ930-TZ2X-51x0-7uVQ-hxpa-XLAY-7Nljz4
PV Status allocatable
Total PE / Free PE 4339 / 0
============================================================================