
Bug#519760: live-initramfs: persistent mis-recognizes RAID0 partition as ext3 fs



Package: live-initramfs
Version: 1.156.1-1

My setup: Lenny, 4 SATA drives; the important data on several RAID1 arrays on /dev/sd{a,b}, and /tmp on a RAID0 array (for speed) made of two partitions on /dev/sd{c,d}. All RAID partitions have type 0xfd (Linux RAID autodetect). A card reader took /dev/sd{e,f,g,h}, and the flash drive is /dev/sdi.

I set up a 2 GB USB flash drive with DebianLive and tested it; it worked fine. Then I decided to try persistence: I made a second partition (ext2) and labeled it appropriately (live-rw, home-rw, whatever). Rebooting with "live persistent" fails during /scripts/live-realpremount, advising me to file a bug report, so here it is.

IMHO, the trouble is that /scripts/live wrongly identifies one of my RAID0 member partitions as an ext3 partition, then fails to mount it and gives up (alphabetically, sdc and sdd come before my flash drive, which under DebianLive shows up as sde). After a peek inside the boot scripts I noticed the following: fstype recognizes the RAID1 partitions as ext3 (which is fine, since each is a mirrored copy of a complete filesystem), but it also mis-identifies the first partition of the RAID0 array as ext3 (see inline attachments). As the first member of a RAID0 array, that partition does carry an ext3 signature, but it is not a complete filesystem by itself, only half of one (in my case).
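
For illustration, here is why both signatures can coexist on the same member (a rough sketch, e.g. run from the installed system; offsets per the ext2/3 and md 0.90 on-disk formats as I understand them):

# ext2/3 keeps its magic 0xEF53 at byte offset 1080 (superblock at
# offset 1024, s_magic at +56); for the first stripe of a RAID0 this
# lands inside the first member device itself.
dd if=/dev/sdc4 bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
# expected: 53 ef (the ext2/3 magic, little-endian)
# The md 0.90 superblock sits in the last 64 KiB of the member, which
# fstype apparently never reads, while vol_id does:
mdadm --examine /dev/sdc4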

My suggestions: 1) have the auto-mount script skip partitions that udev identifies as RAID members (linux_raid_member) while looking for a filesystem to keep the persistent data on, or 2) do not abort on the first mount failure, but only when no suitable partition is found at all.
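
A rough sketch of both ideas (function and variable names below are mine, not the actual script's):

is_raid_member() {
    # udev's vol_id reports ID_FS_USAGE=raid for md member partitions
    /lib/udev/vol_id "$1" 2>/dev/null | grep -q '^ID_FS_USAGE=raid'
}

found=""
for dev in /dev/sd*; do
    # suggestion 1: half of an array is not a mountable filesystem
    is_raid_member "$dev" && continue
    # probe_and_mount stands in for the script's existing
    # fstype/label check and mount attempt
    if probe_and_mount "$dev"; then
        found="$dev"
        break
    fi
    # suggestion 2: a failed candidate just means "try the next one"
done
# give up (or fall back to non-persistent) only after the whole scan
[ -n "$found" ] || panic "no persistent medium found"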

Inline attachments:

# fdisk -l /dev/sdc # sdc4 is the 1st partition in the RAID0 array
...
/dev/sdc4 1 609 4891761 fd Linux raid autodetect

# fdisk -l /dev/sdd # sdd1 is the 2nd partition in the RAID0 array
/dev/sdd1 1 609 4891761 fd Linux raid autodetect
...

# fstype /dev/sdc4
FSTYPE=ext3
FSSIZE=###

# fstype /dev/sdd1
FSTYPE=unknown
FSSIZE=0

# vol_id /dev/sdc4
ID_FS_USAGE=raid
ID_FS_TYPE=linux_raid_member
ID_FS_VERSION=0.90.0
ID_FS_UUID=<unimportant-guid-here>
ID_FS_UUID_ENC=<same-guid-as-above>
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=

# vol_id /dev/sdd1
ID_FS_USAGE=raid
ID_FS_TYPE=linux_raid_member
ID_FS_VERSION=0.90.0
ID_FS_UUID=<same-guid-as-above>
ID_FS_UUID_ENC=<same-guid-as-above>
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=

# cat /proc/mdstat # lh_build included my RAID conf in the initrd?!?
Personalities : [raid0] [raid1]
md5 : active raid0 sdc4[0] sdd1[1]
      9783296 blocks 64k chunks
...
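
Incidentally, with md5 already assembled it is easy to see that the ext3 signature on sdc4 really belongs to the filesystem living on md5, seen through the first 64k chunk (with a 0.90 superblock the member's data starts at offset 0, so the first chunk of md5 and of sdc4 should be byte-identical; a sketch):

dd if=/dev/md5  bs=64k count=1 2>/dev/null | md5sum
dd if=/dev/sdc4 bs=64k count=1 2>/dev/null | md5sum
# the two sums should match; the assembled array /dev/md5, not its
# members, is the meaningful device to probe with fstype/vol_id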


