
Re: Disk /dev/md6 doesn't contain valid partition table



Francesco,

On Wed 29 October 2008 06:16, Francesco Pietra wrote:
> On Wed, Oct 29, 2008 at 11:40 AM, Alex Samad <alex@samad.com.au> wrote:
> > On Wed, Oct 29, 2008 at 08:24:55AM +0100, Francesco Pietra wrote:
> >> > On Wed, Oct 29, 2008 at 7:06 AM, Douglas A. Tutty <dtutty@vianet.ca> wrote:
> >> > On Wed, Oct 29, 2008 at 05:44:31AM +0100, Francesco Pietra wrote:
> >
> > [snip]
> >
> >> "cat /proc/mdstat:
> >> Personalities : [raid1]
> >> md6 : active raid1 sda8[0] sdb8[1]
> >>       102341952 blocks [2/2] [UU]
> >>
> >> md5 : active raid1 sda7[0] sdb7[1]
> >>       1951744 blocks [2/2] [UU]
> >>
> >> md4 : active raid1 sda6[0] sdb6[1]
> >>       2931712 blocks [2/2] [UU]
> >>
> >> md3 : active raid1 sda5[0] sdb5[1]
> >>       14651136 blocks [2/2] [UU]
> >>
> >> md1 : active(auto-read-only) raid1 sda2[0] sdb2[1]
> >>       6835584 blocks [2/2] [UU]
> >>
> >> md0 : active raid1 sda1[0] sdb1[1]
> >>       2931712 blocks [2/2] [UU]
> >>
> >> md2 : active raid1 sda3[0] sdb3[1]
> >>       14651200 blocks [2/2] [UU]
> >
> > This is off topic, but just a comment: instead of having lots of md's,
> > it might be better to have one big md raid1 and then sit LVM on top of it.
>
> I am no system maintainer. I set up the raid1 according to the Debian
> installation notes, I believe. At any event, this is the present
> situation. I must confess that the raid1 becomes dirty on a power
> failure, although I expected it to keep working after a single disk
> failure (as happened to me once).
>
> Well, what about the following recipe that I found on the internet?
> Could that be applied in my case as described? Thanks, francesco:
>
> 1. shutdown all processes and databases using the array. lsof /dev/md0
> is your friend.
> 2. Full backup, in addition to the usual nightly ones.
> 3. Stop the array mdadm -S /dev/md0
> 4. Add the drive back into the array. In this case,
> mdadm /dev/md0 --add /dev/sdb1
> 5. Sit back and watch progress, watch -n 1 cat /proc/mdstat
> 6. Restart, dmesg says
> raid1: device sdc1 operational as mirror 1
> raid1: device sdb1 operational as mirror 0
> raid1: raid set md0 active with 2 out of 2 mirrors
> md: ... autorun DONE.
> ============================================
>
> > [snip]
> >
> >> Thanks
> >> francesco
> >>
> >> > If so, you can write it back.
> >> >
> >> > Doug.

I'm not clear as to exactly what your problem is, but here are some thoughts.

First, Knoppix is your friend. In a case like this, where you are not sure of 
your file system integrity, I would boot from a Knoppix CD and then examine & 
repair each file system as necessary.
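Something along these lines, once Knoppix is up (the md names below are just
the ones from your mdstat; adjust them, and only fsck file systems that are
not mounted):

      mdadm --assemble --scan    # assemble the existing raid1 arrays from the live CD
      cat /proc/mdstat           # confirm every array comes up [UU]
      fsck -f /dev/md2           # then check each file system in turn, unmounted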

WRT the supposedly missing partition tables, I don't think anything is 
actually wrong: the md devices never had partition tables in the first 
place. The partition tables for your two physical drives are in place. When 
I run `fdisk -l` here, I get similar results to yours (I have two hard 
drives with two raid-1 arrays defined): fdisk complains about the md devices 
because they hold a file system directly, not a partition table.
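If you want to reassure yourself that the arrays themselves are healthy,
ignore fdisk and ask mdadm directly. A quick check, using md6 from your
mdstat as the example:

      mdadm --detail /dev/md6      # should show both members active and the array clean
      mdadm --examine /dev/sda8    # inspect the raid superblock on one member partition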

WRT /opt, there is no entry in /etc/fstab for /opt, so I can only conclude 
that it is not mounted on a separate file system, but is part of the root 
file system.
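
That is easy to confirm; df will show which file system /opt actually lives
on:

      df -h /opt             # should report the same device as /
      grep opt /etc/fstab    # no line for /opt means no separate mount is configured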

HTH!

cmr

PS	I agree with Alex regarding LVM2. I have only two partitions defined on my 
hard drives, one each for two md arrays. The first md device is for my boot 
partition. The second for everything else. The everything else, then, is 
managed by LVM2 with logical volumes for each separate file system. LVM2 is a 
little intimidating but once up & running is much easier to manage.
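
For the record, the basic sequence is short; the device names and sizes
below are only examples, not a recommendation for your machine:

      pvcreate /dev/md1                # turn the big raid1 device into an LVM physical volume
      vgcreate vg0 /dev/md1            # create a volume group on it
      lvcreate -L 10G -n home vg0      # carve out a logical volume for /home
      mkfs.ext3 /dev/vg0/home          # put a file system on it
      lvextend -L +5G /dev/vg0/home    # growing later is one command (plus resize2fs)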

-- 
Debian 'Etch' - Registered Linux User #241964
--------
"More laws, less justice." -- Marcus Tullius Ciceroca, 42 BC

