Re: Need some help with MD raid1 or h/w prob
On Fri, 2005-06-17 at 10:19 -0400, Patrick Flaherty wrote:
> Look you grump, we don't get paid. You will not get 12-hour turn-around
> time on questions unless they are extremely simple, or extremely well
> described.
I wasn't trying to be nasty in my last post; otherwise I would have used
grumpy language (which I don't think I did). I just stated the facts as
I saw them. I only seemed impatient because of the fright I had with my
machine. Time means something different when you're in a panic.
>
> anyways,
> about your raid problems:
> how did you originaly create them (from fresh install, or from
> booting a cd)?
> have the raid devices ever worked correctly?
> A lot of times, if you've specified your kernel to boot with
> vmlinuz root=/dev/sda2
> instead of
> vmlinuz root=/dev/md1
OK, I'm not good with RAID, hence the vague post to start with.
I configured it (incorrectly, I'm sure) using the amd64 netinst CD. I say
incorrectly because I specified 1 active partition & 1 spare partition
per raid1 partition, when I should have specified 2 active partitions and
0 spare partitions. So I don't actually have a raid1 setup, just 2
separate disks not doing much good (as far as raid is concerned, anyway).
The setup I was trying to configure at install was:
/boot raid1 on /dev/md0 using /dev/sda1 & /dev/sdb1
/ raid1 on /dev/md1 using /dev/sda2 & /dev/sdb2
/home LVM (HOMELogVol) on raid1 on /dev/md2 using /dev/sda3 & /dev/sdb3
SWAP partition (I forget which partition I tried to install it on, but
the current config is mentioned below)
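From the HOWTOs I've been reading, creating that layout with 2 active
members per array would look something like the following. This is
untested and just for reference -- mdadm --create destroys any existing
data on the partitions, so it's what the install *should* have done, not
something to run on a live system:

```shell
# Mirror pairs, 2 active devices and no spares (device names from my
# setup above).  WARNING: --create wipes the member partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
```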
I have 2 physical disks: /dev/sda, /dev/sdb (both SATA).
According to mdadm, both /dev/md0 and /dev/md1 have 1 active and 1 spare
partition. The EVMS GUI shows another problem: somehow /dev/sda3 has got
corrupted, and EVMS now shows it holding both /home and SWAP.
Somehow, probably using EVMS, I need to get the arrays properly
configured, SWAP deleted, and /home uncorrupted (or both swap & /home
deleted and recreated, if need be).
Do you know much about EVMS? I've only read about it since my incident;
I installed it via apt-get and have been looking at the GUI to make
sense of it. I won't try any changes until I know more about what I'm
doing!
BTW, all my formatting is ext3, except for /boot which is ext2.
> the raid array will not come up correctly. (makes sense, it's already
> mounted the drive) If you are sure you've specified md1 at the boot
> prompt, perhaps you just need to raidhotadd the second drive (raidhotadd
> /dev/md0 /dev/sdb1). Also make sure that you have either an initrd set
> up for your raid, or a raid-enabled kernel, or you won't be able to
> mount your root and may be fairly hosed.
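(For my own notes: raidhotadd comes from the older raidtools package. If
I've read the man page correctly, the mdadm equivalent of hot-adding a
partition would be something like this -- untested on my machine so far:)

```shell
# raidtools' "raidhotadd /dev/md0 /dev/sdb1" roughly corresponds to:
mdadm /dev/md0 --add /dev/sdb1
# then check the result:
cat /proc/mdstat
```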
To give you a better picture of my setup, here's some output that I've
learned to generate: mdadm details for md0
rahdebian:/boot/grub# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Tue May 17 17:57:48 2005
Raid Level : raid1
Array Size : 96256 (94.00 MiB 98.57 MB)
Device Size : 96256 (94.00 MiB 98.57 MB)
Raid Devices : 1
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Jun 17 16:25:19 2005
State : clean
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
UUID : 14c5cc96:c83e55e8:9aa0a6f9:b0a48dff
Events : 0.405
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/.static/dev/sda1
1 8 17 - spare /dev/.static/dev/sdb1
------------------------------------------------------------------
mdadm details for md1:
rahdebian:/boot/grub# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue May 17 17:58:00 2005
Raid Level : raid1
Array Size : 9767424 (9.31 GiB 10.00 GB)
Device Size : 9767424 (9.31 GiB 10.00 GB)
Raid Devices : 1
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Fri Jun 17 16:29:02 2005
State : clean
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
UUID : c756a249:f7e3334a:dd0a538d:cfa602e8
Events : 0.1280434
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/.static/dev/sda2
1 8 18 - spare /dev/.static/dev/sdb2
------------------------------------------------------------------
Grub entry for my default boot: (seems OK)
title Debian GNU/Linux, kernel 2.6.8-11-amd64-k8-smp Default
root (hd0,0)
kernel /vmlinuz root=/dev/md1 ro console=tty0
initrd /initrd.img
savedefault
boot
-----------------------------------------------------------------
I would assume that raid is configured into the kernel, since I did
everything at install (so that I wouldn't have to figure out how to do
everything later on my own). Guess that assumption was premature!
What are the "major" and "minor" numbers that mdadm gives at the end of
its output?
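From what I've since read (so this is my understanding, not gospel):
they're the kernel device numbers. Major 8 is the sd (SCSI/SATA disk)
driver, and the minor picks the disk and partition, so major 8 minor 1
is /dev/sda1 and minor 17 is /dev/sdb1. With the old 8-bit minor layout
the packed device number for sda1 is 0x801, which you can pull apart in
the shell:

```shell
# Decode an old-style 16-bit dev_t: top byte = major, bottom byte = minor.
# 0x801 corresponds to /dev/sda1 (major 8, minor 1).
dev=0x801
echo "major=$(( dev >> 8 )) minor=$(( dev & 0xff ))"
```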
>
> also when you want to fsck a filesystem and it's telling you it's marked
> as clean, you need to pass it the -f flag to force it. Also you'll want
> to run that command on the raid device, not the individual partitions.
Thanks, I'll try that, although the instructions I've read on webpages
have been somewhat confusing to me!
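If I've understood that advice correctly, the invocation would be
something like the following (on the md device, while it's unmounted;
the -n flag makes it a read-only dry run first -- my guess at the right
command, not yet tested here):

```shell
# Force a check even though the filesystem is marked clean.
# -n = read-only dry run; drop it to actually make repairs.
fsck.ext3 -f -n /dev/md1
```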
>
> please read the software raid howto for more info. You will probably be
> interested in lvm for partitioning. I've only used ext3 with lvm, but
> i've heard good things about reiserfs.
> http://www.tldp.org/HOWTO/Software-RAID-HOWTO-11.html
I will read this & see how I get on....
Any further suggestions you feel you can make are welcome.
I'll post any concrete results or questions I have back to the list
thread.
I'm assuming I should ensure that all filesystems are clean &
uncorrupted before trying to change the arrays (i.e. before promoting my
spares to active members).
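If the filesystems do check out, my (untested) understanding from the
mdadm man page is that each array could be grown from 1 raid device to
2, so the existing spare becomes an active mirror and resyncs:

```shell
# Untested guess: tell each array it should have 2 active raid devices,
# which should turn the spare into a syncing mirror half.
mdadm --grow /dev/md0 --raid-devices=2
mdadm --grow /dev/md1 --raid-devices=2
# watch the resync progress:
cat /proc/mdstat
```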
--
Rupert Heesom rupert@heesom.org.uk