
Software RAID1 install notes - Debian Sarge



Hi guys,

I am following software RAID1 installation instructions posted to debian-user 
in August of last year. I have found problems with the instructions as I 
progress, and have finally gotten stuck; see "STUCK HERE" below.

Any suggestions where to find good software RAID1 install instructions?

Any idea how I can get past the "STUCK HERE" point below?

Here's a copy of Giuseppe's instructions with my install notes included in 
double parentheses.

Thanks

Roger
TEFLChina.org

**************************

Following Giuseppe's

Software-Raid1 Root in Woody
http://lists.debian.org/debian-user/2003/debian-user-200308/msg01507.html
 
To: debian-user@lists.debian.org
Subject: Software-Raid1 Root in Woody
From: giuseppe bonacci <g.bonacci@libero.it>
Date: Fri, 8 Aug 2003 18:11:07 +0200
 

((	Roger

Roger's notes appear within double parens. Thus "I" means Roger in here and 
Giuseppe elsewhere.

I, Roger, installed Woody on RAID1 following as best I could Giuseppe's 
footsteps.

Hardware (Roger's)
40G IDE drives (2) hda, hdc
CDROM drive, hdd
Floppy drive, fd0

In BIOS set as bootable, and in this order, these:
fd0
hdd
hda
hdc

I worry that having one of my 40G drives on the same controller (same ribbon 
cable) as the CDROM may slow down access to it. I will check later and report 
here: XXXXXXXXXXX

Use 80-wire, 40-pin IDE cables. The coarser 40-wire, 40-pin cables are slower.

))
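Regarding the throughput worry above: a quick way to get rough per-drive read
figures is hdparm's buffered-read timing (a sketch, assuming hdparm is
installed; drive names as in the layout above):

```shell
# Time buffered disk reads on each drive; run a few times and average.
# If hda (shared ribbon with nothing) and hdc (shared with the CDROM in
# some layouts) differ wildly, the cabling/controller is worth a look.
hdparm -t /dev/hda
hdparm -t /dev/hdc
```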


Hi all.

This document briefly describes the steps needed to install a Debian
Woody GNU/Linux system with root on a software raid1 device.
It took me a fair amount of trials and errors before I got it right, so
I would like to share my current knowledge.

<snip (disclaimer)>

Since this is a test setup, the environment is going to be small, and
supported by the debian vanilla kernel:
- Pentium III
- SCSI disk 0:0, sda, 160 Mb
- SCSI disk 0:1, sdb, 160 Mb
I used SCSI disks because I had them easily available, but the procedure
should work for IDE disks on (e.g.) hda and hdc as well.

1.  Install Debian Woody.

I started from floppies (vanilla kernel) + network, and partitioned
the disks as follows:

((	Roger

- Boot from Woody CD#1, bf24

- Partition hda and hdc
	hda1	00098 MB spare Linux partition (in case I need it)
	hda2	01998 MB Swap for swap temporarily (later md0 for swap)
	hda3	37959 MB Linux for / temporarily (later md1 for /)
	-----
	hdc1	00098 MB spare Linux partition (in case I need it)
	hdc2	01998 MB md0 for swap
	hdc3	37959 MB md1 for /
	hdc4 	01053 MB spare Linux partition (this drive is that much bigger than the 
other one, so this is spare space outside the RAID, and I might as well have 
it available for whatnot)

- mount swap and /

- Installed kernel and kernel modules

- Configure Device Driver Modules:
	:: Drivers for Network Devices > VIA Rhine Support.
   and
	:: RAID1 (I loaded all the RAID drivers because I don't know exactly what I 
need)

- Tasksel > http > "ucberkeley" > Standard UNIX Server

))

# fdisk -l /dev/sda

Disk /dev/sda: 64 heads, 32 sectors, 160 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1       160    163824   83  Linux

# fdisk -l /dev/sdb

Disk /dev/sdb: 64 heads, 32 sectors, 160 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1   *         1       160    163824   fd  Linux raid autodetect

Then I made an ext2 filesystem and installed a minimal system on /dev/sda1.
Better avoid installing further packages until the raid is set up, while
it might be a good idea to update the system with

((	Roger
I wanted to upgrade the distribution to Testing, so did,

# vi sources.list
	(edit out security sources and do :%s/stable/testing/)
# apt-get update
# apt-get dist-upgrade
))

# apt-get update
# apt-get upgrade

2.  Switch to kernel 2.4.

((	Roger
I already had kernel 2.4.18 because I booted install CD with 'bf24'.
So I followed Giuseppe's footsteps but chose a more recent kernel,

# dpkg -P lilo
# apt-cache search kernel-image
# apt-get install grub kernel-image-2.4.24-1-686
# grub-install '(hd0)'

))

I prefer to use raidtools2 with a stock 2.4 series kernel.
Moreover, despite later versions of LILO being easier to set up, I
decided to stick with GRUB.  So the next steps were:

# dpkg -P lilo
# apt-get install grub kernel-image-2.4.18-686
# grub-install '(hd0)'

Notice that for processors different from Pentium III you should pick a
flavour different from '-686'.

((	Roger
# cat > /boot/grub/menu.lst
title   Debian
    root    (hd0,2)
    kernel  /vmlinuz root=/dev/hda3 ro
    initrd  /initrd.img

IMPORTANT:

- I believe Giuseppe meant to put ">" there in the 'cat' sample line. Clearly 
it is needed.

- my 'root' is (hd0,2), not (hd0,0), because my root is on partition hda3, not 
hda1 (and Grub counts partitions starting from 0, so 3rd partition is 2).

- In the "kernel" line I put /dev/hda3, not /dev/sda1, because my root is on 
the 3rd partition, not the 1st, and my drives are IDE, not SCSI.
))

Create /boot/grub/menu.lst like this:

# cat /boot/grub/menu.lst
title   Debian
    root    (hd0,0)
    kernel  /vmlinuz root=/dev/sda1 ro
    initrd  /initrd.img

WARNING: Don't install devfsd. Device paths change and initrd-tools
get confused.

Reboot.

3.  Create and populate the Raid 1 structure in degraded mode.

Install raidtools2:

((	Roger
During the following 'apt-get install raidtools2', I chose 'no' when asked if 
I wanted raidtools2 to put RAID startup into the system initialization 
script, because I configured RAID into the kernel when I installed, so 
things should work this way, I think.
))

# modprobe -k raid1    # needed to prevent preinst from complaining
# apt-get install raidtools2
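
To confirm the raid1 module is actually loaded before going further, a
quick sanity check (shell sketch):

```shell
# raid1 should appear in the loaded-module list...
lsmod | grep raid1
# ...and the md driver should list raid1 among its personalities:
cat /proc/mdstat
```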

((	Roger
- Again, I think Giuseppe meant to 'cat > /etc/raidtab', not 
'cat /etc/raidtab'.

- Giuseppe has sdb as raid-disk 0, sda as raid-disk 1. That seems backwards to 
me, but I tried it the other way and 'mkraid' didn't like it. So I guess for 
this goofy procedure the order needs to be "good" raid-disk first, "broken" 
raid-disk second.

- I have two RAID partitions; Giuseppe has only one RAID partition. So I 
have these:
	md0 (hda2 & hdc2) for swap
	md1 (hda3 & hdc3) for /

- My /etc/raidtab I created like this:

# cat > /etc/raidtab
raiddev /dev/md0
        nr-raid-disks           2
        raid-level              1
        persistent-superblock   1
        chunk-size              4

        device          /dev/hdc2
        raid-disk       0

        device          /dev/hda2
        raid-disk       1

        failed-disk             1

raiddev /dev/md1
        nr-raid-disks           2
        raid-level              1
        persistent-superblock   1
        chunk-size              4

        device          /dev/hdc3
        raid-disk       0

        device          /dev/hda3
        raid-disk       1

        failed-disk             1
))


Create /etc/raidtab like this:

# cat /etc/raidtab
raiddev /dev/md0
        nr-raid-disks           2
        raid-level              1
        persistent-superblock   1
        chunk-size              4

        device          /dev/sdb1
        raid-disk       0

        device          /dev/sda1
        raid-disk       1

        failed-disk             1

Notice the failed-disk directive at the end of the file. It specifies
that the current root disk, /dev/sda1, is to be ignored for the moment,
and marked `failed' in the raid superblock.
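
For what it's worth, raidtools2 was later superseded by mdadm; assuming
mdadm is available, a roughly equivalent degraded-mode creation would look
like this (a sketch, not part of Giuseppe's original procedure):

```shell
# Create a two-disk raid1 with the current root deliberately absent
# ("missing"), mirroring the failed-disk trick in raidtab above.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
```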

Now make up the raid:

((	Roger
# mkraid /dev/md0
# mkraid /dev/md1
))

# mkraid /dev/md0

If this is your n-th trial with n>1, you might be forced to
use the '-f' option to mkraid. Read the warning carefully before
proceeding.

((	Roger
# less /proc/mdstat
(NOT /proc/mdstatus)
))

You can check the status by looking at /proc/mdstatus

Build an ext2 filesystem on /dev/md0 and copy the whole system on it
(that's why it's useful to keep it minimal)

((	Roger
# mke2fs -O sparse_super,filetype /dev/md1
# mount /dev/md1 /mnt             <------------------------- STUCK ON THIS----
mount: wrong fs type, bad option, bad superblock on /dev/md1,  <-------------
       or too many mounted file systems

# find / -xdev -depth | cpio -pmdu /mnt
))
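
A few things worth checking at the STUCK HERE point (hedged suggestions: a
common cause of "wrong fs type, bad superblock" on an md device is that the
array never actually started):

```shell
# Is md1 actually running? It should show as an active raid1 array here.
cat /proc/mdstat
# The kernel log usually says exactly why the mount was refused.
dmesg | tail -20
# If md1 is not listed as active, try starting it by hand from /etc/raidtab.
raidstart /dev/md1
```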

# mke2fs -O sparse_super,filetype /dev/md0
# mount /dev/md0 /mnt
# find / -xdev -depth | cpio -pmdu /mnt

Modify /mnt/etc/fstab and /mnt/boot/grub/menu.lst to mention md0 or sdb in
place of sda1:

# diff /etc/fstab /mnt/etc/fstab
4c4
< /dev/sda1     /               ext2    errors=remount-ro       0 1
---
> /dev/md0      /               ext2    errors=remount-ro       0 1

# diff /boot/grub/menu.lst /mnt/boot/grub/menu.lst
2,3c2,3
<       root    (hd0,0)
<       kernel  /vmlinuz root=/dev/sda1 ro
---
>       root    (hd1,0)
>       kernel  /vmlinuz root=/dev/md0 ro

Now build an initrd for the new setup:

# mkinitrd -o /mnt/boot/initrd.img-2.4.18-"your flavour" -k -r /dev/md0

The option '-k' instructs mkinitrd to keep the expanded image under
/tmp/mkinitrd.XXX/initrd instead of deleting it.
Notice that /mnt/initrd.img -> /boot/initrd.img-YYYY, so don't try to be
too smart.

4.  Reboot with root on /dev/md0 and synchronize the disks.

Reboot.  So far, the system is still configured to boot from the first
disk and leave the second alone.
Now we have to manually interrupt grub's boot sequence in order to
launch the second disk instead of the first:
Ask grub for a command line (press 'c') and at the 'grub>' prompt
issue the command:

grub> configfile (hd1,0)/boot/grub/menu.lst

You should get the same menu item as before, but meaning a different sequence
(use 'e' to examine it, then ESC to return to the menu).  Press RETURN
to boot.

You should end up with linux running and /dev/md0 mounted as /.

Now it's time to change the partition type of the old root disk from 83 to 
FD:

# fdisk -l /dev/sda

Disk /dev/sda: 64 heads, 32 sectors, 160 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1       160    163824   fd  Linux raid autodetect
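
If the partition type still reads 83, the interactive fdisk session to change
it looks roughly like this (a sketch; on Roger's layout the partition numbers
would be 2 and 3 for hda2/hda3 instead of 1):

```shell
fdisk /dev/sda
# then at the fdisk prompt:
#   t    (change a partition's system id; give the partition number if asked)
#   fd   (Linux raid autodetect)
#   w    (write the table and exit)
```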

Now modify /etc/raidtab to remove the last line ('failed-disk...'), and
attach the old root partition as a plex of the raid1 structure:

# raidhotadd /dev/md0 /dev/sda1

The md driver starts synchronization, which can be checked by looking at
/proc/mdstat.
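
To follow the resynchronization progress as it runs, something like:

```shell
# Re-read the md status every couple of seconds until the resync completes.
watch cat /proc/mdstat
```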

Rebuild the initial ramdisk with a clean raidtab (paranoid mode on):

# mkinitrd -k -o /initrd.img

Setup grub on both disks:

# (echo 'root (hd1,0)'; echo 'setup (hd1)'; echo quit) | grub
# (echo 'root (hd0,0)'; echo 'setup (hd0)'; echo quit) | grub

Done.

Now you can reboot (just to test the setup works), play with dselect
to your pleasure, etc.

Again: if you happen to install devfsd you might end up with an unbootable
system, and no way to recover.

As an exercise for the reader, try disabling one of the disks and test
your ability to recover... If you find a way out of the Kernel panic,
I'll be glad to know.  ;-)

best regards

-- gb


 
*************************



