
Re: SOLVED: Software-RAID1 on sarge (AMD64)



Goswin von Brederlow wrote:
Kilian <kil@gnu.ch> writes:

In the last few days, I was struggling to convert a remote machine
with two identical SATA disks (sda and sdb) to a software RAID 1.
The boot part in particular was tricky, as I had no console access
to the machine. The whole procedure was done remotely via SSH. I use
the md tools (mdadm) and LILO as the bootloader. I chose LILO because
IMHO it's more straightforward in this setup than GRUB, and I have no
other operating systems I would want to boot.

The system was installed on the first disk, the second one has not
been used before. Those are the steps I went through:


1.  Install a Software-RAID capable kernel and boot the system with it;
     Install the md tools: 'apt-get install mdadm';

Meaning any Debian kernel. :)

True, mine had it as a module though, which meant an initrd; and since I was working remotely, I didn't want to bring in another pitfall, which meant compiling the kernel with RAID support built into it.
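
For anyone checking up front whether the running kernel already has
RAID1 support (built in or as a module), something like this works
before touching the disks; these are generic commands, not part of
the procedure above:

     $ cat /proc/mdstat                       # readable once the md driver is available
     $ modprobe raid1                         # load the module if RAID1 isn't built in
     $ grep -i raid /boot/config-$(uname -r)  # inspect the Debian kernel config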

2.  partition the second harddrive (sdb). I created two partitions, a
     large one at the beginning of the disk (sdb1) and a small
     swap-partition at the end (sdb2). I do not use separate /boot
     partitions.

NOTE: disk speed differs by around a factor of 2 between the start and
the end of the disk. Which end is the faster one can depend on the disk,
but usually it's the start. Better to put swap there.

I didn't know that, thanks for the hint!
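
By the way, the partitioning can also be scripted with sfdisk instead
of doing it interactively with fdisk. A sketch with a placeholder size
(in megabytes, via -uM), not the layout actually used; both partitions
get type 0xfd right away:

     $ printf ',75000,fd\n,,fd\n' | sfdisk -uM /dev/sdb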

     NOTE: I do not use two swap spaces on the two disks; instead, I
     create a RAID array consisting of the two smaller partitions on the
     two discs and create the swap space on it. In case of a disk
     failure, I don't need to reboot the system because the swap space
     is also on RAID. Otherwise, a disk failure would toast one swap
space, probably leaving the system in an unusable state until
     rebooted.

It would cause processes to segfault all over and take down the system.

I knew there was a reason ;-)
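
For illustration, the swap array can be created in degraded mode right
away, with the missing half (sda2) added once the original disk is
migrated later. /dev/md1 and the exact flags below are an example, not
quoted from the original steps:

     $ mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
     $ mkswap /dev/md1
     $ swapon /dev/md1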

     Important: both partitions need to be of the type 0xFD "Linux raid
     autodetect"

Actually not. mdadm can work just as well without it. It doesn't hurt, though.

Didn't know that either, thanks.
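
If a partition was created without that type, it can still be changed
afterwards without opening fdisk (where it would be the 't' command);
a sketch assuming the classic sfdisk options:

     $ sfdisk --change-id /dev/sdb 1 fd
     $ sfdisk --change-id /dev/sdb 2 fd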

[...]
     I use XFS as the filesystem because it has nice features such as
     online resizing, etc., and is, IMHO, very stable and mature. Of
     course you can use whatever you like.

As does ext3, even more so.

Let's not start a filesystem flamewar, you'd probably win ;-)
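
For completeness, since the array and filesystem steps are elided
above: creating the degraded data array and putting XFS on it goes
roughly like this (a sketch; the exact invocation in the original
steps may have differed):

     $ mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
     $ mkfs.xfs /dev/md0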

5.  Copy the existing Debian system to the new RAID

     $ mkdir -p /mnt/newroot
     $ mount /dev/md0 /mnt/newroot
     $ cd /
     $ find . -xdev | cpio -pm /mnt/newroot

Fun, fun. A copy of /proc. That's a few Gig wasted depending on the
size of /proc/kcore.

As pointed out by Michael Schmidt, -xdev takes care of that. Of course if there are several filesystems on the original disk, you'd have to copy each separately.
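
To illustrate that last point: a separate filesystem, say /home on its
own (hypothetical) array /dev/md2, would be copied with the same
pattern, one mount point at a time:

     $ mkdir -p /mnt/newroot/home
     $ mount /dev/md2 /mnt/newroot/home
     $ cd /home
     $ find . -xdev | cpio -pm /mnt/newroot/home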

Thanks for your suggestions!

	-- Kilian


