Re: moving '/' to a new disk
Easy. First mount /dev/md0 somewhere temporary like /mnt, then do
root# tar cplf - -C / . | tar xvpf - -C /mnt
the "l" option makes sure tar archives only the local root filesystem and
not any mounted filesystems. This takes care of /proc as well.
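If you want to convince yourself the pipe works before running it on /, you can rehearse it on a scratch directory first (the paths here are just examples; the "l" option only matters on the real /, where other filesystems are mounted underneath):

```shell
# Rehearse the tar pipe on a scratch tree to see what "p" preserves.
mkdir -p /tmp/src/tmp /tmp/dst
chmod 1777 /tmp/src/tmp              # sticky, world-writable, like the real /tmp
echo hello > /tmp/src/file
tar cpf - -C /tmp/src . | tar xpf - -C /tmp/dst
ls -ld /tmp/dst/tmp                  # permissions should come out drwxrwxrwt
cat /tmp/dst/file
```

If the permissions on the copied tmp directory come out as anything other than 1777, you forgot the "p" option.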
Afterwards, check that the permissions on /tmp are still 1777; if they are
wrong, the man pages will break, among other things. I seem to recall they
came out changed once when I did this, though it might have been me
forgetting to include the "p" option to tar.
You will also need to change fstab and lilo.conf to reflect the changes,
but since you're not asking, I suspect you already know about this.
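For completeness, the edits amount to something like this (I've taken /dev/md0 from your target layout; the filesystem type and mount options are assumptions):

```
# /etc/fstab: root moves from /dev/sdc1 to the array
/dev/md0   /   ext2   defaults,errors=remount-ro   0   1

# /etc/lilo.conf: point root= at the array
root=/dev/md0
```

Remember to re-run lilo after editing lilo.conf, or the change won't take effect.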
Additionally, for reference:
If you want your system to survive a disk crash, you will want swap on
RAID1 as well. Make two equal-sized partitions on your two RAID disks,
build a RAID1 out of them, make a swap area on the resulting md device,
and put it in the fstab.
root# mkswap -v1 /dev/mdX
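The matching fstab entry would be along these lines (the md device number is just an example):

```
# /etc/fstab: swap on a RAID1 array (device name illustrative)
/dev/md3   none   swap   sw   0   0
```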
With /boot on one disk and everything else on RAID1, your system will
survive a disk crash, but if the disk that dies is the one with /boot on
it, you won't be able to reboot. It's not too hard to put /boot on a
RAID1 as well.
The same goes for the bootloader. If the disk that dies is the one with
the bootloader in its MBR, you won't be able to reboot. With /boot in a
RAID1, you can install the bootloader on the MBR of both /dev/sda and
/dev/sdb so you can boot from either disk, selecting the boot device from
your SCSI BIOS. This makes sure your system remains functional no matter
which disk dies. You can't do this unless /boot is in a RAID1.
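With lilo this can be done by installing the boot record once per disk, along these lines (a sketch; check what your lilo version supports -- the "-b" switch overrides the boot= line in lilo.conf):

```
root# lilo                 # writes the boot record to the boot= device
root# lilo -b /dev/sdb     # writes the same boot map to the other disk's MBR
```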
I also notice you seem to be using a "sacrificial disk", /dev/sdc, which
won't be part of your array in the end. This is not necessary. With the
new raidtools, you can include partitions on the disk carrying your
initial non-RAID system in any RAID array, even if those partitions don't
exist yet, i.e. even if you will be re-partitioning that disk before
adding it to the RAID. You include these partitions in the arrays as
failed-disk entries to start with. Then, once you have made sure
everything is OK and you can boot into the RAID system, you can, if
necessary, re-partition the disk with the non-RAID system, change the
raidtab to show the failed-disk partitions as raid-disk, and raidhotadd
them to the arrays they belong to.
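As a sketch, an initial raidtab stanza for / built this way would start the array degraded, with the partition that still carries the non-RAID system listed as failed-disk (device names here are illustrative):

```
# raidtab: /dev/md0 starts out degraded; /dev/sdb3 still carries
# the non-RAID system, so it is listed as failed-disk for now
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              16
        device                  /dev/sda3
        raid-disk               0
        device                  /dev/sdb3
        failed-disk             1
```

Once you're booted off the array, change failed-disk to raid-disk in the raidtab and raidhotadd /dev/sdb3 to /dev/md0 so the mirror can sync.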
Since implementing all the above on the servers at work, my life has been
a lot easier.
Best regards,
George Karaolides
8, Costakis Pantelides St., Strovolos, Nicosia CY 2057, Republic of Cyprus
tel: +35 79 68 08 86
email: george@karaolides.com
web: www.karaolides.com
On Mon, 10 Sep 2001, tim wrote:
> I currently Have:
> Filesystem 1k-blocks Used Available Use% Mounted on
> /dev/sdc1 2064144 16892 1942372 1% /
> /dev/sda1 15522 2943 11778 20% /boot
> /dev/md1 4001600 965072 3036528 25% /usr
> /dev/md2 3356180 325708 3030472 10% /var
>
> I want:
> Filesystem 1k-blocks Used Available Use% Mounted on
> /dev/md0 2064144 16892 1942372 1% /
> /dev/sda1 15522 2943 11778 20% /boot
> /dev/md1 4001600 965072 3036528 25% /usr
> /dev/md2 3356180 325708 3030472 10% /var
>
> where raidtab will be:
> #/
> raiddev /dev/md0
> raid-level 1
> nr-raid-disks 2
> persistent-superblock 1
> chunk-size 16
> device /dev/sda3
> raid-disk 0
> device /dev/sdb3
> raid-disk 1
>
> #/usr
> raiddev /dev/md1
> raid-level 0
> nr-raid-disks 2
> persistent-superblock 1
> chunk-size 16
> device /dev/sda5
> raid-disk 0
> device /dev/sdb5
> raid-disk 1
>
> #/var
> raiddev /dev/md2
> raid-level 0
> nr-raid-disks 2
> persistent-superblock 1
> chunk-size 16
> device /dev/sda6
> raid-disk 0
> device /dev/sdb6
> raid-disk 1
>
>
>