On 24/07/12 09:50 PM, Bob wrote:
> Hi, I'm trying to upgrade my personal web server. I have a 4-port
> SATA2 PCI card with the 4 hard drives connected. I'm putting a 1GB
> swap partition at the front of each of the 4 500GB drives, and the
> rest of each drive is / in an mdadm software RAID5 configuration.
> I know you can't boot a RAID5 system directly, so I've tried having
> /boot on a 4GB CF card connected to the on-board IDE bus, and also
> on 2 CF cards in a RAID1 configuration.
Actually, you can boot directly into a RAID5 array using Debian/Wheezy.
However, depending on your disk drives, it may not happen automatically.
The problem I've encountered is that the Debian installer doesn't
install GRUB on all the disks in the array, so it may not end up on
the one your machine actually tries to boot from.
The solution is simple: boot into a rescue console and install GRUB on
all the disks.
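From the rescue shell (as root), that boils down to something like the
following. The device names /dev/sd[a-d] are assumptions for your
4-drive setup; check which disks are actually array members with
`cat /proc/mdstat` or `mdadm --detail /dev/md0` first.

```shell
# Install GRUB to the MBR of every member disk, so the BIOS can
# boot from whichever one it picks.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$disk" || echo "grub-install failed on $disk" >&2
done
update-grub   # regenerate /boot/grub/grub.cfg
```

With GRUB on all four MBRs, the box will still come up even if the
"first" disk dies or the BIOS reorders the drives.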
I've found this to be preferable to the older standby of creating a
RAID1 /boot partition. With kernels getting larger, you need ever
larger /boot partitions to hold more than a couple of them. That ends
up causing problems fairly quickly if people aren't used to removing
old kernels.
So my advice is to just boot directly into RAID5 using a single /
partition (although I would advise having a separate /home partition).
Simply upgrade to Wheezy, which is becoming pretty stable now, install
GRUB on all 4 drives, and enjoy.
NOTE: with Wheezy there is no problem booting into a partitioned RAID5
array either. It works. I know because I have a machine that does it. So
have /dev/md0p1 as / and /dev/md0p2 as /home. That way there is less
chance of a file system corruption wiping out everything.
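A rough sketch of how that partitioned-array layout could be built,
assuming the swap-first layout described above puts the RAID member on
the second partition of each drive (sda2 etc. are assumptions; adjust
to your actual partitioning, and note these commands are destructive):

```shell
# One RAID5 array across the four data partitions...
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# ...then partition the array itself instead of layering LVM on top.
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary ext4 0% 80%     # becomes /dev/md0p1 (/)
parted /dev/md0 mkpart primary ext4 80% 100%   # becomes /dev/md0p2 (/home)

mkfs.ext4 /dev/md0p1
mkfs.ext4 /dev/md0p2
```

The 80/20 split is only an example; size / and /home however suits
your data.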