
Re: install Etch with raid level 10



A J Stiles <deb64@earthshod.co.uk> writes:

> On Wednesday 26 Sep 2007, C M Reinehr wrote:
>> The main thing is that, strictly speaking, GRUB does not support RAID, so
>> you need at least one non-RAID partition as a /boot partition. I say
>> strictly speaking, because you can set up the /boot partition as RAID-1
>> which will be invisible to GRUB bootloader in the MBR.
>
> *No* bootloader supports any kind of RAID.  All any bootloader knows how to do 
> is read in the kernel and the initial ramdisk image from contiguous sectors, 
> and start up the kernel.

Except grub for raid1 (as raid1 is transparent to the bootloader) and
lilo for raid0 and raid1.

> However, *if* you ensure that your whole /boot partition fits right within one 
> half-stripe of the RAID0 layer  (i.e. all on the same disk)  then this won't 
> matter  (except you'll have some wasted space on the other disk).  You can 
> easily copy the /boot partition manually to the other device in the RAID1 
> layer, using dd.  The motherboard will always boot from the same drive; then 
> once the kernel is running, the RAID array will be recognised as such and 
> maintained in sync with itself.

Usually you can enter a boot menu and select a different disk if you
wish, or reconfigure the boot order. Also, sometimes, just sometimes, a
disk fails so completely that the BIOS will only see the second one and
boot from there.

All it takes is some care when installing grub into the two MBRs so
that either disk can boot a raid1 /boot, and everything will work.
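
For example, with grub legacy as shipped with Etch, and assuming the two
disks are /dev/sda and /dev/sdb with /boot on the first partition of each
(adjust the names to your layout), something along these lines writes a
working MBR to both drives from the grub shell:

  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit

The second "device" line remaps (hd0) to the second disk before running
setup again, so the MBR written there looks for /boot on whichever disk
the BIOS actually boots, even after the first drive has died and the
second has become the only disk in the box.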

> Also, *don't* use a RAID1 for swap space:  it impacts performance with little 
> practical benefit.  Use separate swap partitions from each drive  (just to 
> keep the partitioning schemes the same)  instead.

Another total untruth. If you don't raid1 your swap, then the moment a
disk holding swap dies the kernel will kill applications with segfaults
whenever they need to page something back in. That might not bite while
the system is running (unless you need to buy more ram anyway), but it
will almost certainly prevent applications from shutting down
cleanly. tmpfs is also a good candidate for some minor swapping.
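
A minimal sketch of a mirrored swap, assuming /dev/sda3 and /dev/sdb3
are the two spare partitions (the device names are only examples):

  # create the mirror, then put swap on it and enable it
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  mkswap /dev/md2
  swapon /dev/md2

  # and in /etc/fstab:
  /dev/md2   none   swap   sw   0   0

That way a failed disk does not take any swapped-out pages with it.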

It is true that raid1 has to write all data twice and that can be a
bottleneck (for example on the slow old PCI bus), but on the other hand
random reads from raid1 can use both disks and improve access times.
When swapping, those two effects more or less balance out. But
seriously, who cares? The moment you really do need swap, all speed goes
out of the window and whatever swaps will be dead slow anyway.

If you are concerned with swap speed then you need to buy more ram.

MfG
        Goswin


