
Problem with RAID1 on kernel 2.4

Hi All

I have just spent many hours trying to set up RAID1 on a machine with 
an HPT366/HPT370 IDE chipset.

The machine has three IDE hard drives configured as RAID1 plus one 
hot spare, and a CD-ROM; each device has its own IDE interface.

The chipset has 4 IDE ports and is supported in kernel 2.4.  The 
chipset has RAID "features", but as I understand it these are 
implemented via a software disk driver, typically on Windows.  
There are patches for kernel 2.2 and some odd drivers on the 
manufacturer's web site which I think do the same under Linux.

However, kernel 2.4 has native support for the chipset, and the other 
development seems to have stopped.  With 2.4 running I was 
presented with /dev/hda, /dev/hdc, /dev/hde and /dev/hdg for the drives.  
I set up the Linux software RAID (raid1) driver for RAID support.

I installed a standard Debian 2.4.17 kernel and just enough 
packages out of woody to get it going.  The rest is potato.  After a 
long night I think I have got it all going.  However, there are some 
areas that I am still not sure of:

1)  The initrd is massive, about 3 MB.  I hope that means I will always
    have all the modules I will ever need at boot time, and I assume
    the RAM is freed up by the time the system is running.  I
    increased the size of my boot partition to 15 MB, but otherwise
    this is not really a problem. 

    Notwithstanding the above, I put a long list of modules in both
    /etc/modules and /etc/mkinitrd/modules (IDE stuff, md, raid1,
    ext2, ext3, etc.).  I am not sure how much of this was necessary. 
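
    In case it helps anyone, the sort of list I ended up with looked
    roughly like this (a sketch from memory; the exact module names,
    especially for the chipset driver, depend on how the Debian
    kernel was built, so check /lib/modules/<version> first):

```
# /etc/mkinitrd/modules (similar list in /etc/modules)
ide-mod         # IDE core
ide-probe-mod   # IDE probing
ide-disk        # IDE disk support
md              # software RAID core
raid1           # RAID1 personality
ext2            # filesystems
ext3
```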

2)  Then I had endless problems with raid1.  It seems that the
    "failed-disk" directive in /etc/raidtab does not work.  I think
    it has something to do with devfs - which is compiled into the
    standard "woody" 2.4 kernel. 

    /proc/mdstat shows the drives with their devfs names, not the old
    /dev/hd.. names.  

    All the other directives seemed to work with the standard
    /dev/hd.. names, and I could build the array, but if I did a
    raidstop followed by a raidstart, it would not start again.
    Instead it gave me an error relating to the partition listed as
    "failed-disk".  The only way to get it running again was with
    mkraid --really-force. 

    I tried installing Debian's devfsd package, but it did not solve
    the problem.  Maybe there is some clever customization required
    to make it work. 

    Putting the full devfs names into /etc/raidtab did not work. 
    Maybe I did not have everything set up correctly, or I got the
    names wrong.  I could not find any devfs devices under /dev.

    After lots of manipulation I managed to build a working system
    from a single disk to raid1 on all partitions, without relying
    on failed-disk, and it all seems to be working now. 
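
For context, the "failed-disk" setup I was attempting follows the
usual single-disk-to-RAID1 migration recipe, roughly like this
(device names here are illustrative; this is the directive that
would not survive a raidstop/raidstart for me):

```
# /etc/raidtab -- RAID1, with the partition still carrying the
# live system marked failed so mkraid leaves it alone initially
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        0
    persistent-superblock 1
    chunk-size            4
    device                /dev/hdc1
    raid-disk             0
    device                /dev/hda1
    failed-disk           1
```

Once the system has been copied onto /dev/md0 and boots from it,
/dev/hda1 is supposed to be hot-added back into the array.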

I am not sure how much of this is related to the chipset, or whether it 
is a known issue with kernel 2.4.  In hindsight, I should have compiled 
a new kernel without initrd or devfs, with all the RAID and IDE drivers 
built in.  I actually tried this, but after two or three compilations 
without getting a kernel with the right configuration, I decided doing 
it the other way might be faster.
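
For anyone attempting the compiled-in route, the 2.4 config options
involved should be roughly these (from memory, so verify against your
kernel's Documentation/Configure.help before relying on them):

```
# Built-in IDE with the HighPoint chipset driver
CONFIG_BLK_DEV_IDE=y
CONFIG_BLK_DEV_IDEDISK=y
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_BLK_DEV_HPT366=y
# Built-in software RAID with autodetection
CONFIG_BLK_DEV_MD=y
CONFIG_AUTODETECT_RAID=y
CONFIG_MD_RAID1=y
```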

Has anybody else been down this road yet?


Ian Forbes ZSD
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
