
Re: Create a raid on sparc guide



On Sun, 2003-08-17 at 14:50, Rob Wultsch wrote:
>  > For learning.  My employer lost a day's mail for about 3000 people
>  > recently due to a failure of a RAID 0+mirror setup (two separate RAID 0
>  > systems, I believe each having 4-drive arrays).  A drive in the main
>  > RAID 0 array failed, and then a drive in the mirror array failed when
>  > restoring.  (The chances of this are pretty slim, but they might have
>  > been lower if there was a RAID 1+0 setup instead of two RAID 0's.)  So
>  > this pair of RAID 0's constituted UAVED, "unreliable array of very
>  > expensive disks".  Very expensive due to the loss of data and the labor
>  > to go back and restore from tape old mail stored on this server system.
>  > (It's an IMAP mail repository.)  A loud prompt for me to learn about
>  > RAIDing.
>  > -- SP
> 
> 
> I am sorry, but I have to ask, why did your employer not go with a 
> RAID-5 setup with extra disks? I am guessing that because of the number 
> of drives (4, aka 2 ide chains?) you are using IDE/ATA? So why not 
> something like 4 x 40 gig drives in RAID 5 with 1 spare drive? I am 
> talking a bit out of my ass, but curiosity compels me to ask...

I'm sure that our IT decision-makers re-examined the storage
architecture after the fluke of both parts of the mirror failing.  With
>10K users on it (multiple drive arrays probably on a cluster of CPUs)
it's a key enterprise system.


> This is my first experience with RAID and although there is some very 
> nice documentation ( 
> http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-4.html#ss4.6
> is the best I have found), for a person setting up RAID on the cheap or 
> on a SPARC it does not address all the issues.

As someone else noted, soft RAID should not be much different on SPARC
vs any other platform, given similar kernel vintages and the CPU
capacity to take advantage of RAID 0 speed or RAID 1 redundancy.


> On the Cheap: I do not have extra disks lying around to install Debian 
> on to transfer to a RAID later. So I instead installed Debian on a 
> partition that I would later use for swap. Also, as I did not have an SMP 
> or a RAID kernel available, and was not connected to a network, I had to 
> compile my own kernel with RAID *(not as a module)* and SMP.

I'd be using a non-RAID root+boot+swap drive.
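For the record, a minimal sketch of that layout (device names are purely illustrative; the array command uses mdadm syntax, though the raidtools route via /etc/raidtab and mkraid works just as well on kernels of this vintage):

```shell
# Hypothetical layout: sda is a plain non-RAID boot drive, so SILO
# and the kernel never depend on the array coming up.
#
#   sda1  / (incl. /boot)   ext2, no RAID
#   sda2  swap
#   sdb1, sdc1              type fd (Linux raid autodetect)

# Build the mirror out of the two remaining drives
# (raidtools users would write an /etc/raidtab and run
# "mkraid /dev/md0" here instead):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

mke2fs /dev/md0
mount /dev/md0 /mnt
```

That way a dead array member can never leave you unable to boot the machine to fix it.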

> My thoughts are that a short guide should speak to: 1. Types of 
> applicable Raid (linear, 0 and 1 for the SS and just about everything 
> for the ultras I think)

So you're thinking one pair of internal drives entirely?  My experience
will be complementary to yours, using a pair or perhaps a trio of
external drives.

Used Sun external boxes are cheap today.  A 911 can hold four 1.6"
drives, giving plenty of RAID flexibility.  Small SCA-to-50-pin adapters
will even fit, though I'm not sure whether four of them will fit at once.
(The 911
may not have enough cooling capacity for four fast drives anyway.)

>   2. Partitioning 3. Installing a bare-bones 
> OS and compiling a new kernel 4. Setting up the RAID and transferring /.

These topics are mostly generic for Linux soft RAID.
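Step 4 in particular is the same on any Linux box; a minimal sketch, assuming the new array is /dev/md0 and you're still running from the old root:

```shell
# Copy the live root onto the new array.  cp -ax stays on one
# filesystem, so /proc and other mounts are not dragged along:
mount /dev/md0 /mnt
cp -ax / /mnt

# Point the copied system's fstab at the array before rebooting:
#   /dev/md0  /  ext2  defaults  0 1
vi /mnt/etc/fstab
```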

I also have some 7[356]00 PowerMacs to play with, with fast-narrow SCSI
internally.  These boxes have capacity for three drives internally if
one does without a CD; possibly the drive on the CD cable would be
running at 5 MB/sec so that could be root+swap and the other two drives
could be a RAID pair.

Using SILO after copying / to a new drive is something I couldn't find a
reference for.  I got it to work reasoning from LILO and quik (old
PowerMac booter) experience.

> The version of fdisk that comes on Debian stable disk 1 has an issue. 
> If you try to put a swap as the first partition on a drive, fdisk tells 
> you that you are wasting your time because you are destroying the 
> partition table. However, if you try to make a partition of type fd 
> (Linux raid autodetect) you do not get this warning, although you are 
> still destroying the partition table. This cost me something on the 
> order of 2 hours to figure out, and I was unhappy when it was over.

Important warning.  Perhaps the first partition should start on cylinder
1 instead of 0?  Also I believe that SILO needs some room at the
beginning of the drive for its first-stage loader.  Or does that have to
be inside the first partition?
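Something like this is what I have in mind for the fdisk session (Sun disk labels; cylinder numbers are illustrative, and note the whole-disk partition 3 convention on Sun labels):

```shell
fdisk /dev/sdb
# n  -> new partition 1
#       first cylinder: 1    (not 0, leaving the label/bootblock alone)
#       last cylinder:  as needed
# t  -> change type to fd (Linux raid autodetect)
# w  -> write the label and exit
```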

> It is worth noting that the bandwidth of a SS20 is 10 MB/sec (I think), 
> so performance increases for most users will not be incredibly dramatic. 
> I set up my machine in RAID 0 because I did not have enough room on a 
> single 1 gig drive to do anything interesting, so my choices became 
> linear or RAID 0. Linear is not very sexy, and most of the issues are 
> exactly the same.

Doesn't the SS20 use wide drives internally?  Fast/wide would be 20
MB/sec.  The external interface on the motherboard is narrow.

So what software does one use to benchmark an array?

And what software would one use to test robustness of mirroring and
parity?
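Partial answers to my own questions, hedged since I haven't tried all of this: hdparm or plain dd for crude throughput numbers, bonnie++ for something more thorough, and deliberately failing a member to exercise the mirroring.

```shell
# Sequential read off the array versus a single member:
hdparm -t /dev/md0
hdparm -t /dev/sdb

# Crude write throughput onto the mounted array:
dd if=/dev/zero of=/mnt/testfile bs=1024k count=256

# Robustness: mark one mirror member faulty and watch the rebuild
# in /proc/mdstat.  (raidtools spells this raidsetfaulty/raidhotadd.)
mdadm /dev/md0 --fail /dev/sdb1
cat /proc/mdstat
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1
```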


I'm also curious about the portability of an external array.  Endianness
permitting, is an ext2 RAID box for Linux portable across platforms?

-- SP


