
Re: HP proliant ML115 G5 on debian lenny



On Fri, Apr 24, 2009 at 09:41:12AM -0500, Zhengquan Zhang wrote:
> On Fri, Apr 24, 2009 at 10:34:21AM -0400, Douglas A. Tutty wrote:
> > The only __definitive__ way to know would be to take the netinst CD to
> > the box, boot it up and check dmesg (and the installer screens) and see
> > if it sees the drives.  Note that embedded sata "raid" controllers are
> > generally fake raid.  You're better off just using them as normal disk
> > interfaces (don't try to configure as raid) and using software raid from
> > the installer CD.
> 
> This is exactly what I want to know. I 'guess' if it officially supports
> redhat and suse, debian will have no problem seeing the hardware. But I
> am not sure about the so called embedded raid, and could you explain a
> little bit why it is 'fake'?

Well, debian has different requirements regarding the licensing of
kernel modules.  Your guess may be wrong if HP has provided a
proprietary module that e.g. suse ships in its kernel but debian can't
include.  For some things (e.g. the nVidia driver), you can still get
an install done and add the module later; for the boot drive that
becomes a bit of a problem :)

So-called 'fake' raid is, as I understand it, hardware that lets you
configure the raid in the bios, but the actual raid logic runs in an
operating-system driver (usually a windows one) rather than in the
hardware.
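If you want to see for yourself what the controller really is, a few
commands from the installer's shell (or a rescue boot) will usually
tell you; the grep patterns below are just examples:

	lspci | grep -i -e raid -e sata    # what chipset the "raid" ports really are
	dmesg | grep -i -e ahci -e sata    # which kernel driver claimed them
	cat /proc/mdstat                   # any software (md) arrays already assembled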

> Also, is there any other penalty or downside for using software raid?
> As I know, for RAID 1, the performance is not affected much. 

There is very little performance difference for software raid.  Think
about two scenarios:

1.	 hardware raid

		application tells the OS to tell the logical drive
		(the device presented to the OS by the hardware raid
		card) to store some data.  The OS waits while the card
		sends the data to both disks and the disks store it.

2.	software raid
		
		application tells the OS to tell the md (device
		presented to the rest of the OS kernel by the software raid
		portion of the OS kernel) to store some data.  The OS
		waits while the md driver sends the data to both disks.


For there to be any observable performance hit, the time spent handing
the data to each disk would have to be considerable; the time the disks
themselves take to write it is the same either way.
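If you go the software route, the md device the installer sets up is
the same thing you can drive by hand with mdadm afterwards.  A minimal
raid1 example (the partition names here are only illustrative) looks
like:

	mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
	mkfs.ext3 /dev/md0
	cat /proc/mdstat          # watch the initial sync progress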
 
> And the server will merely be used for backup.
 
A few issues then.

1.	Performance may or may not be an issue, depending on how many
	other computers will be using the server for data backup at the
	same time.  

2.	With hardware raid, unless the raid card can save its
	configuration to each disk in the array, if something happens to
	the card (which could happen if a drive fails and takes down the
	controller), then the whole array could be kaput when you put in
	a new controller card.

3.	With software raid, the configuration is on the disks
	themselves.  Pop those disks into a new box and they should work
	(assuming the new box's hardware can be booted by the old box's
	initrd).  There's an mdadm example after this list.

4.	Hardware raid comes into its own with exotic raid types (e.g.
	raid50 or raid60), with hot spares, hot swap, auto rebuild, etc.

5.	There has been some talk recently here on the increased
	likelihood of raid failure after a single drive failure.
	Apparently, the time it takes to rebuild onto a replacement
	drive makes the chance of the other drive failing before the
	rebuild completes a real concern with very large drives.  In
	that case, having three active raid1 drives with a hot spare
	(4 drives total) is one way to mitigate the risk.
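To make (3) and (5) concrete: the array metadata lives in a superblock
on each member disk, and a hot spare is just an extra member.  Again,
the device names below are only examples:

	mdadm --examine /dev/sdb1    # shows the superblock: array UUID, level, member role
	mdadm --assemble --scan      # reassembles arrays on a new box from those superblocks
	mdadm --create /dev/md0 --level=1 --raid-devices=3 \
	      --spare-devices=1 /dev/sd[abcd]1   # three-way mirror plus a hot spare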

You may need to do lots of research depending on:

1.	The size of your backup set

2.	The importance of the data

3.	The number of locations of the backup data.


Good luck.

Doug.

