
Re: Supermicro SAS controller



On Sun, 06 May 2012 18:10:41 +0000, Camaleón wrote:

> On Sun, 06 May 2012 17:44:54 +0000, Ramon Hofer wrote:
> 
>> On Sun, 06 May 2012 15:40:50 +0000, Camaleón wrote:
> 
>>> Okay. And how much space are you planning to handle? Do you prefer a
>>> big pool to store data or do you prefer using small chunks? And what
>>> about the future? Have you thought about expanding the storage
>>> capabilities in the near future? If yes, how will it be done?
>> 
>> My initial plan was to use 16 slots as raid5 with four disks per array.
>> Then I wanted to use four slots as mythtv storage groups so those disks
>> won't be in an array.
>> But now I'm quite fascinated with the 500 GB partitions raid6. It's
>> very flexible. Maybe I'll have a harder time setting it up and won't be
>> able to use hw raid, which both you and Stan advise me to use...
> 
> It's always nice to have many options, and it's true that linux software
> raid is very popular (the usual reasons for not using it are when high
> performance is needed or when dual-booting with Windows) :-)

Yes and I need neither of those things :-)
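
To make sure I understand the 500 GB partition idea, I imagine it would
look roughly like this (an untested sketch; the /dev/sd[b-e] names and
the partition layout are just placeholders for four disks already split
into 500 GB partitions, e.g. with parted):

  # one raid6 array per "row" of 500 GB partitions, one partition per disk
  mdadm --create /dev/md0 --level=6 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  mdadm --create /dev/md1 --level=6 --raid-devices=4 \
        /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
  # the resulting md devices could then be pooled, e.g. with LVM

That way a bigger disk would simply contribute more 500 GB partitions,
if I got the concept right.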


>>>> You have drives of the same size in your raid.
>>> 
>>> Yes, that's a limitation coming from the hardware raid controller.
>> 
>> Isn't this limitation coming from the raid idea itself?
> 
> Well, no, software raid does not impose such a limit because you can
> work with partitions instead.
> 
> In hardware raid I can use, for example, a 120 GiB disk with a 200 GiB
> disk and make a RAID 1 volume, but the volume will be of just 120 GiB.
> (I lose 80 GiB of space in addition to the 50% for the RAID 1 :-/).

But you can't build a linux software raid with a 100 GB and a 200 GB disk 
and then have 150 GB?


>> You can't use disks with different sizes in a linux raid either? Only
>> if you divide them into same-sized partitions?
> 
> Yes, you can! In both hardware raid and software raid. Linux raid even
> allows using different kinds of disks (SATA+PATA), though I don't think
> it's recommended because of the bus speeds.

What I meant was: the space difference is lost either way?
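
Or, thinking about it: with md I guess the extra space doesn't have to
be lost, because the larger disk can be partitioned. A rough, untested
sketch (the device names are just placeholders for a 100 GB and a
200 GB disk):

  # mirror the first 100 GB of the big disk against the small disk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  # the remaining ~100 GB partition on the big disk stays usable on
  # its own, just without redundancy
  mkfs.ext4 /dev/sdb2

So the space would only be lost with hardware raid, if I understand it
correctly.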


>>> I never bothered about replacing the drive. I knew the drive was in
>>> good shape because otherwise the rebuilding operation couldn't have
>>> been completed.
>> 
>> So you directly let the array rebuild to see if the disk is still ok?
> 
> Exactly, rebuilding starts automatically (that's a default setting, it
> is configurable). And rebuilding always finishes with no problem, using
> the same disk that went down. In my case this happens (→ the array
> going down) because of the poor-quality hard disks, which were not
> tagged as "enterprise" nor meant to be used in RAID layouts (they were
> "plain" Seagate Barracudas). I did not build the system, so I'll have
> to take care of that next time.

I'd like to use green drives for this system, since I try to keep the 
power consumption low. And until now they have worked well (one false 
positive in two years is ok).
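
If a disk gets kicked out again and I'm on md by then, I assume I could
watch the rebuild and put the disk back with something like this
(untested; /dev/md0 and /dev/sdf1 are just placeholder names):

  cat /proc/mdstat                   # shows sync/rebuild progress
  mdadm --detail /dev/md0            # state of the array and its members
  mdadm /dev/md0 --re-add /dev/sdf1  # put a kicked-out member back in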


>>>> My 6 TB raid takes more than a day :-/
>>> 
>>> That's something to consider. A software raid will use your CPU cycles
>>> and your RAM so you have to use a quite powerful computer if you want
>>> to get smooth results. OTOH, a hardware raid controller does the RAID
>>> I/O logical operations by its own so you completely rely on the card
>>> capabilities. In both cases, the hard disk bus will be the "real"
>>> bottleneck.
>> 
>> I have an i3 in that machine and 4 GB RAM. I'll see if this is enough
>> when I have to rebuild all the arrays :-)
> 
> Mmm... I'd consider adding more RAM (at least 8 GB, though I would
> prefer 16-32 GB); you have to feed your little "big monster" :-)

That much :-O

Ok, RAM is quite cheap and it shouldn't affect power consumption much in 
comparison to >20 hard disks.
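
And regarding the rebuild time: I've read the md resync speed can be
tuned via sysctls, something like this I assume (the value is just an
example):

  # current limits, in KB/s per device
  sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
  # let a rebuild use more bandwidth while nothing else is running
  sysctl -w dev.raid.speed_limit_min=50000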


Best regards
Ramon

