Re: Supermicro SAS controller
On Sun, 06 May 2012 19:27:59 +0000, Ramon Hofer wrote:
> On Sun, 06 May 2012 18:10:41 +0000, Camaleón wrote:
>>>>> You have drives of the same size in your raid.
>>>>
>>>> Yes, that's a limitation coming from the hardware raid controller.
>>>
>>> Isn't this limitation coming from the raid idea itself?
>>
>> Well, no, software raid does not impose such a limit because you can
>> work with partitions instead.
>>
>> In hardware raid I can use, for example, a 120 GiB disk with a 200 GiB
>> disk and make a RAID 1 level, but the volume will be of just 120 GiB (I
>> lose 80 GiB of space in addition to the 50% overhead of RAID 1 :-/).
>
> But you can't build a linux software raid with a 100 GB and a 200 GB
> disk and then have 150 GB?
Of course. But you can still use the remaining (non-raided) space for
another non-vital purpose (a small secondary backup/data partition, a boot
partition, swap...). Although this is not recommended, it can be useful in
some scenarios.
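To illustrate the above, here is a rough sketch of how this works with
Linux software raid (mdadm). Device names and sizes are made up for the
example, not taken from any real system:

```shell
# Hypothetical example: /dev/sdb is 120 GiB, /dev/sdc is 200 GiB.
# Create an equal-sized 120 GiB partition on each disk:
parted /dev/sdb --script mklabel gpt mkpart raid1 1MiB 120GiB
parted /dev/sdc --script mklabel gpt mkpart raid1 1MiB 120GiB

# Mirror the two equal-sized partitions into a RAID 1 array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# The remaining ~80 GiB on the bigger disk can become a normal,
# non-raided partition (e.g. for scratch space or swap):
parted /dev/sdc --script mkpart extra 120GiB 100%
```

This is the "redistribute the disk" idea: the mirrored volume is still
only 120 GiB, but the leftover space is not wasted.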
>>> You can't use disks with different sizes in a linux raid neither? Only
>>> if you divide them into same sized partitions?
>>
>> Yes, you can! In both hardware raid and software raid. Linux raid even
>> allows you to mix different disk types (SATA+PATA), though I don't
>> think it's recommended because of the differing bus speeds.
>
> What I meant was: the space difference is lost either way?
For the raided space, yes, but you can still "redistribute" the disk
space better.
>>> So you directly let the array rebuild to see if the disk is still ok?
>>
>> Exactly, rebuilding starts automatically (that's a default setting; it
>> is configurable). And rebuilding always ends with no problem, using the
>> same disk that went down. In my case this happens (→ the array going
>> down) because of the poor-quality hard disks, which were not tagged as
>> "enterprise" nor rated for RAID layouts (they were "plain" Seagate
>> Barracudas). I did not build the system myself, so that's something
>> I'll have to watch out for next time.
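As a side note, the rebuild can be watched and triggered from the shell;
a minimal sketch (the array name /dev/md0 and partition /dev/sdb1 are
just example names):

```shell
# Overall state of all md arrays, including a rebuild/resync
# progress bar while one is running:
cat /proc/mdstat

# Detailed status of one array (state, failed/spare devices,
# rebuild percentage):
mdadm --detail /dev/md0

# Re-add a disk that was kicked out; this starts the rebuild:
mdadm /dev/md0 --re-add /dev/sdb1
```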
>
> I'd like to use green drives for this system, since power consumption
> is something I try to keep low. And until now they have worked well
> (one false positive in two years is ok).
Remember that a raided system is more demanding than a non-raided one. If
one of those "green" disks that is part of a raid array is put into stand-
by/sleep mode and does not respond as quickly as mdadm expects, the raid
manager can think the disk is lost/missing and will mark it as "failed"
(or will give I/O errors...), forcing a rebuild, etc... :-/
Those "green" disks can be good as stand-alone devices for user backup/
archiving, but not for 24/7 operation, a NAS, or anything that requires
quick access and fast speeds, such as a raid.
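One assumption worth adding here: desktop "green" drives usually lack a
short error-recovery timeout (TLER/ERC), so on a bad sector they can stall
far longer than mdadm is willing to wait. On drives that support SCT ERC
you can inspect and shorten that timeout with smartctl (the device name
/dev/sda is just an example):

```shell
# Show the drive's current SCT error recovery control (ERC) timeouts:
smartctl -l scterc /dev/sda

# Set read/write recovery timeouts to 7 seconds (the values are in
# tenths of a second), so the drive gives up before mdadm does:
smartctl -l scterc,70,70 /dev/sda
```

Drives that report "SCT Error Recovery Control command not supported" are
the ones most likely to cause the false "failed disk" events described
above.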
>>> I have an i3 in that machine and 4 GB RAM. I'll see if this is enough
>>> when I have to rebuild all the arrays :-)
>>
>> Mmm... I'd consider adding more RAM (at least 8 GB, though I would
>> prefer 16-32 GB); you have to feed your little "big monster" :-)
>
> That much :-O
With RAM, you can never have enough :-)
> Ok, RAM is quite cheap and it shouldn't affect power consumption much
> in comparison to >20 hard disks.
Exactly, your system will be happier and you won't have to worry about
increasing it in the near future (~5 years). My motto is "always fill your
system with the maximum amount of RAM you can afford"; you won't regret
it.
Greetings,
--
Camaleón