
Re: Supermicro SAS controller



On Sun, 06 May 2012 14:52:19 +0000, Ramon Hofer wrote:

> On Sun, 06 May 2012 13:47:59 +0000, Camaleón wrote:

>>> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?
>> 
>> Then you can pair/mix the partitions as you prefer (when using mdadm/
>> linux raid, I mean). The "layout" (number of disks) and the "raid
>> level" is up to you, I don't know what's your main goal.
> 
> The machine should be a NAS to store backups and serve multimedia
> content.

Okay. And how much space are you planning to handle? Do you prefer a big 
pool to store data or do you prefer using smaller chunks? And what about 
the future? Have you thought about expanding the storage capabilities in 
the near future? If so, how will it be done?
 
> My point was that it doesn't make sense to me to put several partitions
> from the same hdd to the same raid.

Of course, you have to pair partitions coming from different hard disks 
within each raid level.

> Let's assume one of the 1.5 TB disks fails. Then I'd loose three
> partitions but the raid6 only able to stand two (I hope it's
> understandable what I mean).

If one of the disks used in a raid 6 array fails, you won't lose 
anything, that's what a raid layout is for. To start worrying, 3 
different disks which are part of the same raid 6 array must fail.

> So maybe I would have to make 500 GB partitions on each disk and put one
> partition per disk into a separate raid6s. E.g.:
> md1: sda1, sdb1, sdc1, sdd1, sde1, sdf1
> md2: sda2, sdb2, sdc2, sdd2, sde2, sdf2
> md3: sda3, sdb3, sdc3, sdd3, sde3, sdf3
> md4: sde4, sdf4

Yes, that's a possible setup. md1, md2 and md3 will each be a raid 6 
volume built from 6x 500 GB partitions: 3 TB raw minus 2x 500 GB for the 
double parity, i.e. 2 TB of available space per volume.
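For instance, md1 could be created with something like this (just a 
sketch, adjust the device names to your real partitions):

  # 6-device raid 6 made of the first 500 GB partition of each disk
  mdadm --create /dev/md1 --level=6 --raid-devices=6 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
  # the initial sync can be watched here
  cat /proc/mdstat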
 
(...)

> Since I already start with 2x 500 GB disks for the OS, 4x 1.5 TB and 4x
> 2 TB I think this could be a good solution. I probably will add 3 TB
> disks if I will need more space or one disk fails: creating md5 and md6
> :-)
> 
> Or is there something I'm missing?

I can't really tell, my head is baffled by all those parities, 
partitions and raid volumes 8-)

What you can do, should you finally decide to go for a linux raid, is 
create a virtual machine to simulate what will be your NAS environment 
and start testing the raid layout from there. This way, any error can be 
easily reverted with no annoying side-effects :-)
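Inside the VM you don't even need real disks; a rough sketch with sparse 
files and loop devices (file names and loop numbers are just examples, 
the loop devices must be free):

  # create six small fake "disks"
  for i in 1 2 3 4 5 6; do truncate -s 500M disk$i.img; done
  # attach them to loop devices
  for i in 1 2 3 4 5 6; do losetup /dev/loop$i disk$i.img; done
  # build a throw-away raid 6 on top of them
  mdadm --create /dev/md10 --level=6 --raid-devices=6 /dev/loop[1-6]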

> And when I put all these array into a single LVM and one array goes down
> I will loose *all* the data?

You only lose data if 3 disks of the same array die at the same time. 
Period. LVM will just let you combine the space of the arrays into 
bigger, more flexible volumes.
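Roughly, the idea would be something like this (the volume group and 
logical volume names are only placeholders):

  # turn each md array into a physical volume
  pvcreate /dev/md1 /dev/md2 /dev/md3
  # group them into one volume group
  vgcreate nas_vg /dev/md1 /dev/md2 /dev/md3
  # carve out one logical volume spanning the whole group
  lvcreate -l 100%FREE -n storage nas_vg
  mkfs.jfs /dev/nas_vg/storage    # or whatever filesystem you prefer
  # later, a new array can be merged in with: vgextend nas_vg /dev/md5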

> But this won't like to happen because of the double parity, right?

Exactly.

>> What I usually do is having a RAID 1 level for holding the operating
>> system installation and RAID 5 level (my raid controller does not
>> support raid 6) for holding data. But my numbers are very conservative
>> (this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and x4
>> SATA disks of 400 GiB. which gives a 1.2 TiB volume).
> 
> You have drives of the same size in your raid.

Yes, that's a limitation coming from the hardware raid controller.

> Btw: Have you ever had to replace a disk. 

Never. In ~7 years. All were false positives.

> When I had the false positive I wanted to replace a Samsung disk with
> one of the same Samsung types and it had some sectors less. I used JFS
> for the md so was very happy that I could use the original drive and
> not have to magically scale the JFS down :-)

I never bothered to replace the drive. I knew the drive was in good 
shape because otherwise the rebuilding operation couldn't have been 
done.
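Should it ever happen with a linux software raid like yours, the usual 
dance with mdadm is roughly this (sdX1 is just a stand-in for the failed 
member):

  # mark the member as failed and pull it from the array
  mdadm /dev/md1 --fail /dev/sdX1 --remove /dev/sdX1
  # after swapping and repartitioning the physical disk, add it back
  mdadm /dev/md1 --add /dev/sdX1
  # the rebuild progress shows up in /proc/mdstat
  cat /proc/mdstat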
 
>> Yet, despite the ridiculous size of the RAID 5 volume, when the array
>> goes down it takes up to *half of a business day* to rebuild, that's
>> why I wanted to note that managing big raid volumes can make things
>> worse :-/
> 
> My 6 TB raid takes more than a day :-/

That's something to consider. A software raid will use your CPU cycles 
and your RAM, so you need quite a powerful computer if you want to get 
smooth results. OTOH, a hardware raid controller does the RAID I/O 
logical operations on its own, so you completely rely on the card's 
capabilities. In both cases, the hard disk bus will be the "real" 
bottleneck.
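With linux software raid you can at least watch and throttle the rebuild 
(the numbers below are only an example, in KiB/s per device):

  # current resync/rebuild progress and speed
  cat /proc/mdstat
  # raise or lower the rebuild speed limits
  echo 50000  > /proc/sys/dev/raid/speed_limit_min
  echo 200000 > /proc/sys/dev/raid/speed_limit_max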

Greetings,

-- 
Camaleón

