
Re: Supermicro SAS controller



On Sun, 06 May 2012 13:47:59 +0000, Camaleón wrote:

> On Sun, 06 May 2012 12:35:40 +0000, Ramon Hofer wrote:
> 
>> On Sun, 06 May 2012 12:18:33 +0000, Camaleón wrote:
> 
>>> If your hard disk capacity is ~1.5 TiB then you can get 3 partitions
>>> of ~500 GiB each from it (e.g., sda1, sda2 and sda3). For a second
>>> disk, the same (e.g., sdb1, sdb2 and sdb3), and so on... or you can
>>> make smaller partitions. I would just care about the whole RAID
>>> volume size.
>> 
>> Sorry I don't get it.
>> 
>> Let's assume I have 4x 1.5 TB and 4x 2 TB.
> 
> x4 1.5 TiB → sda, sdb, sdc, sdd
> x2 2 TiB → sde, sdf
> 
>> I divide each drive into 500 GB partitions. So three per 1.5 TB and
>> four per 2 TB disk.
> 
> 1.5 TiB hard disks:
> 
> sda1, sda2, sda3
> sdb1, sdb2, sdb3
> sdc1, sdc2, sdc3
> sdd1, sdd2, sdd3
> 
> 2 TiB hard disks:
> 
> sde1, sde2, sde3, sde4
> sdf1, sdf2, sdf3, sdf4
> 
>> Then I put the 28 partitions (4x3 + 4x4) in a raid 6?
> 
> Then you can pair/mix the partitions as you prefer (when using mdadm/
> linux raid, I mean). The "layout" (number of disks) and the "raid level"
> are up to you; I don't know what your main goal is.

The machine should be a NAS to store backups and serve multimedia content.

My point was that it doesn't make sense to me to put several partitions 
from the same hdd into the same RAID array.
Let's assume one of the 1.5 TB disks fails. Then I'd lose three 
partitions, but a RAID 6 can only withstand losing two (I hope it's 
clear what I mean).

So maybe I would have to make 500 GB partitions on each disk and put one 
partition per disk into separate RAID 6 arrays, e.g.:
md1: sda1, sdb1, sdc1, sdd1, sde1, sdf1
md2: sda2, sdb2, sdc2, sdd2, sde2, sdf2
md3: sda3, sdb3, sdc3, sdd3, sde3, sdf3
md4: sde4, sdf4
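
A first sketch of the commands, in case I'm thinking about this the right 
way (untested; partition boundaries and device names just follow the 
layout above, md2/md3 would be built the same way from the *2/*3 
partitions):

  # three ~500 GB partitions on each 1.5 TB disk (shown for sda)
  parted -s /dev/sda mklabel gpt \
    mkpart primary 1MiB 500GB \
    mkpart primary 500GB 1000GB \
    mkpart primary 1000GB 100%

  # md1 from the first partition of each of the six data disks
  mdadm --create /dev/md1 --level=6 --raid-devices=6 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1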

md4 could only be used if I add another partition from another disk. I 
could use 2/3 of the capacity of md1-md3 and nothing from md4. In total 
8/20 of the partitions are spent on parity or sit unused (2 parity 
partitions in each of md1-md3 plus the 2 unusable md4 partitions).

When I now add another 1.5 TB disk, the available space of md1-md3 
increases by 500 GB each, so only 8/23 of the space is used for 
parity.

If I add a 2 TB disk *instead* of the 1.5 TB disk from before, I can add 
a partition to each of the four arrays and md4 becomes useful. This means 
I have even less parity overhead than before (8/24).

If I instead add a 3 TB disk, I will create md5 and md6. This lets the 
relative parity overhead increase to 10/26. But when I then add another 
3 TB disk, the overhead decreases to 10/32, which is already less than 
8/20.
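
If I understand the mdadm manpage correctly, extending an existing array 
onto a new disk would go roughly like this (a sketch only; sdg stands for 
whatever name the new disk gets, already partitioned like the others):

  # add the new partition as a spare, then grow the array onto it
  mdadm --add /dev/md1 /dev/sdg1
  mdadm --grow /dev/md1 --raid-devices=7
  # repeat for md2/md3 (and md4 etc. if the disk is big enough),
  # then grow whatever sits on top (LVM PV / filesystem)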


If in the initial setup one of the 1.5 TB disks fails (e.g. sda) and I 
replace it with a 2 TB disk, I will get this:
md1: sda1, sdb1, sdc1, sdd1, sde1, sdf1
md2: sda2, sdb2, sdc2, sdd2, sde2, sdf2
md3: sda3, sdb3, sdc3, sdd3, sde3, sdf3
md4: sda4, sde4, sdf4
This means that I now have 8/21 parity overhead.
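
The replacement itself would then presumably be something like this 
(rough sketch, assuming the new disk shows up as sda again, as in the 
listing above, and has been partitioned first):

  # for each array the old disk was in: drop the dead partition...
  mdadm /dev/md1 --fail /dev/sda1 --remove /dev/sda1
  # ...and add the partition from the new disk
  mdadm /dev/md1 --add /dev/sda1
  # same for md2/md3, plus md4 gets the extra sda4; watch the rebuild:
  cat /proc/mdstat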


Since I already start with 2x 500 GB disks for the OS, 4x 1.5 TB and 4x 2 
TB, I think this could be a good solution. I will probably add 3 TB disks 
when I need more space or a disk fails, creating md5 and md6 :-)

Or is there something I'm missing?


And when I put all these arrays into a single LVM volume group and one 
array goes down, will I lose *all* the data?
But that's not likely to happen because of the double parity, right?
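
For the LVM step I picture something like this (just a sketch; the volume 
group and LV names are made up, and md4 would only be added once it is 
usable):

  pvcreate /dev/md1 /dev/md2 /dev/md3
  vgcreate nas /dev/md1 /dev/md2 /dev/md3
  lvcreate -l 100%FREE -n storage nas
  mkfs.jfs /dev/nas/storage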


> What I usually do is have a RAID 1 array for holding the operating
> system installation and a RAID 5 array (my raid controller does not
> support raid 6) for holding data. But my numbers are very conservative
> (this was a 2005 setup featuring 2x 200 GiB SATA disks in RAID 1 and 4x
> SATA disks of 400 GiB, which gives a 1.2 TiB volume).

You have drives of the same size in your raid.

Btw: Have you ever had to replace a disk? When I had the false positive I 
wanted to replace a Samsung disk with another one of the same Samsung 
type, and it had slightly fewer sectors. I used JFS on the md device, so 
I was very happy that I could keep the original drive and didn't have to 
magically shrink the JFS :-)


> Yet, despite the ridiculous size of the RAID 5 volume, when the array
> goes down it takes up to *half of a business day* to rebuild, that's why
> I wanted to note that managing big raid volumes can make things worse
> :-/

My 6 TB raid takes more than a day :-/
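
Watching the rebuild and, if needed, raising the md resync speed limit 
can be done with the standard kernel tunables (the value below is only 
an example):

  cat /proc/mdstat
  # minimum rebuild speed in KB/s per device
  echo 50000 > /proc/sys/dev/raid/speed_limit_min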


Best regards
Ramon

