Re: Questions about RAID 6
> On 4/30/2010 6:39 PM, Ron Johnson wrote:
>> On 04/26/2010 09:29 AM, Tim Clewlow wrote:
>>> Hi there,
>>> I'm getting ready to build a RAID 6 with 4 x 2TB drives to start,
>> Since two of the drives (yes, I know the parity is striped across
>> the drives, but "two drives" is still the effect) are used by
>> parity, RAID 6 with 4 drives doesn't seem rational.
> We've taken OP to task already for this, but I guess it bears
> repeating. Use multiple HW controllers, and at least 7-8 drives, I
> believe was the consensus, given that SW RAID 6 is a performance
> loser and losing a controller during a rebuild is a real
> ruin-your-week kind of moment.
> But while some of us were skeptical about just how bad the
> performance of RAID 5 or 6 really is and wanted citation of
> references, more of us just questioned the perceived frugality.
> With four drives, wouldn't RAID 10 be a better use of resources,
> since you can migrate to bigger setups later? And there we were
> content to let it lie, until...
>>> but the intention is to add more drives as storage requirements
>>> grow. My research/googling suggests ext3 supports 16TB volumes if
>>> the block size is 4KB.
>> Why ext3? My kids would graduate college before the fsck finished.
>> ext4 or xfs are the way to go.
> I have ceased to have an opinion on this, having been taken to
> task, myself, about it. I believe the discussion degenerated into
> banter over the general suitability of XFS, but I may be wrong
> about that.
> Seriously, ext4 is not suitable if you anticipate possible boot
> problems, unless you are experienced at these things. The same is
> true of XFS. If you *are* experienced, then more power to you.
> Although, I would have assumed a very experienced person would have
> no need to ask the question.
> Someone pointed out what I have come to regard as the best solution,
> that is to make /boot and / (root) and the usual suspects ext3 for
> safety, and use ext4 or XFS or even btrfs for the data directories.
> (Unless OP were talking strictly about the data drives to begin
> with, a possibility I admit I may have overlooked.)
> Have I summarized adequately?
First off, thank you all for the valuable information and experience
you've shared. For clarity, the setup has always been intended to
be: one system/application drive, and one array made of separate
drives; the array protects data, nothing else. The idea is for them
to be two clearly distinct entities, with very different levels of
protection, because the system and apps can be quite quickly
recreated if lost; the data cannot.
More clarity: the data is currently touching 4TB and expected to
exceed that very soon, so I'll be using at least 5 drives, probably
6, in the near future. Yes, I know RAID 6 on 4 drives is not frugal;
I'm just planning ahead.
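To show the arithmetic behind "planning ahead" (a rough estimate
only, ignoring filesystem and metadata overhead), usable capacity
works out like this:

```shell
# Rough usable capacity: RAID 6 keeps (n - 2) drives of data,
# RAID 10 keeps n/2. Numbers match the 4 x 2TB starting point.
DRIVES=4
SIZE_TB=2
echo "RAID 6 usable:  $(( (DRIVES - 2) * SIZE_TB )) TB"
echo "RAID 10 usable: $(( DRIVES / 2 * SIZE_TB )) TB"
# With 6 drives, RAID 6 gives (6 - 2) * 2 = 8 TB vs RAID 10's 6 TB,
# which is where the RAID 6 choice starts to pay off.
```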
My reluctance to use ext4 / xfs has been due to claims that long
cache-before-write times are dangerous in the event of a kernel
lockup / power outage. There are also reports (albeit perhaps
somewhat dated) that ext4/xfs still have a few small but important
bugs to be ironed out - I'd be very happy to hear from people whose
experience demonstrates this is no longer true. My preference would
be ext4 over xfs, as I believe (just my opinion) it is the most
likely successor to ext3.
I have been wanting to know if ext3 can handle a >16TB fs. I now
know that delayed allocation / writes can be turned off in ext4
(among other tuning options I'm looking at), and with ext4, fs sizes
are no longer a question. So I'm really hoping ext4 is the way I can
go.
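As a sketch of the tuning I mean (the device and mount point are
placeholder names, not my actual setup), delayed allocation can be
switched off per-mount:

```shell
# Hypothetical mount of the data array as ext4 with delayed
# allocation disabled (nodelalloc) and write barriers explicitly on;
# /dev/md0 and /srv/data are placeholders.
mount -t ext4 -o nodelalloc,barrier=1 /dev/md0 /srv/data
```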
I'm also hoping that a CPU/motherboard with suitable grunt and FSB
bandwidth could reduce the performance problems with software
RAID 6. If I'm seriously mistaken then I'd love to know beforehand.
My reluctance to use hw raid is that it seems like adding one more
point of possible failure, but I may well be paranoid in dismissing
it for that reason.
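For concreteness, the software-RAID plan I have in mind looks
roughly like this (device names are placeholders; a sketch, not a
tested recipe):

```shell
# Create a 4-drive software RAID 6 (sdb..sde are placeholder devices).
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
# Later, when a fifth drive arrives: add it, then reshape the array.
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
```

The grow step is exactly the "add more drives as storage
requirements grow" part of the plan, which hardware controllers
don't always make this easy.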