
Re: RAID questions/advise



On Wed, Sep 24, 2008 at 12:59:57PM -0400, Michael S. Peek wrote:
> Hi guys,
>
> I've had a couple of RAID boxes ticking away in the corner for years now  
> without a problem.  But now our needs have expanded, and I'm looking to  
> build replacements.  Big replacements.  And I consider myself to be  
> anything but an expert in the field, especially where mdadm is  
> concerned.  So I have a few questions to ask in hopes that someone out  
> there can help me out.

I haven't built anything of the size you're looking at, but I thought I
would share my experience.

>
> How large of an array can mdadm handle?

I have a 10TB RAID5 array, which is managed with LVM on top of md.
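As a rough sketch of that layout (device names, array size, and the
volume group name are only placeholders for your own setup):

    # create the md array (six member disks here, purely as an example)
    mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]1

    # put LVM on top so the space can be carved up and grown later
    pvcreate /dev/md0
    vgcreate bigvg /dev/md0
    lvcreate -L 2T -n data bigvg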

>
> If I use my hardware RAID cards in JBOD mode, how does the kernel handle  
> naming drives when there's more than 26 drives on the system?  (i.e.  
> what does it do when it reaches /dev/sdz and there are drives left to be  
> named?)

From memory it carries on to /dev/sdaa and so forth, but with udev you
can name them whatever you want. Plus, mdadm doesn't really care about
drive naming; it scans the devices for RAID superblock signatures.
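That scanning is what lets an array survive the disks being renumbered
between boots. A minimal illustration (the UUID shown is obviously made
up):

    # find arrays by their superblocks, regardless of what the kernel
    # happened to call the disks this boot
    mdadm --examine --scan
    # e.g. ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

    # assemble everything that was found
    mdadm --assemble --scan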

>
> From what I hear, ext3 can handle filesystems up to 32TB in size, but  
> has anyone actually done this?  Can anyone attest to how well it works?   
> Or is there another filesystem type that's better suited to large (12TB  
> - 32TB) filesystems?

Depending on what you want to put on there, I would suggest XFS (with a
UPS, since XFS does not cope well with sudden power loss). How you set
up a filesystem of this size really depends on what you are using it
for. For example, you could carve it up into several smaller filesystems
and mount them under one mount point, which gives you faster recovery
times if one of them ever needs checking or repair (see the sketch
below).
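A hedged example of that carve-it-up approach, reusing the placeholder
volume group from above (sizes and names are just illustrative):

    # one logical volume per sub-filesystem
    lvcreate -L 4T -n archive1 bigvg
    lvcreate -L 4T -n archive2 bigvg

    mkfs.xfs /dev/bigvg/archive1
    mkfs.xfs /dev/bigvg/archive2

    mkdir -p /data/archive1 /data/archive2
    mount /dev/bigvg/archive1 /data/archive1
    mount /dev/bigvg/archive2 /data/archive2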


>
> Finally, according to the mdadm FAQ, when a drive goes down:
>> 19. What should I do if a disk fails?
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>   Replace it as soon as possible:
>>         mdadm --remove /dev/md0 /dev/sda1
>>     halt
>>     <replace disk and start the machine>
>>     mdadm --add /dev/md0 /dev/sda1
>
> Since my OS drive will not be a part of the RAID (it'll have a mirrored  
> RAID of its own), I presume that the halt command won't be necessary.
> I assume that it would be perfectly reasonable of me to remove the drive  
> and replace it while the system is running?  I.e. mdadm can handle  
> running in degraded mode for the duration of the replacement/rebuild  
> process?  (This is a deal-breaker question -- if mdadm can't, then I'll  
> have to pursue other measures.  Hotswap drives will be up and running at  
> all times though, so I presume I can configure mdadm to make use of them  
> immediately upon detecting a drive failure.)

I have never had a problem with removing a drive on a live system, once
it has been failed out of the array. I believe the advice to halt the
system dates from when hot-swap drives (USB, eSATA, etc.) were not
common.
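On hot-swap hardware the whole replacement can be done online, along
these lines (again, device names are only placeholders):

    # mark the disk as failed, if the kernel has not already done so
    mdadm --fail /dev/md0 /dev/sdc1

    # drop it from the array, then physically swap the drive
    mdadm --remove /dev/md0 /dev/sdc1

    # partition the new disk the same way and add it back in
    mdadm --add /dev/md0 /dev/sdc1

    # watch the rebuild progress
    cat /proc/mdstat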

>
> My other option is to get a SAN/NAS of some type, but building machines  
> like this has proven to be very effective and cheap compared to  
> SAN/NAS'es, at least in the past.  I haven't checked recently though,  
> but any advice is welcome.

This all depends on what your aims/goals are. Does your server have the
PCI bandwidth to handle so many drives at once? How much money do you
want to spend? How fast do you want the filesystem to be - more spindles
or fatter drives? Etc., etc.

For something this large I would suggest RAID6 plus a hot spare, along
the lines of the sketch below.
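A minimal, assumed example (eleven member disks here: ten active plus
one spare, purely illustrative). The kernel rebuilds onto the spare
automatically when a member fails; the monitor is just there to mail you
about it:

    mdadm --create /dev/md0 --level=6 --raid-devices=10 --spare-devices=1 \
          /dev/sd[b-l]1

    # mail root when a drive drops out (Debian normally starts this
    # monitor for you from the init scripts)
    mdadm --monitor --scan --daemonise --mail=root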

>
> Pondering my options,
>
> Michael
>
>

-- 
"You're probably wondering why somebody who has been in politics is talking about Social Security. After all, it's been called the third rail of American politics. You grab a hold of it, and you get electrified."

	- George W. Bush
03/04/2005
South Bend, IN


