
Re: Storage server



On 9/7/2012 12:42 PM, Dan Ritter wrote:

> You can put cheap SATA disks in, instead of expensive SAS disks.
> The performance may not be as good, but I suspect you are
> looking at sheer capacity rather than IOPS.

Stick with enterprise-quality SATA disks.  Throwing "drive of the week"
consumer models, e.g. the WD20EARS, into the chassis simply causes
unnecessary heartache down the road.

> Now, the next thing: I know it's tempting to make a single
> filesystem over all these disks. Don't. The fsck times will be
> horrendous. Make filesystems which are the size you need, plus a
> little extra. It's rare to actually need a single gigantic fs.

What?  Are you talking about crash-recovery boot-time "fsck"?  With any
modern journaled FS, log recovery is nearly instantaneous.  If you're
talking about an actual structure check, XFS is pretty quick regardless
of inode count, as the check is done in parallel across allocation
groups.  I can't speak to EXTx as I don't use them.  For a
multi-terabyte backup server, XFS is the only way to go anyway.  Using
XFS also allows near-unlimited growth without requiring array reshapes
or LVM, while maintaining striped write alignment and thus maintaining
performance.
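
To make that concrete, here's a rough sketch of the workflow.  The
device name, mount point, and stripe geometry (/dev/md0, /srv/backup,
64k chunks across 6 data spindles) are hypothetical placeholders, not
anything from this thread; substitute your own:

  # Align XFS to the array's stripe geometry at creation time
  # (su = per-disk chunk size, sw = number of data spindles):
  mkfs.xfs -d su=64k,sw=6 /dev/md0

  # After a crash, journal replay happens automatically at mount
  # time; there is no lengthy fsck pass:
  mount /dev/md0 /srv/backup

  # After enlarging the underlying device, grow the filesystem in
  # place; the stripe alignment set at mkfs time is retained:
  xfs_growfs /srv/backup

  # A full structure check runs offline and works the allocation
  # groups in parallel:
  umount /srv/backup
  xfs_repair /dev/md0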

There are hundreds of 30TB+ and dozens of 100TB+ XFS filesystems in
production today, and I know of one over 300TB and one over 500TB,
attached to NASA's two archival storage servers.

When using correctly architected, reliable hardware, there's no reason
one can't use a single 500TB XFS filesystem.

-- 
Stan

