
Re: Storage server



On Friday, 7 September 2012, Stan Hoeppner wrote:
> On 9/7/2012 12:42 PM, Dan Ritter wrote:
[…]
> > Now, the next thing: I know it's tempting to make a single
> > filesystem over all these disks. Don't. The fsck times will be
> > horrendous. Make filesystems which are the size you need, plus a
> > little extra. It's rare to actually need a single gigantic fs.
> 
> What?  Are you talking about crash-recovery boot-time "fsck"?  With
> any modern journaled FS, log recovery is instantaneous.  If you're
> talking about an actual structure check, XFS is pretty quick
> regardless of inode count, as the check is done in parallel.  I can't
> speak to EXTx as I don't use them.  For a multi-terabyte backup
> server, XFS is the only way to go anyway.  Using XFS also allows
> infinite growth without requiring array reshapes or LVM, while
> maintaining striped write alignment and thus performance.
> 
> There are hundreds of 30TB+ and dozens of 100TB+ XFS filesystems in
> production today, and I know of one over 300TB and one over 500TB,
> attached to NASA's two archival storage servers.
> 
> When using correctly architected reliable hardware there's no reason
> one can't use a single 500TB XFS filesystem.
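For illustration, a minimal sketch of the growth path described above,
assuming a hypothetical 8-disk stripe exposed as /dev/md0 and mounted
at /srv/storage (device names and geometry are made up):

  # mkfs.xfs -d su=64k,sw=8 /dev/md0   # align writes to the stripe geometry
  # mount /dev/md0 /srv/storage
  # ... later, after the underlying block device has been enlarged ...
  # xfs_growfs /srv/storage            # grow the mounted filesystem online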

I assume that such correctly architected hardware also contains a lot of 
RAM, so that xfs_repair can be run on the filesystem in case of any 
corruption.

I know the RAM usage of xfs_repair has been reduced, but such a 500 TiB 
XFS filesystem can still contain a lot of inodes.
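
As a rough example (device name hypothetical, and the filesystem must be 
unmounted first), one can at least bound both the damage assessment and 
the memory use:

  # xfs_repair -n /dev/md0        # dry run: report problems, change nothing
  # xfs_repair -m 2048 /dev/md0   # cap memory use at roughly 2 GiB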

But for XFS filesystems of up to 10 TiB I wouldn't worry too much about 
those issues.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

