Re: Big filesystems.
On Tue, 2005-02-15 at 14:59 -0500, Adam Skutt wrote:
> Kyle Rose wrote:
> > What bothers me is file deletion time. Anyone have any clue why
> > ReiserFS takes so long to delete files, and why the delete operation
> > evidently blocks all other FS operations? It seems that ReiserFS
> > should log the delete, and then have a kernel thread handling cleanup
> > in the background in such a way that it doesn't cause other operations
> > to block.
> It's because ReiserV3 has to rebalance two b-trees: one for the
> metadata, and one for the actual data itself. This is slow, especially
> on large directories, I'd imagine.
Surely you aren't implying that Reiser uses anything as pedestrian as a
b-tree! Why, Reiser's tree format is so novel, so utterly perfect, that
no human could have ever thought of it. I understand their patent
applications are sailing through the approval process, greeted by nothing
but disbelief and Hosannas.
Right, so in my experience XFS destroys all other filesystems on
metadata operations. XFS also formats and mounts the volume much more
quickly than Reiser or ext2/3. XFS also supports very large volumes,
very large files, and more directory entries than any of the others. I
used to have a giant test tar archive that only XFS could have extracted
in my expected lifetime.
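For what it's worth, this kind of metadata comparison is easy to reproduce
yourself. A rough sketch (file count and naming are arbitrary; run it once
on each filesystem you care about and compare the times):

```shell
#!/bin/bash
# Crude metadata benchmark: mass-create and then mass-delete many small
# files in a scratch directory on the filesystem under test.
# N is arbitrary; raise it until the timings stop being noise.
N=10000
dir=$(mktemp -d ./mdbench.XXXXXX)

echo "create:"
time (
    i=0
    while [ "$i" -lt "$N" ]; do
        : > "$dir/f$i"        # create an empty file (pure metadata op)
        i=$((i + 1))
    done
)

echo "delete:"
time rm -rf "$dir"
```

This only exercises create/unlink in one directory, of course; a fairer
test would also mix in renames, stats, and deep directory trees, but even
this crude version shows the gap on large directories.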
Unfortunately XFS also repeatedly swallowed a number of my volumes. I
found it to be more unstable than any filesystem I have used (save
VxFS). When using XFS, one must not read from the underlying device, or
one risks corruption. This leads one to believe that using XFS on LVM,
md, or enbd would be somewhat risky. xfs_repair is sometimes at a loss to
recover anything at all in these situations, even after running for a very
long time.
That said, XFS is still your best choice if you've hit the hard limits
of the other filesystems.