
Re: Single root filesystem evilness decreasing in 2010? (on workstations)


I find the concept very interesting in principle, although I am not sure
I can recommend it. In some respects a single filesystem is more
acceptable nowadays; in others it is not. Here are my $.02:

> * Filesystem corruption containment
> I use ext4, and I've read enough about it to trust its developers for my
> workstations.  I don't think that's a risky bet.  
You trust ext4, and so does Ubuntu. Others (most distros, Debian
included) do not.

> In fact, I believe
> this old statement dates back to when we hadn't journals, in the ext2 days.

It does not date back at all. A filesystem check on ext3 can still take
hours on a perfectly clean filesystem, and the ratio of read speed to
capacity shrinks with every new HDD generation, converging to zero - so
full checks only get longer.
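To put rough numbers on that ratio (the capacities and speeds below are
illustrative ballpark figures, not measurements):

```shell
# Time for one full sequential pass over the drive - the lower bound
# for any whole-disk fsck. Sizes in MB, speeds in MB/s (ballpark only).
echo "120 GB @  50 MB/s: $(( 120000 / 50 / 60 )) min"
echo "2 TB   @ 120 MB/s: $(( 2000000 / 120 / 60 )) min"
```

Capacity went up ~17x while throughput barely doubled, hence the hours.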

> * Free space issues
You are right on this one: a single filesystem gives a workstation the
fewest free-space headaches.

> * Specific mount options
> mount(8) --bind won't allow me to set
> specific options to the remounted tree, I wonder if this limitation can
> possibly be lifted. 
I have not heard of any way around it, and since you find it annoying,
that speaks against your single-filesystem plan.
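Correction to myself, actually: for read-only specifically there is one
narrow workaround - a second remount can flip a bind mount to ro on
reasonably recent kernels (2.6.26+, if I remember correctly), even though
arbitrary options still cannot be set per bind. A throwaway sketch, run in
a private mount namespace via util-linux's unshare so it needs no root and
touches no real mounts:

```shell
unshare -rm sh -c '
  mkdir -p /tmp/bind-src /tmp/bind-dst
  mount --bind /tmp/bind-src /tmp/bind-dst
  mount -o remount,bind,ro /tmp/bind-dst   # ro sticks; most other options will not
  touch /tmp/bind-dst/probe 2>/dev/null && echo writable || echo read-only
'
```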

> * System software replacement
> For a workstation, I don't need a fast system recovery mechanism, and I
> want to minimize my backup sizes. 
But do you back up /home and the rest separately? You should.

> * Fragmentation optimization
What's "Fragmentation"? This is Unix ;) But seriously, unless the
difference is really measurable I wouldn't care.

> What's funny is that the physical extents now get fragmented, there's
> just no way around it - and I believe that to this date, LVM2's
> contiguous policy doesn't allow for defragmentation when it's stuck. 
Should it? Is there any noticeable impact? Hard evidence? Benchmarks?

> I also know the performance hit is minimal, the PE
> sizes can be and are typically quite big, but..  it's still there and
> should be avoided if possible.
If it's under 1%, ignore it.

> there's an online
> defragmenter for ext4 I can afford to run regularly now.
I have not heard of fragmentation being a problem even with ext2.

> * Metadata (i-node) table sizes
Ignore this, 1 TB drive or not. Unless you actually run out of inodes,
it won't matter.
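To justify the "ignore it": going by mkfs.ext4's defaults as I remember
them (one 256-byte inode per 16384 bytes of disk - check your
mke2fs.conf), the inode tables cost a fixed share of the disk no matter
how big it is:

```shell
# 256-byte inodes, one per 16 KiB of capacity (assumed ext4 defaults)
awk 'BEGIN { printf "inode tables: %.1f%% of the filesystem\n", 100 * 256 / 16384 }'
```

A percent and a half either way is not worth partitioning over.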

> * Block/Volume level operations (dm-crypt, backup, ...)
> you know of any good benchmark of the main cryptographic virtual
> filesystems?  
Ignore this issue, CPUs are much faster than needed for this.

> * Special block sizes for specific trees
> I found a maildir with a 1k block size was more convenient than the
> current 4k default
What's the advantage? Hardly size, unless you have more than 10^8 mails.
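The only size argument I can see is slack space - each file wastes about
half a block on average. A back-of-envelope sketch (the mail count is a
made-up example):

```shell
# Average slack per file is half a block; total it for a hypothetical maildir.
for block in 1024 4096; do
  awk -v b="$block" 'BEGIN {
    n = 100000                       # hypothetical number of mails
    printf "%4d-byte blocks: ~%d MB of slack across %d mails\n", b, n * b / 2 / 1e6, n
  }'
done
```

A couple hundred MB at most on a modern drive - hence "hardly size".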

> * (Mad?) positioning optimizations
> It's often said some sectors on some cylinders get better performance,
HDDs nowadays only expose logical sector numbers. The old
cylinder/head/sector interface exists purely for compatibility and does
not reflect the drive's real geometry, so such optimization cannot work.

> * Boot obligations
>  I guess
> you'd still need a separate boot partition if you're stuck with another
> boot loader.  
If grub2 breaks, you need another tiny partition, so might as well make
one now. The space loss won't hurt you.

> * Swap special-case
> I'm just OK with my three gigs.  The 1:1
> mem:swap rule has got to be wasting space here, hasn't it?
Ignore swap, that's just small stuff, especially at 3 GB. You could
have 64 GB and it still wouldn't matter much. Put it on any partition
or in any file you want.
The rule is 1:2, BTW.

> Well, here it is;  so, should I do it?
If you feel like tinkering and sorting out problems, then yes. If you
want to just get your computer running and never think about it again,
then no.

