
Re: Single root filesystem evilness decreasing in 2010? (on workstations)



Clive McBarton wrote:
> I find the concept very interesting in principle, although I am not sure
> I can recommend it. In some respects single file systems are more
> acceptable nowadays. In others they are not. Here are my $.02:

Thank you.

[...]
> You trust ext4, and so does Ubuntu. Others (including most distros,
> including Debian) do not.

I'm sorry if I should know this, but is that a stated position or just the general fear around delayed allocation? I'd say I only trust it for its own integrity management, not that of my data; I don't think anyone should expect that from a filesystem. That is, to my knowledge, what databases are for. Other than that, each application should take the necessary steps to make sure its files are correctly flushed to disk (fsync(), fdatasync(), ...).

Anyway, one can still disable some of its features; I hope ext4 will be considered mature enough by everyone by the time Squeeze is frozen, because some of those features just feel necessary.

[...]
>> * Specific mount options
>> mount(8) --bind won't allow me to set
>> specific options to the remounted tree; I wonder if this limitation can
>> possibly be lifted.
> I have not heard of any way around it, and since you find it annoying,
> that speaks against your single filesystem plan.

Yep; but that doesn't seem right, I don't see why it shouldn't be possible.
Can somebody recommend where I could take this discussion? The kernel lists? I'm not sure.
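For the record, here is roughly the kind of thing I mean (just a sketch, the paths are made up; as far as I know only the read-only flag can be forced onto a bind mount after the fact, on recent kernels):

    # flags passed at bind time are silently ignored:
    mount --bind -o noexec,nodev /home /srv/jail/home
    # only read-only can reportedly be applied with a second remount (>= 2.6.26):
    mount -o remount,ro,bind /srv/jail/home

Anything beyond ro per bind mount (noexec, nodev, noatime, ...) is exactly the limitation I'd like to see lifted.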

[...]
> But you backup /home and the rest separately? Should.

Sure (but at the filesystem level, not the whole volume byte by byte).

>> * Fragmentation optimization
> What's "Fragmentation"? This is Unix ;) But seriously, unless the
> difference is really measurable I wouldn't care.

Yes, you're right, especially with delayed allocation.

>> What's funny is that the physical extents now get fragmented, there's
>> just no way around it - and I believe that to this date, LVM2's
>> contiguous policy doesn't allow for defragmentation when it's stuck.
> Should it? Is there any noticeable impact? Hard evidence? Benchmarks?

It would be possible to do, so I guess it should, yes; but since it certainly should *not* be a priority for them, well, I'd forget about it.
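Checking how badly an LV is actually split up is easy enough, for what it's worth (a sketch; "vg0" and "home" are made-up names):

    # how many physical segments each LV is split into, and on which devices:
    lvs -o lv_name,seg_count,devices vg0
    # the allocation policy can be set per LV, e.g. back to contiguous:
    lvchange --alloc contiguous vg0/home

The policy only affects future allocations, though; it won't move the extents that are already scattered, which is precisely the missing piece.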

[...]
> If it's under 1%, ignore it.

I hate myself for not keeping track of these when I see them, but I actually managed to dig up a benchmark, yes. It showed a greater hit than that (I won't brag), but when you think about it, you'd really have to torture the filesystem to see it. The article was quite old and seemed somewhat arbitrary anyway; I wouldn't trust it much.

>> * Block/Volume level operations (dm-crypt, backup, ...)
>> Do you know of any good benchmark of the main cryptographic virtual
>> filesystems?
> Ignore this issue, CPUs are much faster than needed for this.

Actually, with a fancy RAID array and enough disks, you can reach throughput that will stress an older CPU, especially if it also has to manage the array in software. The load just went up to 1.20 here with a simple sequential write (and that CPU is not *that* old, an Athlon 64 X2 3800+). I think there's still some way to go before crypto is completely transparent performance-wise, and the CPU is the main player here.
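Something along these lines is enough to see it (a sketch of the kind of test I mean; the device names are examples, and writing to the mapping destroys whatever is on it):

    # set up an encrypted mapping on top of the array (example devices):
    cryptsetup luksFormat /dev/md0
    cryptsetup luksOpen /dev/md0 cryptbench
    # sequential write through dm-crypt, flushed before dd reports:
    dd if=/dev/zero of=/dev/mapper/cryptbench bs=1M count=4096 conv=fdatasync
    # watch the CPU cost in parallel with top(1) or vmstat(8)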

>> * Special block sizes for specific trees
>> I found a maildir with a 1k block size was more convenient than the
>> current 4k default
> What's the advantage? Hardly size, unless you have more than 10^8 mails.

Well, since I don't need raw speed to read my mail, and the problem doesn't seem that hard to solve, I'd prefer to waste space on something other than half-empty blocks. So yes, it's to gain space, but more in a "why not?" way.
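Setting it up costs nothing anyway (a sketch; the logical volume name is made up):

    # 1 KiB blocks for a dedicated mail volume:
    mkfs.ext4 -b 1024 -L mail /dev/vg0/mail
    # rough idea of the slack currently lost to 4 KiB blocks in a maildir:
    find ~/Maildir -type f -printf '%s\n' \
        | awk '{ w += (4096 - $1 % 4096) % 4096 } END { print w/1024/1024 " MiB" }'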

>> * (Mad?) positioning optimizations
>> It's often said some sectors on some cylinders get better performance,
> HDDs nowadays only use logical sector numbers. The old h/t/s
> 3D-interface is just there for compatibility and cannot access the true
> h/t/s data of the HDD. Such optimization cannot work.

I found this as an example:
http://www.tomshardware.co.uk/forum/250867-14-wd10eavs-disk-performance

And that one is recent. Apparently sequential reads at the beginning of the disk can be twice as fast as at the end? I'm not sure whether I should be surprised (the outer tracks do pack more sectors per revolution at the same rotational speed, and the low LBAs usually sit on the outer edge), but I am. As I just answered myself, that's another argument against a single root filesystem.
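It's easy to check on any given disk, by the way (a sketch; /dev/sda and the offsets are examples, to be adjusted to the actual disk size):

    # sequential read near the start of the disk (usually the outer tracks):
    dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct
    # same thing near the end of a ~1 TB disk (inner tracks):
    dd if=/dev/sda of=/dev/null bs=1M count=1024 skip=950000 iflag=direct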

[...]
> If grub2 breaks, you need another tiny partition, so might as well make
> one now. The space loss won't hurt you.

I think everybody should keep a handy recovery live CD around anyway; a separate partition would only save you if the GRUB LVM/RAID modules break. If the core image breaks, it's of no help.
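And reinstalling it from a live CD isn't a big deal either (a sketch, assuming the root filesystem is already mounted on /mnt and the boot disk is /dev/sda):

    # make the kernel interfaces visible inside the chroot:
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    # rewrite the boot sector and the core image:
    chroot /mnt grub-install /dev/sda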

[...]
> Ignore swap, that's just small stuff, especially with 3GB. You could
> have 64GB and it would still be not that important. Put it on any
> partition or file you want.

For a workstation, yeah.
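And a swap file on the root filesystem is trivial to set up if the need ever comes up (a sketch; path and size are arbitrary):

    # a 3 GiB swap file on the root filesystem:
    dd if=/dev/zero of=/var/swapfile bs=1M count=3072
    chmod 600 /var/swapfile
    mkswap /var/swapfile
    swapon /var/swapfile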

> The rule is 1:2 BTW.

Different schools ;-).

>> Well, here it is; so, should I do it?
> If you feel like tinkering and sorting out problems, then yes. If you
> want to just get your computer running and never think about it again,
> then no.

I guess that's a perfect conclusion. Thank you again, it helped put the matter in perspective. Damn, now I'm arguing against it again.

-thib

