
Single root filesystem evilness decreasing in 2010? (on workstations)



Hello,

Usually I never ask myself whether I should organize my disks into separate filesystems or not. I just think "how?" and go with a cool layout without looking back - LVM lets me correct it easily later anyway. I would even say I believed a single root filesystem was "a first sign" (you know what I mean ;-).
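
To give an idea of what I mean by "correct it easily", growing a logical volume and its ext4 filesystem online is only a couple of commands; a minimal sketch, with hypothetical VG/LV names:

    # Grow the LV by 10 GiB, then grow the ext4 filesystem to fill it
    # (ext4 supports online growth while mounted).
    lvextend -L +10G /dev/vg0/home
    resize2fs /dev/vg0/home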

But now I'm about to try a new setup for a Squeeze/Sid *workstation*, and I somehow feel I could be a little more open-minded. I'd like some input on what I covered and, more importantly, on what I may have missed. Maybe someone can point me to an actually useful list of advantages of "partitioning"? I find a lot of BS around the net; people often miss the point of it.

So, what are the advantages I see, and why don't they matter to me anymore?


* Filesystem corruption containment

I use ext4, and I've read enough about it to trust its developers for my workstations. I don't think that's a risky bet. In fact, I believe this old argument dates back to when we didn't have journals, in the ext2 days.


* Free space issues

Since I'm the only one who uses this machine, I should know if something is likely to go wrong and eat up my entire filesystem (which is quite big for a workstation). And yes, I still monitor free space constantly.
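
For what it's worth, the monitoring itself is nothing fancy; a minimal sketch of the kind of check I'd run from cron (the 90% threshold is just an example):

    #!/bin/sh
    # Warn about any filesystem above the threshold;
    # cron mails whatever is written to stdout.
    THRESHOLD=90
    df -P -x tmpfs -x devtmpfs | awk -v t="$THRESHOLD" '
        NR > 1 { use = $5; sub("%", "", use);
                 if (use + 0 > t) print "low space on " $6 ": " $5 " used" }'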


* Specific mount options

According to the Lenny manpage, mount(8) --bind won't let me set specific options on the remounted tree; I wonder if this limitation can be lifted. If not, I think a dummy virtual filesystem would do the trick, but that seems kludgy, doesn't it? Pointers?
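
From what I've read, newer util-linux/kernel combinations at least document a two-step workaround: create the bind mount, then change its flags with a remount. A sketch, with hypothetical paths (read-only is the well-documented case; other flags may depend on the kernel version):

    # The bind mount inherits the original mount's options...
    mount --bind /srv/data /mnt/data-ro
    # ...but a second, remounting step can turn this particular mount
    # point read-only (older mount versions may want both paths
    # repeated on this line).
    mount -o remount,bind,ro /mnt/data-ro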

I guess I could live without it, but I would actually find this quite annoying.


* System software replacement

Easier to reinstall the system if it lives on separate volumes from the configuration and data? Come on..

For a workstation, I don't need a fast system recovery mechanism, and I want to minimize my backup sizes. I'd rather save a list of package selections than a big archive of binaries.
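
On Debian that list is cheap to save and replay with the standard dpkg tools, nothing exotic:

    # Save the package selections (a tiny text file, versus gigabytes of binaries).
    dpkg --get-selections > package-selections.txt

    # On a fresh install, feed the list back and let apt pull everything in.
    dpkg --set-selections < package-selections.txt
    apt-get dselect-upgrade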


* Fragmentation optimization

One of the most obvious advantages, and usually my main motivation for separating logs, spools, miscellaneous system variable data, temporary directories, personal data, static software and configuration files. Sometimes I think I overdo it, at least for a workstation, and it becomes hard to guess all these sizes in advance. I hate a wasted gigabyte, so in the end I often have to resort to LVM's black magic anyway, despite all the careful planning.

What's funny is that the physical extents then get fragmented instead; there's just no way around it - and I believe that, to date, LVM2's contiguous allocation policy doesn't allow defragmentation once it's stuck. I'm not sure I mind that much; I just don't feel it's LVM's job to worry about that. I also know the performance hit is minimal, since PE sizes can be (and typically are) quite big, but.. it's still there and should be avoided if possible.

How? Well.. why should I trust LVM more than ext4 with the fragmentation of my data? Delayed allocation is very effective, and there's an online defragmenter for ext4 that I can afford to run regularly now.
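
For reference, the ext4 online defragmenter is e4defrag (shipped with e2fsprogs); a sketch of the kind of run I have in mind, with the path just an example:

    # Report a fragmentation score first, without touching anything.
    e4defrag -c /home

    # Then defragment online, while the filesystem stays mounted.
    e4defrag /home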


* Metadata (i-node) table sizes

I'm not sure at all about this one, since I've never read anything about it in the context of ext* filesystems, and I must admit I'm not familiar with the exact architecture of these tables. But I figure it could be true in some way: a big table can take longer to search (depending on the method), so splitting trees across different filesystems, and thus splitting the metadata across different tables, should speed up access.

Sorry if I mislead anyone in case this problem isn't even relevant for ext4; I just know it to be true for some filesystems. At least it was.

Anyway, I don't have the time to measure the impact of this, so if anybody knows of some numbers somewhere, that would be great. For now, I consider it negligible. (For the record, the volume will be "quite big" - 1 TB+.)
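
If it did turn out to matter, the knob I'd reach for first isn't more filesystems but the inode density at mkfs time; a sketch, with a hypothetical device name and a bytes-per-inode ratio picked for a volume that mostly holds large files:

    # See how many inodes an existing filesystem actually uses.
    df -i /home

    # Allocate one inode per 64 KiB instead of the 16 KiB default,
    # which shrinks the inode tables considerably on a 1 TB+ volume.
    mkfs.ext4 -i 65536 /dev/vg0/big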


* Block/Volume level operations (dm-crypt, backup, ...)

Encryption at the block level (with LUKS) in particular should beat any implementation at the filesystem level. I don't have any numbers to back that up, however (although I remember seeing some).
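
For reference, this is the kind of per-volume setup I have in mind - LUKS on a dedicated LV with ext4 on top; a sketch only, with hypothetical names:

    # Turn a dedicated LV into a LUKS container, open it, and format it.
    cryptsetup luksFormat /dev/vg0/private
    cryptsetup luksOpen /dev/vg0/private private_crypt
    mkfs.ext4 /dev/mapper/private_crypt
    mount /dev/mapper/private_crypt /mnt/private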

I guess I can spare a few more CPU cycles on what I want to encrypt; do you know of any good benchmarks of the main cryptographic virtual filesystems? I suppose eCryptfs must be a bit ahead of the FUSE-based projects, since it works in the kernel.

As said earlier, I don't need a fast backup solution. I already prefer smarter filesystem-based backup systems in general.


* Special block sizes for specific trees

I found that a 1k block size was more convenient for a maildir than the current 4k default, for example. The solution is simple: "use a database".
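
(For completeness, if I did keep a dedicated maildir filesystem instead, the block size is just an mkfs-time option; a minimal sketch, device name hypothetical:)

    # 1 KiB blocks waste far less space on thousands of tiny mail files
    # than the usual 4 KiB default.
    mkfs.ext4 -b 1024 /dev/vg0/mail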

Any recommendations for a NoSQL DB for this? I'll switch to mutt. BTW, I like the simplicity of not having to export messages explicitly, so a readable virtual filesystem interface to the DB is always welcome. Am I overdoing it? Maybe a simple mounted tarball would suffice? Nah.. that doesn't sound right.

Oh, and no, I don't really like mboxes. You guessed it?


* (Mad?) positioning optimizations

It's often said that some sectors on some cylinders get better performance, so people sometimes carefully choose the position of critical trees by placing them on specific filesystems on specific volumes.

Yeah..

Well, since I still want maximum performance, I'd love to see numbers on a *recent* hard disk proving this, if somebody has some, but until then, I'll forget about it.

LVM won't theoretically guarantee the physical position of the logical volumes anyway, and I'll need LVM if I do any partitioning.
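
That said, LVM will at least show (and even let you pin) where the extents actually land, if someone really wants to play that game; a sketch with hypothetical names:

    # Show which physical extent ranges each logical volume occupies.
    lvs -o +seg_pe_ranges vg0

    # Explicitly allocate a new LV from the first extents of a given PV.
    lvcreate -n fastlv -l 256 vg0 /dev/sda2:0-255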


* Boot obligations

GRUB2 has RAID and LVM modules now, so this no longer applies. I guess you'd still need a separate /boot partition if you're stuck with another boot loader. Is there any other advantage to it that I haven't thought about, since I've always done it systematically?
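
One quick sanity check before dropping the separate /boot, using grub-probe from the GRUB2 packages (at least on the versions I've looked at):

    # Ask GRUB2 which abstraction layers it needs to reach /boot;
    # it should report "lvm" (and the relevant RAID module) if applicable.
    grub-probe --target=abstraction /boot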


* Swap special-case

My setup will still have its own little dedicated swap space; it's pretty obvious how this helps (no underlying filesystem, maybe on another unsafe-but-faster RAID0 volume, fast dm-crypt encryption, ...).

There are, however, some neat dynamic swap allocation projects out there that would help me not lose these gigabytes I never seem to be using (at all). I figured that with all this RAM I could think of the swap space as a mere rescue area to prevent OOM rampages - and nothing else. In *my* case, even buffers and cached pages never get pushed to disk after weeks without rebooting. I'm just fine with my three gigs. The 1:1 mem:swap rule has got to be wasting space here, hasn't it?
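
For the dedicated-swap case, the throwaway-key dm-crypt setup is what I meant by "fast dm-crypt encryption"; a sketch of the usual Debian crypttab/fstab pair, with a hypothetical LV name (cipher=/size= options can be added to override the defaults):

    # /etc/crypttab: re-key the swap device with a random key at every boot.
    cswap  /dev/vg0/swap  /dev/urandom  swap

    # /etc/fstab: use the resulting mapping as swap.
    /dev/mapper/cswap  none  swap  sw  0  0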




Well, here it is;  so, should I do it?

Thanks in advance for your help. I hope I could make you think twice about it too, or maybe provide people with different needs a little checklist to better design their layouts.

-thib


PS: Sorry if I missed a recent thread on this topic - my searches only led me to old discussions from before we had some of the relevant tech (ext4, GRUB2).

