
Re: Single root filesystem evilness decreasing in 2010? (on workstations)

Clive McBarton wrote:
> google "ext4 kde4" and the first hit is "Data loss may occur when using
> ext4 and KDE 4". I think Ubuntu offered ext4 as optional then, and many
> people ran into problems, supposedly massive data loss. XFS would be the
> same. Application programmers don't cope with delayed allocation, and
> since you cannot fix all the apps, you'd be stuck with the problem.
>
> Apart from specific technical issues, there's general conservatism, most
> of all in Debian.

[off topic]

Yep, I get that, and I know we can't possibly fix everything, but do we really need to? In a way, all these apps are not even POSIX-compliant for the operations they intend to perform; they're crappy-ext3-lazy-standards compliant. Nobody should support that. Linux supports many filesystems, and we shouldn't be stuck with only one for pseudo-reliable usage just because a single generation of it had an odd behavior that *some* people (not the entire world) based their software upon. I understand people have lived happily with XFS for a long time, which "suffers" from the same "problems".

What's worse, in my opinion, is that people feel safer with ext3 than with ext4. What's the difference *for these apps*? Their data gets automatically flushed to disk every 5 seconds instead of every 30-35 seconds (a side effect of data=ordered). Great. If that's all that matters, let's just change some arbitrary numbers in ext4...


I think the real buzz around this issue comes from one user who got his desktop environment's configuration and "some personal data" files nuked because his machine crashed before they could reach the disk - which spread fear and panic in exaggerated ways among bloggers (I've read the story a hundred times). The same thing could have happened on ext3 with slightly different timing.

Applications should say when to sync if they need to, not rely on a particular filesystem doing it "frequently enough" (whatever that means). That allows software that does *not* need to sync (or needs it less often) to be more efficient. In some cases, a temporary file might never need to reach the platters at all. Let that happen more often.
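To make the point concrete, here is a minimal sketch (in Python, with a hypothetical helper name) of an application saying when to sync instead of hoping the filesystem flushes "frequently enough":

```python
import os
import tempfile

def write_durably(path: str, data: bytes) -> None:
    """Write data and force it to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # push the data (and inode metadata) to disk now
    finally:
        os.close(fd)

# A scratch file that never needs to survive a crash simply skips the
# fsync and lets the filesystem flush it whenever it sees fit - that's
# exactly where delayed allocation buys you efficiency.
```

The key is that the durability decision lives in the application, per file, rather than in a filesystem-wide commit interval.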

That being said, I tend to think there may be less software in genuine need of that kind of love than we think. Note that in the meantime, hacks are available to force-flush truncated files and the like, which should help apps that don't receive that love but still need it. Again, probably only a few programs need any.

Those paranoid about delayed allocation can also disable it (ext4 has a nodelalloc mount option). If Debian and some other distros are paranoid and decide to do that (and/or use the hacks), I certainly wouldn't mind personally (it's not like we've ever been forced to accept anything), but I would [mind] if we decided to hold back on ext3 - it seems stupid to ignore all the other features ext4 comes with based on a funny story from a random bug tracker.

Anyway, that topic is out of my league, and people have discussed it in far greater depth elsewhere - I believe I understand enough of it to make up my mind. Anybody, feel free to correct me if I sound misguided.

> That's a very interesting point. Filesystems *not* responsible for data
> integrity? Wow. While I do get the idea (move integrity checking up to
> higher-level structures to improve throughput), and I am sure it will speed
> things up greatly when it works, doesn't this require all your software
> to first be rewritten to take care of it?

AFAIK, there has never been such a contract that filesystems should guarantee data integrity. In fact, they can't possibly do so. If some data in file A only makes sense alongside some specific data in file B, for example, only the application that writes them knows that - the filesystem cannot detect the corruption if one file has been written but not the other.

If data integrity is important for an application, its writer should always have the question "OK, what if it crashes *there*?" in mind and think about an ad-hoc mechanism to make the operation atomic, in the sense that fits the data structure of the application. If one wants some generic integrity features, there's plenty of database software around - and by that I also mean simple embedded/nosql stuff which could even write plain text.
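One classic ad-hoc mechanism for the "what if it crashes *there*?" question is the write-to-temp-then-rename pattern. A minimal sketch in Python (function name hypothetical; directory fsync behavior is Linux-specific):

```python
import os
import tempfile

def replace_atomically(path: str, data: bytes) -> None:
    """Update `path` so a crash leaves either the complete old contents
    or the complete new contents, never a truncated mixture."""
    directory = os.path.dirname(os.path.abspath(path))
    # Temp file in the same directory, so the rename stays on one filesystem.
    # Note: mkstemp creates it mode 0600; chmod if original permissions matter.
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
        os.fsync(fd)          # the new data must reach the disk first...
    finally:
        os.close(fd)
    os.rename(tmp, path)      # ...then the POSIX-atomic rename makes it visible
    # To make the rename itself durable, fsync the containing directory
    # (works on Linux; some other systems reject fsync on a directory fd).
    dfd = os.open(directory, os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

This is precisely the discipline the delayed-allocation complaints were about: the atomicity comes from the application's own write ordering, not from any filesystem promise.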

Filesystem journals only guarantee that you won't get backstabbed by the system losing or half-overwriting a block that was part of a previously consistent on-disk structure.

[and back on, sorry for that]

>> [...]
> Your request is perfectly reasonable. It is clearly possible in theory,
> and I believe some Unix OS actually have it (I don't know which, though).
> It is actually required for some backup schemes (which hence don't work
> under Linux).

Good to know.

> Quick googling gave me http://lwn.net/Articles/281157/ where they say
> the limitation exists up to 2.6.25 kernels (the article is from 2008).

Whoa, thanks, I couldn't dig that one up. I'll go ask.
Apparently, it wasn't included in 2.6.26 (it silently fails on a Lenny machine), and I don't have a 2.6.32 kernel to play with right now; I'll try when I can. I've since seen a few blog posts, dating back to last year, of people using it casually, even with other options - not sure whether they actually tested it, but it's a good sign. I can't find anything conclusive.

> Sure. That's not fiddling with individual sectors and 3D coordinates on
> the HDD, but simply using partitions at the beginning of the disk. If
> you care about a factor of 2, then do partition it.

Yep, sorry, what I wrote in the original post was misleading. I didn't have anything much in mind beyond that.

