Re: Safe File Update (atomic)
* Ted Ts'o <email@example.com> [110105 19:26]:
> So one of the questions is how much should be penalizing programs that
> are doing things right (i.e., using fsync), versus programs which are
> doing things wrong (i.e., using rename and trusting to luck).
Please do not call it "wrong". All those programs are doing is not
requesting some specific protection. They are doing file system
operations that are totally within the normal abstraction level of
file system interfaces. While some programs might be expected to
anticipate cases not within that interface (i.e. the case that
due to some external event the filesystem is interrupted in what it
does and cannot complete its work), that is definitely not the
responsibility of the average program, especially if there is no
interface for this specific problem (i.e. requesting a barrier to only
do a rename after the new file is actually committed to disk).
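For reference, the workaround applications use today in the absence of
such a barrier interface is to fsync the new file themselves before the
rename. A minimal sketch (the helper name is mine, not from this thread;
error handling is kept short):

```python
import os
import tempfile

def atomic_update(path, data):
    """Replace the file at `path` with `data`, trying to survive power loss.

    Writes to a temporary file in the same directory, fsyncs it so the
    new contents are committed to disk, then renames it over the old
    file. The fsync plays the role of the "barrier" discussed above:
    the rename is only issued after the new file's data is on disk.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        os.write(fd, data)
        os.fsync(fd)          # commit new file contents before renaming
    finally:
        os.close(fd)
    os.rename(tmp, path)      # atomic replacement of the old file

    # Optionally fsync the directory so the rename itself is durable too.
    dfd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

This is exactly the per-application boilerplate the discussion is about:
every program that merely wants "old file or new file, never garbage"
has to carry it, because rename alone gives no ordering guarantee.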
So the question is: How much should the filesystem protect my data in
case of sudden power loss? Should it only protect data where the program
explicitly requested protection, or should it also do what it
reasonably can to protect all data?
Having some performance knobs so users can choose between performance
and data safety is good. This way users can make decisions depending
on what they want.
But a filesystem that loses data so easily, or whose default setting
loses data so easily, is definitely not something to give unsuspecting
users.
Bernhard R. Link