
Re: The fsync issue



On Sat, 2010-11-27 at 01:41:19 -0600, Jonathan Nieder wrote:
> Guillem Jover wrote:
> > Unfortunately that patch does not seem much appealing, it's Linux only,
> > not even in mainline, and it would need for dpkg to track on which file
> > system each file is located and issue such ioctl once per file system.
> >
> > I'd rather not complicate the dpkg source code even more for something
> > that seems to me to be a bug or missfeature in the file system. More so
> > when there's a clear fix (nodelalloc) that solves both the performance
> > and data safety issues in general.
> 
> I don't really understand this point of view: isn't the fsync storm
> going to cause seeky I/O on just about all file systems?

Well sure it might, but then some file systems seem to cope just fine,
even ext4 with nodelalloc. Also, seeks might stop being that relevant
(in the mid/long term) once SSDs become more widespread.
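
For reference, on ext4 that's just a mount option; an illustrative
/etc/fstab line (device UUID and pass numbers are placeholders):

  # ext4 with delayed allocation disabled
  UUID=0123-4567  /  ext4  defaults,nodelalloc  0  1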

> So the POSIX primitives are not rich enough to express what we want to
> happen.  Delayed allocation is pretty much essential for the use case
> ubifs targets, so it doesn't make much sense to me to pretend it
> doesn't exist.

As long as delayed allocation is a synonym for zero-length files, I
personally consider it a misfeature. This is data loss we are talking
about, and while data coming from packages is easily (if tediously)
recoverable, user data might not be. We have fsck, journals and
similar to recover from system crashes, and now we get zero-length
files in the name of performance; it seems clear to me that's a
regression.

Anyway, my thinking process goes a bit like this: there's currently a
handful of programs doing the complete write+fsync+rename dance
(sketched below), and the file systems which need it are penalized
heavily. If more programs start to get "fixed" to do the fsyncs, the
overall situation will just worsen. And at that point I think it's
completely unreasonable to expect every userland program to take on
such complexity, piling unportable hack upon hack to work around the
file system's problems.
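
To spell out that dance for the archives, here's a minimal C sketch
(error paths trimmed, function and path names illustrative):

  /* Write the new content to a temporary file, flush it to disk,
   * then atomically rename it over the destination. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int atomic_replace(const char *dst, const char *tmp,
                     const void *buf, size_t len)
  {
      int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);

      if (fd < 0)
          return -1;
      if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
          close(fd);
          unlink(tmp);
          return -1;
      }
      if (close(fd) < 0 || rename(tmp, dst) < 0) {
          unlink(tmp);
          return -1;
      }
      return 0; /* a crash leaves either the old or the new content */
  }

The fsync() before the rename() is exactly the step that turns into a
seek storm when issued once per unpacked file.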

For non-technical users, data safety should matter way more than
performance; having to recover a hosed system might well mean they'd
just reinstall it. For technical users I see the options as follows:
help fix the file system so that it either performs reasonably with
fsync() or does not lose data without fsync(), use another file
system, use better mount options, or use dpkg --force-unsafe-io and
cope with the possible data loss.
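
For the record, that last option is either a one-off switch or a
permanent setting (package name illustrative):

  # once, on the command line:
  dpkg --force-unsafe-io -i foo.deb

  # permanently, as a line in /etc/dpkg/dpkg.cfg:
  force-unsafe-io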

But then I think I've said most of this elsewhere already.

> I'll look into a (Linux-specific, obviously) patch to add a function
> that takes an array of paths and performs the relevant syncs of
> filesystems where that ioctl exists tomorrow.  I would rather see a
> system call that just takes an array of paths, since I imagine
> filesystems like btrfs could do something good with that, but since
> there are no VFS primitives for it I can see why that wasn't proposed.

Tracking fds is going to be easier; at that point dpkg already has
the stat information, so it could queue one fd per unique st_dev, for
example.
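
Just as a sketch of what I mean (the function names are made up, and
I'm using syncfs() below as a stand-in for whatever per-file-system
primitive we'd end up with, since nothing like it is in mainline):

  #define _GNU_SOURCE
  #include <sys/types.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <unistd.h>

  #define MAX_FS 32

  static dev_t fs_dev[MAX_FS];
  static int fs_fd[MAX_FS];
  static int fs_count;

  /* Queue one representative fd per file system seen. */
  void fs_sync_queue(const char *path)
  {
      struct stat st;
      int i;

      if (stat(path, &st) < 0)
          return;
      for (i = 0; i < fs_count; i++)
          if (fs_dev[i] == st.st_dev)
              return; /* this file system is already queued */
      if (fs_count < MAX_FS) {
          fs_dev[fs_count] = st.st_dev;
          fs_fd[fs_count] = open(path, O_RDONLY);
          fs_count++;
      }
  }

  /* One sync per file system instead of one fsync() per file. */
  void fs_sync_flush(void)
  {
      int i;

      for (i = 0; i < fs_count; i++) {
          if (fs_fd[i] >= 0) {
              syncfs(fs_fd[i]); /* stand-in for the real primitive */
              close(fs_fd[i]);
          }
      }
      fs_count = 0;
  }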

regards,
guillem

