
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable "kernel BUG at fs/jbd2/commit.c:534" from Postfix on ext4

On Jun 28, 2011, at 10:16, Ted Ts'o wrote:
>>> My basic impression is that the use of "data=journal" can help
>>> reduce the risk (slightly) of serious corruption to some kinds of
>>> databases when the application does not provide appropriate syncs
>>> or journalling on its own (e.g. text-based Wiki database files).
> Yes, although if the application has index files that have to be
> updated at the same time, there is no guarantee about which changes
> will survive a system failure (either a crash or a power fail),
> unless the application is doing proper application-level journalling
> or some other structured update scheme.

Manually rebuilding application indexes and clearing out caches is fine;
with a badly written application I'd have to do that anyway.  I just want
to reduce the risk that I actually corrupt data, and it sounds like that's
what data-journalling will help with.

>> To sum up, the only additional guarantee data=journal offers against
>> data=ordered is a total ordering of all IO operations. That is, if you do a
>> sequence of data and metadata operations, then you are guaranteed that
>> after a crash you will see the filesystem in a state corresponding exactly
>> to your sequence terminated at some (arbitrary) point. Data writes are
>> disassembled into a sequence of page-sized & page-aligned writes for the
>> purposes of this model...
> data=journal can also make the fsync() operation faster, since it will
> involve fewer seeks (although it will require a greater write
> bandwidth).  Depending on the write bandwidth, you really need to
> benchmark things to be sure, though.

Hm, so this would actually be very beneficial for a mail spool directory
then, because mail servers are supposed to fsync each email received in
order to guarantee that it will not be lost before it acknowledges receipt
to the SMTP client.

Thanks again!

Kyle Moffett
