
Bug#578046: corrupts archives on 64bit systems with buffers larger than 2gb




[removed some general mailing lists from cc:]

Overall, I have concluded that correctly supporting afio memory buffers of 2 GB or more, on either 32-bit or 64-bit systems, opens up too large a can of testing worms, especially as int (and, on 32-bit systems, also size_t) is still only 32 bits wide, even when building for amd64.

So I have resolved this (in my upstream development copy) by making afio reject attempts to use buffers larger than 1.5 GB. See the HISTORY file in the upstream repository https://github.com/kholtman/afio for the full details. At least this way there is little risk of afio failing silently.

Cheers,

Koen.

On Mon, 27 Feb 2012, Koen Holtman wrote:



On Fri, 16 Apr 2010, Yuri D'Elia wrote:

Package: afio
Severity: important
Tags: patch

When block size * block count equals 2gb or more, afio corrupts the
archive by truncating all files larger than 2gb.
[...]

Thanks for the bug report and patch, I am reviewing and incorporating it in the upstream.

The patch upgrades some uints to size_t, but it does not upgrade the count argument of readall to size_t. I think this could still leave the archive corrupt, because not all data gets written into it when your memory buffer (block size * block count) is over 4 GB and the file being archived is over 4 GB. I am not entirely sure; I have not tested this to verify it. But consider this a warning: if you upgraded to a 4 GB memory buffer, you might have produced corrupt archives.

Cheers,

Koen.






