
Re: CVE-2017-9935 / tiff



Hi Brian,

I tried looking at this when I prepared the last tiff and tiff3 updates
a couple of months ago.  However, you went much deeper than I did.

On Tue, Nov 14, 2017 at 08:22:26AM +1100, Brian May wrote:
> Looks like this vulnerability - at least for the first test case - is
> because we assume that a tiff will only have one transfer function that
> is the same for all pages. However, we read the transfer function for
> every page.
> 
That sounds like a flawed assumption.  The spec (I provide working
links below) describes the format of a TIFF as being made up of an
8-byte header and one or more images (IFDs, or image file directories).

The descriptions do not explicitly say that each page can have its own
transfer function, but I cannot see how it would be possible to require
that a single transfer function be applied to all pages in a TIFF in
every case (assuming one is present in the first place).  Also, since
the transfer function cannot fit in the TIFF header, I have to assume
that it must be a per-page field.

That said, I can see how it would sometimes be possible to apply a
single transfer function to all pages in a TIFF: for example, if all
the images came from the same device and were taken in close
succession.

> Depending on the transfer function, we allocate either 2 or 4 bytes to
> the XREF buffer. We allocate this memory after we read in the transfer
> function for the page.
> 
> For the first exploit - POC1, this file has 3 pages. For the first page
> we allocate 2 extra XREF entries. Then for the next page, 2 more
> entries. Then for the last page the transfer function changes and we
> allocate 4 more entries.
> 
> When we read the file into memory, we assume we have 4 bytes extra for
> each and every page (as per the last transfer function we read). This
> is not correct: we only have 2 bytes extra for each of the first 2
> pages. As a result, we end up writing past the end of the buffer.
> 
> I haven't yet looked at the other exploits in detail, however I suspect
> they might be all variations on the same vulnerability.
> 
> Unfortunately, I am having trouble finding the TIFF specification. It
> looks like I need to be an Adobe partner to access it at
> <https://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf>. In
> particular, I would like to know if a different transfer function for
> each page is permitted. Regardless, it seems clear the code doesn't
> support that: it assumes the transfer function will be the same for
> every page.
> 
The specification is available from the ITU and also the Library of
Congress (which in turn links to the Wayback Machine):

https://www.itu.int/itudoc/itu-t/com16/tiff-fx/docs/tiff6.pdf
https://www.loc.gov/preservation/digital/formats/fdd/fdd000022.shtml
https://web.archive.org/web/20150503034412/http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf

> Also, I can't find an upstream BTS; is this a project with a dead
> upstream?  http://www.libtiff.org/bugs.html contains dead links.
> 
That link is outdated; I am curious where you found it.  The
debian/control file lists a current URL.

Upstream can be found here now:

http://libtiff.maptools.org/
http://libtiff.maptools.org/bugs.html
http://libtiff.maptools.org/support.html

> Suggested solutions:
> 
> * I could allocate 4 bytes for every page, regardless of the transfer
>   function. This wouldn't necessarily give correct results, but would
>   presumably solve the security issue. This is probably the simplest
>   solution.
> * I could allocate the memory required after scanning all pages, once
>   we know what the final transfer function is. Again, this may not
>   give correct results, but would presumably solve the security
>   issue. Probably not really worth it over the previous solution
>   for a maximum potential savings of 2 bytes per page.
> * I could alter the code to abort with an error if the transfer function
>   changes to have different (or maybe just increased) memory requirements.

Of these I dislike the third option the least.  The first two have the
potential to fail silently or to give subtly incorrect results.  I
think that failing noisily, with an error explaining why, is less bad
than silently producing subtly wrong results.

Regards,

-Roberto

-- 
Roberto C. Sánchez

