
Re: Thoughts on APT architecture and hardening



On Wed, Jan 23, 2019 at 10:58:15AM +0100, Yves-Alexis Perez wrote:
> Hi,
> 
> following the release of the fix for CVE-2019-3462, there are a few things
> I'm wondering about regarding apt's architecture. I'm leaving aside the
> http/https debate (which I think we need to have for Buster, though), but
> here are my thoughts (especially in light of Max Justicz's blog post at
> https://justi.cz/security/2019/01/22/apt-rce.html)
> 
> I didn't really know about apt's architecture and the fact that fetchers are
> forked. I think it's a good idea to isolate exposed workers dealing with
> untrusted data (especially HTTP), but the apt main process seems to trust
> data coming from the workers. I'm unsure where the trust boundary is here,
> but if the fetchers' data is trusted, I guess workers shouldn't just copy
> content from outside to inside, but do a real (/complex) sanitization job
> before handing it to apt?

I'm about to make apt workers check that they do not send any control
characters in their field names or values. This should help.

We should probably also parse the message and ensure that there are no
control characters in there either, on the root side.
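A minimal sketch of the kind of check described above, on the assumption that
method messages are simple "Name: value" fields; the function name and the
simplified field model are illustrative, not apt's real message parser:

```python
# Reject any field name or value containing a control character
# (anything below 0x20, or DEL), since those are what an attacker
# needs to inject extra lines or whole extra messages into the
# worker <-> main-process protocol.

def field_is_clean(name: str, value: str) -> bool:
    """Return True if neither the field name nor its value contains
    a control character usable for message injection."""
    for ch in name + value:
        if ord(ch) < 0x20 or ord(ch) == 0x7F:
            return False
    return True
```

Doing the same check on the root side, as suggested, would just mean running
this over each parsed field before acting on the message.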

> 
> As I understand it, the file hashes are calculated (or in this case, injected
> from outside) by the worker, and not by the apt main process. Is that a good
> idea?

No, it's not a good idea. We will do some more hardening:

(1) The code fetching .deb files will send them to the store method to recalculate
    the hashes. This ensures that we cannot do any fancy protocol injection.

    This does not prevent a rogue http method from replacing the file after
    store has verified it, though, since the files are owned by _apt, which
    both methods run as (and even moving the file with mv does not help).

(2) I'd like to get to the point where we send methods a pipe to write to over
    a socket. The data then gets written to the pipe, the pipe end writes the file
    to the partial directory, calculates the hashes and sends them back to the main
    process.

    There are two issues here at the moment:

      (1) the http method needs to be able to truncate files
      (2) the http method needs to be able to fix up pipeline mismatches
          (e.g. when foo.deb actually contains bar.deb) by renaming things.
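The pipe-over-socket idea in (2) can be sketched roughly as follows. This is
an illustrative toy, not apt code: it fakes both sides in one process, and
the message contents are made up. The point is that the method only ever
holds a write end of a pipe, while the main process reads from the other end
and computes the hashes itself, so the method cannot swap the file after
verification:

```python
# Sketch: main process creates a pipe, passes only the write end to the
# "method" over a Unix socket (fd passing), and hashes everything the
# method writes. Requires Python 3.9+ for socket.send_fds/recv_fds.

import hashlib
import os
import socket

def demo() -> str:
    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    r, w = os.pipe()

    # Main-process side: hand the write end to the method, keep the read end.
    socket.send_fds(parent, [b"write-here"], [w])
    os.close(w)

    # "Method" side: receive the fd and stream the (fake) download into it.
    msg, fds, flags, addr = socket.recv_fds(child, 32, 1)
    os.write(fds[0], b"fake .deb payload")
    os.close(fds[0])

    # Main-process side: read from the pipe and hash as we go; the method
    # never sees the final file, only the pipe.
    h = hashlib.sha256()
    while chunk := os.read(r, 65536):
        h.update(chunk)
    os.close(r)
    parent.close()
    child.close()
    return h.hexdigest()
```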

 
> Finally, we're again bitten by GPG drawbacks: RCE is really possible here
> because gpg won't actually complain when the release file is actually also
> something else. Validating the release file format might be a good idea by
> itself, but it'd be nice (though out of scope for deity@) if the signature
> scheme wouldn't allow such things to happen.

I think you can still inject any deb as long as you download at least two, so
you inject the second response into the first, or something along those lines.
I'm not sure, though. And while I do think blaming gpg is always correct (it
has been involved in essentially all of the past CVEs), it might not be the
only attack vector.

I'd really love it if we could have a signature verification library that can
properly handle clearsigned files and detached signatures, and ensure, among
other things, that a clearsigned file contains only signed content and that
the signatures do not house any other content.

The last vulnerability was caused by us having to split clearsigned files into
detached signatures and signed content to work around gpg, and this one is made
trivial by gpg allowing any garbage in releases. This stuff really has to stop.
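The structural property being asked for ("only signed content, nothing else")
can be sketched as a check over a clearsigned file. This is a deliberately
simplified illustration: it ignores dash-escaping and armor headers and is
nothing like a real OpenPGP parser, but it shows the strictness being argued
for, i.e. rejecting any bytes before or after the single signed block:

```python
# Accept only files that consist of exactly one clearsigned block with
# no content before or after it. Unsigned leading/trailing garbage is
# exactly what made the recent injection trivial.

def is_strict_clearsigned(text: str) -> bool:
    lines = text.splitlines()
    if not lines or lines[0] != "-----BEGIN PGP SIGNED MESSAGE-----":
        return False  # leading garbage would be unsigned content
    try:
        sig_start = lines.index("-----BEGIN PGP SIGNATURE-----")
        sig_end = lines.index("-----END PGP SIGNATURE-----")
    except ValueError:
        return False  # not a complete signed block
    # The signature block must be well-ordered and end the file.
    return sig_start < sig_end and sig_end == len(lines) - 1
```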

Makes you wonder whether we shouldn't move to PKCS#11 certificates or
something, and use GnuTLS.

-- 
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer                              i speak de, en

Attachment: signature.asc
Description: PGP signature

