Re: [RFC] General Resolution to deploy tag2upload
On 17261 March 1977, Russ Allbery wrote:
FTPMaster *is* in support of t2u, if it ends up in a way that allows dak to do the final verification/authorization of the upload, NOT needing to trust some other instance.
Why is this your red line? Is it only that you don't want to add another system to the trusted set, or is there something more specific that you're concerned about?
There ought to be one point doing this step, not many, yes. It is also part of FTPMaster's delegated work and task description to do this, though that can be addressed either by us ending up running it or by adjusting the delegation. I'm not sure the latter would leave people happy, but it is one existing way.
But even if we run it, we still think it's a loss with the current design proposal.
Also, we currently have the nicety that we store all signatures directly beside the source package, available for everyone to go and check. They link back to the actual uploader, not to some random service key. You can take such a signature, run gpgv on it, and then, via the checksums of the files, see that, sure, this is the code that the maintainer took and uploaded. You do *not* need to trust any other random key for that: not that of tag2upload, *AND* not that of FTPMaster.
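To make that concrete, a rough sketch of what that check looks like today; the package name is a placeholder and the keyring path may differ on your system:

    # Verify the uploader's signature on the .dsc against the Debian keyring.
    gpgv --keyring /usr/share/keyrings/debian-keyring.gpg foo_1.2-3.dsc
    # dscverify (from devscripts) does the same and additionally compares the
    # checksums listed in the .dsc against the source files next to it.
    dscverify --keyring /usr/share/keyrings/debian-keyring.gpg foo_1.2-3.dsc

Either way, the verification needs nothing beyond the .dsc, the files it lists, and the uploader's key.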
As generating .changes and .dsc on the maintainer side is out (we want a git $something workflow now), that verification ought to be over the content. So whatever tool the maintainer ends up calling ought to generate a signature over the content of the package and put that into git (a tag, or whatever t2u uses).
I want to talk about designs from the perspective of threat models and constraints, so I'm going to try to reverse engineer those from your proposed solution so that we can have a more structured discussion of the security properties. Please check this and make sure that I've correctly captured your thought process here:

The threat that you are trying to protect against is a compromise of the tag2upload server. I think you're trying to find a design that meets the following constraints:

1. You want there to be some external check on the tag2upload server to ensure that it correctly constructed a source package from the uploader-signed artifact.

2. You (correctly, in my opinion) do not want dak to perform the construction of the source package from a Git tag, so you are looking for some other agent in the system to serve as the check on the tag2upload server.
Is that correct?
We want dak (and anyone else) to be able to say "Yes, DD/DM $x has signed off on this content". That only works if dak (and later the public, if they want to check too) has the signature for this in a way they can verify it, and not just a line somewhere saying "Sure, $service checked this for you, trust us, please".

Yes, 2. is definitely correct. This ought to be a separate thing to run on its own host.
If that's right, it sounds like your solution is to push that verification work to the uploader. I know you're trying to avoid specifics, but let me make this slightly more specific so that I have something concrete to talk about. If I understand correctly, a design that you would approve would look something like this:
Unsure whether those are the right words. We want the uploader to create a signature over the content they want to have appear in the archive, in such a way that this signature can be taken, placed beside the source, and then independently verified. *Currently* this is done using .dsc files.
1. The uploader performs the work to transform a Git tree into an unpacked source package and calculates a Merkle tree hash of that unpacked source package or something equivalent.

2. The uploader creates a signed Git tag over the corresponding Git tree in the same way as in the tag2upload design but additionally includes the Merkle tree hash in the signed data.

3. tag2upload functions in the same way as designed, starting from the Git tag, constructing the source package, and passing it to dak. It additionally conveys the signed Git tag object to dak in some form, such as a separate file.

4. dak verifies the Git tag signature, performs normal authorization checks against it, unpacks the source package, calculates the same Merkle tree hash, and ensures that the hash matches the one in the Git tag.

Am I correct that this is the type of design that you are asking for and that you would approve this design, modulo the normal sort of details that would need to be hashed out?
It very much sounds like going the right way, yes.
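Purely as illustration of steps 1 and 4, one way such a hash could be computed; this is a simple stand-in for the Merkle tree hash, not a concrete proposal, and things like permissions and symlinks would need more care:

    # Run in the unpacked source tree, both by the uploader before signing
    # the tag and by dak after unpacking what tag2upload produced.
    # A deterministic checksum over every file and its path:
    find . -type f -print0 | sort -z \
      | xargs -0 sha256sum | sha256sum

If both sides end up with the same value, the source package dak received matches the content the uploader signed.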
I basically assume that the uploader *does* need to have their source locally, no matter what (their Git clone, that is), or they wouldn't be able to work on it and package it. I also assume that the uploader will build things, to see if the stuff they are going to "push to the archive" (and our users) actually does what they intended it to do - and to test it. So the whole machinery to create a buildable source out of whichever of the git layouts they chose for their package needs to be around anyway, and needs to run locally.

And from that follows, for me, that the uploader's machine ought to be able to do enough work to end up with something a signature can be created over, which can later be used to independently verify that the uploaded contents match what the uploader signed.
*Currently* this would be the .dsc file being signed, including the checksums of the source files. In the future this can be anything, as long as it doesn't rely on a third party doing magic for you before you can verify it. git ls-files on two branches, signed. git archive $commit | checksumtool, signed. Whatever.
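Purely as illustration of what such a content signature could be taken over (branch names and the choice of checksum tool are placeholders, not a proposal):

    # A deterministic listing of the blobs on the packaging branch:
    git ls-tree -r debian/latest | sha256sum
    # Or a checksum over the upstream content as an archive stream:
    git archive upstream/1.2.3 | sha256sum

The resulting hash (or hashes) would go into the data the uploader signs, for example the tag message, so that dak, and anyone else later, can recompute and compare it without trusting any service key.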
If this is the case, then I think it's incorrect to say that the tag2upload maintainers have ignored this feedback. I can't speak for them, obviously, but I believe I've seen them answer essentially this feedback multiple times. My understanding is that the problem with this design from their perspective is that it requires a fat client on the uploader's system, and the whole point of tag2upload is to stop requiring a fat client on the uploader's system. In particular, it requires all the code to reconstruct the source package from a Git tree to be installed locally, which is basically a full dgit implementation.
From rereading the 2019 thread a bit, the argument *seems* to be that this requires Debian-specific tools and something doing work locally, yes. The Debian-specific knowledge is what is supposed to go away.
This is a real trade off about which we can disagree! This is a useful thing for us to argue about and vote about. I agree that the design that you propose is somewhat more secure in that it adds a check on the security of the tag2upload server that would catch some classes of compromise, although I believe I have a substantial caveat to your analysis that I'll talk about more below. But it's a trade off, like most things in security: the cost is that it's still not possible to upload a Debian package via a signed Git tag with some metadata that one can manually construct if one wishes. A Debian uploader still has to have a Debian-specific program installed locally that does a bunch of complex transformations of a Git tree before they can trigger an upload.
Ah well, working on things for Debian requiring something Debian-specific does not sound bad. If I work on $domain stuff, I do have to have $domain-specific tools and knowledge. Be that Debian, Fedora, Rust, Perl or whatever else.

I do think it is possible, even with what we want, to upload using a signed git tag. The metadata is a bit more than what is currently proposed, and creating it might need more than plain, stupid bash, but that doesn't appear bad to us.
Also, the level of complexity comes from us having a trillion different ways of doing things. If we want to make it easy, we could say "There is one upstream branch, plain upstream source goes in there. One debian branch, debian/ goes into that. You may append -$debianrelease to the branch name to support our different releases. Done." And the local client for t2u suddenly is way simpler. Entirely optional to switch to: if you want to use git pushes for uploading, use it; if not, use whatever other way you are used to and continue life as you know it.
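As a very rough sketch (all names are made up for illustration, not a proposal), such a minimal client could boil down to something like:

    # One upstream branch, one debian branch, per the simplified layout above.
    upstream_hash=$(git ls-tree -r upstream | sha256sum | cut -d' ' -f1)
    debian_hash=$(git ls-tree -r debian/unstable | sha256sum | cut -d' ' -f1)
    # Embed the content hashes in the signed tag and push it.
    git tag -s upload/foo_1.2-3 \
      -m "upstream-content: $upstream_hash" \
      -m "debian-content: $debian_hash" \
      debian/unstable
    git push origin upload/foo_1.2-3

That is small enough to live in a thin tool or even a shell snippet, and it still gives dak something it can verify independently.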
If the disagreement is over whether that user interface property is worth the security trade off, then that's a concrete thing that we can argue about, but I want to make sure that this fully captures your objection.
That then allows dak to do what it does now and trust the thing originates from the maintainer.
I think this is probably my strongest point of disagreement with your analysis. I think you're putting more weight on this idea of maintainer intent than it can actually support, and I think your analysis of maintainer intent is somewhat incorrect.
It sounds like you are assuming that the maintainer has vetted the thing that they sign. I am extremely dubious that this is the case. I believe that the typical maintainer workflow today is that the maintainer works on the package in a working directory (usually but not always in Git) until they are happy with the results. Then they run a build tool that generates a source package, and they blindly sign and upload that source package. They do not verify that the resulting source package matches their intent in their working tree apart from building binary packages based on it and running them.
In other words, the intent that the maintainer who uses Git is trying to express is "upload something corresponding to this Git tree and this upstream orig tarball to the archive." By asking for the signature to be over the source package instead of over the Git tree, we are already diluting maintainer intent. The thing the maintainer signs is not the source code of the package; that's the Git tree. It's a build product of the source code.
In that sense, the signature verification that the tag2upload server does is *closer* to actual maintainer intent than a signature verification on the *.dsc file. We're diluting maintainer intent by moving to the source package.
I'm not sure how much I trust anyone to actually check everything to the end, yes (myself included).

Still, we should find a way to keep the existing property of being able to verify what the uploader signed for upload *without* requiring a third-party $something to be available.
That's one of my objections. My other objection is that I think that the uploader's system is already the weakest link in our current security model. Relying on it for additional security properties is something that we're currently doing, and having the uploader's system redundantly check the tag2upload server does have some security benefit, but I think that security benefit is substantially less than the benefit of, say, a reproducible source package build server in a separate security domain but with a similar secured architecture rather than whatever state the uploader's system is in.
Well, if the maintainer's system is broken into, it makes no difference whether a git tag or a dsc or whatever else is signed. It can all end up modified by the attacker. I do not think that tag2upload or any other tool will provide more security if a maintainer gets their system hacked.
In other words, if the goal is to create a redundant check on the tag2upload server, doing that via something the uploader signs is not clearly better (and I think arguably worse) than having two tag2upload servers in separate security domains that perform the same operations. In both cases you're still trusting the same code to perform the source package transformation, but the tag2upload server has a better security model than the uploader's local system.
The uploader will sign something either way.
--
bye, Joerg