
Re: Reaction to potential PGP schism



hey folks--

[ This message won't make sense unless the reader distinguishes clearly
  between OpenPGP the protocol and GnuPG the implementation! As a
  community we have a history of fuzzily conflating the two terms, which
  is one of the reasons that we're in this mess today.  Please read
  with that distinction in mind. ]

[ Background: for those who don't know, i've been a maintainer in debian
  of GnuPG and other OpenPGP-related tooling for several years, and i'm
  also the co-chair of the IETF's OpenPGP working group; i participated
  in many of the discussions that led to the current sorry situation,
  and it is happening despite my best efforts to avoid this problem.
  I'm probably as responsible for this situation as anyone in Debian
  is. My apologies. ]

The best outcome, in my opinion, would be for GnuPG to go ahead and
implement the pending updated OpenPGP specification (the so-called
"crypto-refresh"). I say this despite personally preferring some of the
concrete ways that i think the GnuPG project would have preferred to (as
indicated by the latest "LibrePGP" Internet-Draft, at least) diverge
from the OpenPGP specification.  There are enough other advantages to
the OpenPGP crypto-refresh that it doesn't make sense for GnuPG to
deliberately avoid implementing the community consensus. The GnuPG
project clearly has all the underlying cryptographic and engineering
capability to do this, if it wants to, and the OpenPGP crypto-refresh
process took deliberate measures to avoid collisions with any
prematurely deployed code that implements a draft that hadn't managed to
reach a rough consensus.

Can debian make GnuPG interoperate with the rest of the OpenPGP
ecosystem?  Probably not without GnuPG's cooperation: it would be a
substantial patchset to carry in Debian, and even trickier to do if
GnuPG upstream sees such a patchset as hostile.

Read on below if you want to consider some other options.

Stephan Verbücheln wrote:
> As you probably know, Debian relies heavily on GnuPG for various
> purposes, including:
> - developer communication
> - signing of tarballs and patches
> - automated processes such as update validation by APT

Debian by policy and by mechanism relies heavily on the OpenPGP protocol
for these things.  And i'd add certificate verification, aka "web of
trust", for Debian developer identities to that list as well.

In particular, we use OpenPGP for cryptographic signing of software
source, packaging information, archive control, and distribution
mechanisms; for developer identities; and for cryptographic verification
of all of these things.  As a project, we don't make much use of the
encryption/decryption parts of OpenPGP, since we develop mainly in the
open.  But not everyone uses GnuPG for these purposes.  There are
multiple interoperable OpenPGP implementations in Debian beyond the
GnuPG family (C), including RNP (C/C++), pgpainless (java), pgpy
(Python), GOpenPGP (Go), hOpenPGP (haskell), and Sequoia (Rust).

But it is also true that the GnuPG implementation specifically is baked
into some of our infrastructure.  I'll get into why that is below (see
"Why is GnuPG on Debian's Critical Path?").

> How can Debian deal with this? Should Debian intervene to prevent the
> worst?

I don't think Debian can make a specific intervention that will avoid
the global problem, but i think there are things we can consider going
forward.

One possible approach is to drop the use of OpenPGP (or "LibrePGP")
entirely, and instead base our internal cryptographic dependencies on
bespoke cryptographic implementations.

I think that would be a mistake.

I do not want Debian's long-term health to depend on any particular
implementation.  If the implementation fails then we would have to (as a
project) decide on our own upgrade path.  For a failure due to
cryptanalytic advances, that can be particularly harrowing: I don't
think we as a project have the necessary expertise to do that well.  For
failures due to buggy implementations, we can always patch, but i wonder
about the amount of cryptanalytic review a bespoke implementation will
have as opposed to publicly audited generic tooling.

If we have to decide as a project on LibrePGP vs. OpenPGP, i'd prefer
the wider community project with a stable reference, functioning (albeit
sometimes rough) consensus, a range of diverse implementations, and
substantial public interoperability testing.  That means OpenPGP.

To be clear, the IETF OpenPGP working group actively solicited input
from the GnuPG team, and tried to work with the project as one
significant implementation among many.  But ultimately, the GnuPG
project decided to break away from the community process, and created
this "LibrePGP" split, which threatens interoperability for the *PGP
ecosystem as a whole.  Maybe the end result of this will be to put a
nail in *PGP's coffin, and we'll all just go back to bespoke
cryptographic implementations that have no stable reference other than
the source code, and little interoperability or reusability across
contexts.  Or maybe we should switch to X.509 for certificates and CMS
as the cryptographic object model (though that ecosystem is even more of
a fragmented disaster than OpenPGP, from what i've seen).  I'd be sad
about any of these outcomes though.

Using minimally patched, highly visible, publicly vetted tooling with
simple and straightforward semantics, where there are multiple
available, active implementations, seems like the safest approach.  The
OpenPGP protocol, as designed, is intended to offer a smooth upgrade
path to new cryptographic algorithms and features, while keeping a
reasonably stable, minimal set of semantics as the underlying
cryptographic algorithms improve.

In the past, that hasn't worked out as well as i would have liked for
us, largely because of our dependence on GnuPG as an overwhelmingly
dominant implementation, which didn't test the upgrade path as well as
it should have.  For example, the transition from RSA to Ed25519 was a
rough one, since older versions of GnuPG choked on certificates that
contained unknown algorithms rather than ignoring them gracefully.

OpenPGP implementations have generally learned from those failures, and
many of them are now much more resilient and can support the kinds of
upgrade path that we need to consider.  For most of our
signing/verifying-focused work, that means:

 - verifying tools should ignore signatures and certificates that they
   don't understand, while still validating signatures from certificates
   that they do understand

 - signing tools can make pairs of signatures, one "compatibility"
   signature and one "modern" signature

This means that for a debian signing/verification context with a global
workflow, like package distribution, starting from an existing OpenPGP
implementation, signing key, and corresponding verification
certificate, the upgrade looks like this:

 0) upgrade the signing tool, and start upgrading some of the
    verification tooling.

 1) create a new signing certificate with the new version, algorithm,
    or feature.

 2) distribute the old+new certificates to the verifiers.

 3) make signatures with old+new in parallel (sketched below).

 4) complete the upgrade of all verification tooling.

 5) stop making signatures with the old signing certificates.

These steps map reasonably well into the Debian release cycle.  Any part
of the debian ecosystem that requires these sorts of semantics should be
able to just adopt a generic OpenPGP implementation and make this work.
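
To make step 3 and the tolerant-verification behavior above concrete,
here is a minimal sketch (not a description of any actual Debian
tooling) that drives a Stateless OpenPGP ("sop") implementation from
Python; i say more about sop further down.  The binary name "sqop" and
all of the file names here are illustrative assumptions; any conforming
sop implementation should behave the same way:

    # Sketch: parallel old+new signing, and verification that tolerates
    # signatures the verifier cannot use.
    import subprocess
    import tempfile

    def sign_with_both(data: bytes, old_key: str, new_key: str) -> bytes:
        # "sop sign" accepts multiple keys and emits one detached
        # signature per key in a single signatures block.
        return subprocess.run(["sqop", "sign", old_key, new_key],
                              input=data, stdout=subprocess.PIPE,
                              check=True).stdout

    def verifies(data: bytes, sigs: bytes, *certs: str) -> bool:
        # "sop verify" exits 0 if at least one signature validates
        # against a supplied certificate, ignoring the rest.
        with tempfile.NamedTemporaryFile(suffix=".sig") as sigfile:
            sigfile.write(sigs)
            sigfile.flush()
            result = subprocess.run(
                ["sqop", "verify", sigfile.name, *certs],
                input=data, stdout=subprocess.DEVNULL)
        return result.returncode == 0

    data = open("release-payload", "rb").read()  # hypothetical artifact
    sigs = sign_with_both(data, "old-signing.key", "new-signing.key")
    assert verifies(data, sigs, "old.cert")              # legacy verifier
    assert verifies(data, sigs, "old.cert", "new.cert")  # upgraded verifier

A verifier that only has the old certificate keeps working throughout
the transition, and one that only has the new certificate works as soon
as step 2 has happened; that is the point of making the signatures in
parallel.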

In practice, i think it makes the most sense to engage with
well-documented, community-reviewed, interoperably-tested standards, and
the implementations that try to follow them.  From my vantage point,
that looks like the OpenPGP projects that have continued to actively
engage in the IETF process, and have put in work to improve their
interoperability on the most sophisticated suite of OpenPGP tests that
we have (https://tests.sequoia-pgp.org/, maintained by the Sequoia
project for the community's benefit).  Projects that work in that way
are also likely to benefit from smoother upgrades to upcoming work in
the IETF like post-quantum cryptographic schemes:

    https://datatracker.ietf.org/doc/draft-wussler-openpgp-pqc/

Is GnuPG (or "LibrePGP") a good choice for this kind of long-term
ecosystem health?  I want it to be, but I worry that it currently is
not.  In particular, "LibrePGP" doesn't appear to have a stable
reference, or even multiple implementations that agree on the semantics
of all the major parts of it.  And GnuPG might be too complex or too
fragile to rely on in the long term, particularly if it doesn't intend
to support interoperation with other OpenPGP implementations, which i
understand to be the currently stated intent of the project.

I've written to the pkg-gnupg-maint mailing list about my concerns about
the long-term health of the GnuPG project, and why i've found the going
difficult as a downstream maintainer in the last several years:

  https://alioth-lists.debian.net/pipermail/pkg-gnupg-maint/2023-August/008968.html

We've had to carry lots of patches in debian (e.g. cryptographic default
updates, performance or hardening patches), and not had a lot of support
from upstream for healthy integration in common debian or Internet
infrastructure (e.g. systemd support for the increasingly daemonized
family of tools, or certificate cryptographic updates or revocations
that ship without user IDs).  There are also significant performance
problems when the tool tries to scale up (e.g. large numbers of
concurrent secret key operations, many successive signature
verifications/decryptions, or large keyrings).  These might not affect
everyone directly (indeed, GnuPG upstream has in the past discouraged
users from trying to scale up, saying that the tool is designed for
selective and deliberate use), but if your tooling depends on GnuPG then
you're asking those few users who do try to scale up to switch
toolchains entirely.  That's a shame, and likely a source of bugs or
even cryptographic errors.

I've been trying to encourage core pieces of infrastructure to adopt
"OpenPGP" as a technology, rather than requiring "GnuPG" as an
implementation, even if the underlying tooling remains GnuPG.  This is
mostly other people's work, but i'm very pleased to see, for example,
that dpkg now has a more flexible "openpgp backend" mechanism.

# Why is GnuPG on Debian's Critical Path?

GnuPG has been a longstanding, stalwart presence in the F/LOSS
community.  It provided cryptographic integrity checks for our ecosystem
for ages, long before many proprietary systems even considered defending
against some of the attacks that OpenPGP can defend against.  That said,
the GnuPG codebase is difficult to maintain, the semantics its
interfaces offer are ill-understood, some bugs can be difficult to get
fixed, and system integration is tricky.

In 2023, I believe GnuPG is baked into our infrastructure largely due to
that project's idiosyncratic interface.  It is challenging even for a
sophisticated engineer to figure out how to get GnuPG to (probably,
hopefully!) fulfill a cryptographic task in their project.  Once that is
done, it's especially painful to consider moving to a different OpenPGP
implementation, because the interface to another implementation rarely
lines up cleanly with GnuPG's interface.

This is not unlike the situation we've had in the past with the TLS
protocol (Transport Layer Security), where one widely-used
implementation (OpenSSL) has such a historically peculiar interface that
it was a real challenge to port a dependent project from it to another
implementation and still remain functional, even when the desired
cryptographic goals of the dependent project are relatively
straightforward, as they usually are (for an implementation
that wants to act as a TLS client, the desired semantics are typically:
"instead of a cleartext bidirectional stream to foo.example, i want an
encrypted, authenticated, integrity-checked bidirectional stream to
foo.example").  Arguably, we're still beholden to OpenSSL in many ways,
despite better TLS implementations being available in many contexts.
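
To illustrate how small that desired contract is, here is a rough
sketch of a TLS client written against a library whose interface maps
onto it directly (Python's standard ssl module, with "foo.example"
standing in as the placeholder host from the paragraph above):

    # Sketch: replace a cleartext stream to foo.example with an
    # encrypted, authenticated, integrity-checked stream to foo.example.
    import socket
    import ssl

    ctx = ssl.create_default_context()  # certificate and hostname checks on by default
    with socket.create_connection(("foo.example", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="foo.example") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\n"
                        b"Host: foo.example\r\nConnection: close\r\n\r\n")
            print(tls.recv(4096))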

# What Can Debian Do About This?

I've attempted to chart one possible path out of part of this situation
by proposing a minimized, simplified interface to some common baseline
OpenPGP semantics -- in particular, the "Stateless OpenPGP" interface,
or "sop", as documented here:

   https://datatracker.ietf.org/doc/draft-dkg-openpgp-stateless-cli/

We have a few implementations of this interface in debian already:
pgpainless-cli, sqop, and gosop, plus a partial implementation in
haskell (/usr/bin/hop, from hopenpgp-tools).  sop is a command-line
interface, but there has been some (still immature) discussion about a
C-based library interface as well.  I would absolutely love to see the
GnuPG project produce a native sop implementation, but the only one i've
seen is built by an outsider around the GpgME library (itself a delicate
interface), and hasn't been formally released.

If your part of Debian's infrastructure depends on GnuPG, consider
making it depend on a sop implementation instead, so we don't end up
stuck on a single OpenPGP implementation in the future.  If the sop
semantics are insufficient for your purposes, please report your needs
at https://gitlab.com/dkg/openpgp-stateless-cli !
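
To give a flavour of what that switch can look like, here is a hedged
sketch of a script that currently shells out to GnuPG to check a
detached signature, next to an implementation-agnostic sop equivalent.
The file names are placeholders, and "sqop" is just one of the sop
binaries already packaged in Debian:

    # Sketch: a GnuPG-specific invocation vs. the sop equivalent.
    import subprocess

    def verify_with_gnupg(data_path: str, sig_path: str, keyring: str) -> bool:
        # status quo: GnuPG-specific flags and keyring handling
        return subprocess.run(
            ["gpg", "--no-default-keyring", "--keyring", keyring,
             "--verify", sig_path, data_path]).returncode == 0

    def verify_with_sop(data_path: str, sig_path: str, cert_path: str) -> bool:
        # any sop implementation can be dropped in here unchanged
        with open(data_path, "rb") as data:
            return subprocess.run(
                ["sqop", "verify", sig_path, cert_path],
                stdin=data).returncode == 0

Nothing in the second function is specific to sqop; that is the point
of standardizing the interface rather than the implementation.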

If you are implementing or maintaining an OpenPGP implementation in
debian, please consider encouraging upstream to add a sop frontend, and
get it tested in the interop test suite!

# The Bottom Line

Interoperability and engagement with the broader ecosystem matters,
especially in a store-and-forward ecosystem like OpenPGP.  I'd love to
see a historically important implementation like GnuPG collaborate with
the wider ecosystem.  I don't know how to convince them to do that.  I
think it would be irresponsible of debian to endorse (by adoption) a
cryptographic project that has chosen to opt out of public participation
and consensus building for this critical bit of infrastructure, even if
the public consensus isn't exactly what they wanted.

Hopefully

        --dkg

PS For standards nerds who are interested in yet another TLS analogy: it
   would be a mistake for Debian to adopt [ETS] for its infrastructure
   instead of [TLS 1.3]:

[ETS] https://www.etsi.org/deliver/etsi_ts/103500_103599/10352303/01.01.01_60/ts_10352303v010101p.pdf#page=8

[TLS 1.3] https://datatracker.ietf.org/doc/html/rfc8446

   For the avoidance of doubt, i see LibrePGP as the ETS analogue, and
   the pending OpenPGP crypto-refresh as the TLS 1.3 analogue here.

   This situation isn't quite as bad as that one: i don't believe the
   "LibrePGP" schism is directly trying to sabotage a community security
   goal, the way that ETS is trying to sabotage TLS 1.3's forward
   secrecy.  But in some ways it's worse: ETS is at least a stable
   standard that multiple implementers can interop against, whereas
LibrePGP appears to be shifting based on one implementer's unvetted
   decisions (i've seen very little discussion of LibrePGP from the RNP
   folks, who are the other claimed participant in LibrePGP).  The
   reason that we have community consensus and review of standards is
   specifically to ensure interoperability and avoid cryptographic
   failures.
