
Re: unsigned repositories



On Mon, 05 Aug 2019 at 10:09:09 +0200, David Kalnischkies wrote:
> So far all usecases mentioned here seem to be local repositories
> though. Nobody seems to be pulling unsigned repositories over the
> network [for good reasons].

On CI systems at work, I've often found it useful to use
[trusted=yes] over https: relying on CA-cartel-signed https certificates
gives weaker security guarantees than a signed repository, but is a lot
easier to set up for experiments like "use the derivative's official apt
repository, but overlay smcv's latest test packages, so we can test the
upgrade path before putting them into production".
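
For instance, the CI job's sources.list ends up looking something like
this (hypothetical URLs; only the overlay is marked [trusted=yes], and
the derivative's own repository stays signed):

    deb https://apt.derivative.example.com/ stable main
    deb [trusted=yes] https://ci.example.com/smcv/test-packages/ ./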

I also currently use [trusted=yes] over the virtual network link between
the host system and a test/build VM. That is a way to satisfy
dependencies that are not yet satisfiable in the target suite (testing
against packages that are stuck in NEW or not uploaded yet), or that
have been selectively mirrored from backports or experimental (where
pinning would normally prevent backports or experimental packages from
being used, unless we use apt-cudf like the Debian buildds do, or
replace apt with aptitude like the -backports buildds do).
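
On the VM side that boils down to a single sources.list line pointing
back at the host over the virtual link, something like the following
(address, port and path are placeholders for however the host happens
to serve the repository, e.g. via python3 -m http.server):

    deb [trusted=yes] http://10.0.2.2:8000/overlay ./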

While I *could* use a GPG-signed repository for both of these, that
would require generating GPG keys (draining the system entropy pool
in the process) and installing the public keys as trusted on the test
system, and I'd have to be careful to make sure that generating and using
these test GPG keys doesn't have side-effects on the more important GPG
configuration that I use to sign uploads.

Equally, I *could* make the entire repository available as file:/// in the
VM, but the autopkgtest virtualization interface (which is what I
currently use) doesn't provide direct filesystem access from qemu
VMs to directories on the host (only recursive copying via
tar | ssh | tar), and ideally I don't want to have to copy *everything*
(e.g. there's no need to copy i386 packages when I'm building for amd64
and vice versa).

> The other thing is repositories without a Release file, which seems to
> be something used (legally) by the same class of repositories only, too.

For anything beyond quick experiments I normally use reprepro, so I at
least have a package pool and an unsigned Release file.
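
A minimal conf/distributions stanza for that looks roughly like the one
below (names are placeholders; omitting SignWith is what leaves the
Release file unsigned), with "reprepro includedeb" to add packages:

    Codename: local-test
    Architectures: amd64 source
    Components: main
    Description: throwaway test repository, unsigned Release

    $ reprepro -b /srv/repo includedeb local-test foo_1.0-1_amd64.deb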

However, when the Open Build Service (which we use a lot at work)
exports projects as apt archives, feeding them through reprepro would
require per-repository configuration (to tie together potentially
multiple OBS projects into one apt repository, and to select an
appropriate signing key). So the normal case seems to be that
permanent/long-term/sysadmin-controlled apt repositories use reprepro,
but anything short-term or purely automatic (like personal branches
used to test a batch of changes) just uses the flat repositories that
OBS exports, typically something like
"deb https://build.example.com/home:/smcv:/branches:/mydistro:/1.x:/main:/my-topic-branch ./".
I'll check whether OBS generates an unsigned Release file for those,
or no Release at all.

Because reprepro is designed for a stable, persistent repository that
is accessed by production apt clients, it isn't very happy about having
version numbers "go backwards", having binary packages disappear, etc.,
but tolerating those is often necessary while you are getting prerelease
packages right, especially if not everyone in the team is a Debian
expert. For test-builds that are expected to be superseded multiple
times, have some of their changes rejected at code review, etc., it's a
lot more straightforward to have the repository recreated from scratch every
time: yes, this can break apt clients, but if those apt clients are all
throwaway test systems anyway (as they ought to be if you are testing
unreviewed/untested code), then that doesn't actually matter.

> What is it what you need? Sure, a local repository works, but that
> sounds painful and clunky to setup and like a workaround already, so in
> effect you don't like it and we don't like it either, it just happens to
> work so-so for both of us for the time being.

Here are some use cases, variously from my own Debian contributions, my
contributions to salsa-ci-pipeline, and my day job:

* Build a package that has (build-)dependencies in a "NotAutomatic: yes"
  suite (i.e. experimental), and put it through a
  build / autopkgtest / piuparts / etc. pipeline, ideally without needing
  manual selection of the packages that have to be taken from experimental,
  and ideally as close as possible to the behaviour of the official
  experimental buildds, so that it doesn't succeed in local testing only
  to FTBFS on the official infrastructure.
  (gtk+4.0, which needs graphene from experimental, is a good example.
  sbuild can be configured to use apt-cudf, as sketched after this list,
  but autopkgtest and piuparts cannot; I need to open wishlist bugs
  about those. At the moment I use a local apt repo with selected
  packages taken from experimental as a workaround.)

* Build a package that has (build-)dependencies in a "NotAutomatic: yes",
  "ButAutomaticUpdates: yes" suite (i.e. -backports), and put it through a
  similar build / autopkgtest / piuparts / etc. pipeline, again ideally
  without needing manual selection of the packages that have to be taken
  from backports, and ideally as close as possible to the behaviour of
  the official -backports buildds, so that it doesn't succeed in local
  testing only to FTBFS on the official infrastructure.
  (flatpak in stretch-backports is a good example. sbuild can be
  configured to use aptitude, matching the official buildds, but
  autopkgtest and piuparts currently cannot; again, I need to open
  wishlist bugs about those. At the moment I use a local apt repo with
  selected packages taken from -backports as a workaround.)

* Build libfoo, which is not in Debian yet (either not yet ready for
  upload, or stuck in NEW). Put foo, which build-depends on libfoo,
  through a CI pipeline (build / autopkgtest / piuparts / etc.)
  in an as-clean-as-possible environment consisting of the target suite
  plus an "overlay" with the recently-built libfoo, in order to test
  both foo and libfoo and get them ready to upload.
  (This really needs either a local repository of some sort, or an
  equivalent of autopkgtest's ability to add extra .deb files on the
  command line and have them put in an apt repository on the test
  system, as sketched after this list; there is no public apt suite
  that has the necessary packages at all.)

* Build updated or new packages for a Debian derivative. Feed them to
  a CI system that will run autopkgtest or a vaguely piuparts-like upgrade
  test (SteamOS has a rather complicated test to make sure that proposed
  upgrades fit within the limitations of the old version of
  unattended-upgrades in use) without requiring access to the derivative's
  official signing key, or producing packages or repositories that could
  accidentally be accepted by production systems.
  (For the SteamOS unattended-upgrades test in particular, we really need
  this to be a genuine apt repository with a Release file, even if it
  isn't signed, because unattended-upgrades relies on the Release file
  for its pseudo-pinning. We cannot change the pseudo-pinning without
  invalidating the test.)
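
To make the libfoo/foo case concrete: the "overlay" can be as simple as
handing the freshly-built binaries to autopkgtest on the command line,
which puts them into an apt repository on the test system for the
duration of the run (package names, versions and the image name here
are placeholders):

    $ autopkgtest foo_1.2-3.dsc \
          libfoo0_1.0-1_amd64.deb libfoo-dev_1.0-1_amd64.deb \
          -- qemu autopkgtest-unstable-amd64.img

For the first two cases, the sbuild side is just a resolver switch,
something like --build-dep-resolver=aspcud (or =aptitude for the
-backports case) together with --extra-repository pointing at
experimental or backports; it's autopkgtest and piuparts that currently
have no equivalent.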

I hope this helps to illustrate the sorts of things I'm doing.

Thanks,
    smcv

