
Re: Package dependency versions and consistency



On Mon, Dec 28, 2020 at 03:20:35PM +0200, Adrian Bunk wrote:
> On Sat, Dec 26, 2020 at 02:55:17PM -0800, Josh Triplett wrote:
> >...
> > If you want to package abc version 1.2.3, and among many other things,
> > abc depends on xyz version 2.1.4, and xyz has a new version 3.0.1 now,
> > it makes sense to work with the upstream of abc, sending them a patch to
> > migrate to the new version, and waiting for abc 1.2.4 to come out with
> > that update. It *doesn't* make sense to maintain a downstream Debian
> > patch to make abc work with the newer xyz.
> 
> Maintaining a backported patch is usually very cheap,

1) Writing, testing, and maintaining 200 backported patches in order to
   upload a package is not. On average, doing 200 instances of a task
   that is "usually" easy means encountering several exceptions that
   are not easy, and makes mistakes more likely besides.
2) There's not enough benefit to the patch to carry it downstream. This
   is part of the point of this thread: allow transitions to happen in
   the archive and in concert with upstream, rather than before first
   upload or via Debian-specific changes.
3) Such a patch would require further analysis to determine whether
   other changes need to happen in concert to avoid breakage. If abc
   exposes any types from xyz, abc may need a major version bump as
   well; this isn't common, but anyone writing such a patch would need
   to check. Much more commonly, moving to xyz 3.0.1 requires checking
   whether some other dependency uses a different version of xyz and
   expects to interoperate: passing an xyz::Foo to
   xyz_helper::takes_a_foo won't work if you upgrade your xyz but not
   xyz_helper's xyz, which typically means upgrading xyz_helper as
   well.

Note that all of this only applies when talking about a major version
change. For a minor version change, it suffices to simply rebuild abc
once a new version of xyz is uploaded, and abc will pick up the new
version of xyz.
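In Cargo-style semver ecosystems, this falls out of how version
requirements are interpreted. A hypothetical dependency declaration for
abc might look like:

```toml
# Hypothetical Cargo.toml fragment for abc.
[dependencies]
# In Cargo, "2.1.4" is shorthand for "^2.1.4": any 2.x >= 2.1.4
# satisfies it, so simply rebuilding abc after xyz 2.2.0 is uploaded
# picks up the new minor version. Moving to xyz 3.0.1 requires editing
# this line (and possibly the code), because a caret requirement never
# crosses a major version boundary.
xyz = "2.1.4"
```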

It might also make sense to work with the Semantic Versioning standard
and ecosystems using semver to create a mechanism for specifying a
potentially unbounded number of "downstream revision" numbers, which
would make it substantially safer to make changes downstream when
absolutely necessary. That still doesn't mean we should do so at every
possible opportunity, only when the benefit substantially outweighs the
drawbacks. The right place for fixes is *always* upstream;
Debian-specific patches are always technical debt.
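The ordering such a "downstream revision" would need can be sketched
with plain tuples, deliberately avoiding any concrete syntax (the
separator and encoding are exactly what a semver extension would have
to standardize):

```python
# Sketch of the ordering a hypothetical "downstream revision" extension
# to semver would need. Tuples stand in for version strings; no concrete
# syntax is implied.

def version_key(major, minor, micro, downstream=()):
    """Sort upstream versions first; within one upstream version,
    sort by downstream revision (an empty downstream sorts lowest,
    so the pristine upstream release precedes any downstream rebuild)."""
    return (major, minor, micro, tuple(downstream))

# 1.2.4 < 1.2.4 with downstream revision 1 < downstream revision 2
assert version_key(1, 2, 4) < version_key(1, 2, 4, (1,))
assert version_key(1, 2, 4, (1,)) < version_key(1, 2, 4, (2,))
# ...and any downstream revision of 1.2.4 precedes upstream 1.2.5.
assert version_key(1, 2, 4, (2,)) < version_key(1, 2, 5)
```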

> > abc can just build-depend on
> > xyz-2, and a later version of abc can build-depend on xyz-3. That isn't
> > a reflection of complexity in xyz, or in abc.
> 
> It is usually a reflection of very poor API design in xyz.

No, it isn't. It's a reflection that APIs do not become utterly
immutable when published, and that it's acceptable to fix and evolve
designs rather than working around them at all costs.  If you want the
old API, the old version still exists. If you need a targeted bugfix in
the old version, it's possible to publish a new micro-version of the
package containing just that bugfix. This is not a slippery slope, where
major versions either never change, or change daily, or cause the amount
of pain Python 3 did when they change. There are far more points in the
spectrum than that.

More to the point, Debian does not control upstream, making this entire
line of discussion moot. Upstream packages in ecosystems that use semver
will continue to do so, and some of them will *occasionally* bump major
versions. It doesn't help to argue over whether Debian is always right,
or other ecosystems are always right, or (as is typically the case) the
truth is more nuanced. That argument won't turn out any differently
than its previous iterations, and at the end of the day the problem of
how to handle packaging will be no closer to a solution.

You don't have to work on that solution. But the point of this thread is
to seek solutions, not to complain about how it'd be easier if new
software acted more like existing software so we didn't have to develop
new ways to handle the new software.

> >...
> > By contrast with that, security support may not be nearly as much of an
> > issue. The *majority* of libraries in Debian don't require any security
> > updates at all.
> 
> My basic assumption would be that any code that might handle untrusted 
> input is only one security audit away from a CVE.

My statement stands: the vast majority of libraries in Debian don't have
any security updates.

Also, let's make it easier to package code written in languages where
that's less of a problem, where every single piece of code handling a
string or a memory allocation isn't a security bug waiting to happen.

> >...
> > I'm not talking about packaging xyz 1.2.3, 1.2.4, 1.3.1, and 2.0.1. When
> > xyz 1.3.1 is uploaded, it can safely replace 1.2.4,
> 
> In stable this is not safe.

I was talking about packaging in unstable. In stable, I would expect a
security update to come in the form of a micro version bump (for
instance, from 1.2.4 to 1.2.5). Or, if we improve the semver standard,
perhaps that could be 1.2.4(something)1 for some standardized
(something).

> Security updates get automatically installed to production environments
> and deployed devices of users.
> 
> If xyz 1.2.4 in stable has a CVE that is fixed in 1.3.1, to minimize
> the risk of regressions the usual approach is to do the minimal fix
> of applying the CVE fix only to 1.2.4.

I'm aware of how Debian stable is used and how it currently works.

> > and packages using xyz 1.2.4 can get rebuilt via binNMU if needed.
> 
> This is also a huge problem.

It's not the subject of this thread.

> The way a distribution like Debian works, I do not see how security 
> support would be possible for a static-only ecosystem with a non-trivial 
> number of packages.

Cheaper binNMU-style rebuilds, and better incremental downloads. This is
a solvable problem, once you start from the perspective that it requires
a solution.

- Josh Triplett

