
Re: releasing major library change to unstable without coordination



Hi Sandro!

* Sandro Tosi <morph@debian.org> [2021-12-22 19:24]:
> there's also a problem of resources: let's take the example of numpy,
> which has 500+ rdeps. Am I expected to:
>
> * rebuild all its reverse dependencies with the new version
> * evaluate which packages failed, and whether a failure is due to the
>   new version of numpy or an already existing/independent cause
> * provide fixes that are compatible with both the current version and
>   the new one (because we can't break what we currently have and we
>   need to prepare for the new version)
> * wait for all of the packages with issues to have applied the patch
>   and been uploaded to unstable
> * finally upload the new version of numpy to unstable
>
> ?
>
> that's unreasonably long, time-consuming and work-intensive, for
> several reasons
That's true. However, I think it is reasonable to expect a
maintainer to
* look at the release notes for documented API breakage,
* rebuild a few reverse dependencies (ideally the ones which
  exercise the most functionality, but a random pick is probably
  fine, too),
* file bugs if you find any issues, and
* monitor the PTS and check for autopkgtest failures, so you can
  help figure out (or even fix) what broke.
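
The "random pick" of reverse dependencies can be automated. A minimal sketch, assuming you have already captured the output of `apt-cache rdepends` (the package names below are illustrative examples, not a real rdeps list):

```python
import random

# Hypothetical output of `apt-cache rdepends python3-numpy`; in practice
# you would capture it with subprocess.run(). Names here are examples only.
RDEPENDS_OUTPUT = """\
python3-numpy
Reverse Depends:
  python3-scipy
  python3-pandas
  python3-matplotlib
  python3-astropy
  python3-sklearn
"""

def pick_rebuild_sample(rdepends_output, k, seed=0):
    """Pick a random sample of reverse dependencies for test rebuilds."""
    lines = rdepends_output.splitlines()
    # Reverse dependencies are the indented lines after "Reverse Depends:".
    rdeps = [ln.strip() for ln in lines[2:] if ln.startswith("  ")]
    random.seed(seed)  # fixed seed only so the sample is reproducible
    return random.sample(rdeps, min(k, len(rdeps)))

print(pick_rebuild_sample(RDEPENDS_OUTPUT, 3))
```

Each sampled package can then be rebuilt in a clean chroot (e.g. with sbuild) against the new library version.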

Personally, I also like to run something like
`git diff upstream/<old> upstream/<new> -- '*.h'` or
`git diff upstream/<old> upstream/<new> -- '*.py'` to get an idea of
how much has changed, and if I find breakage (either through
inspection or rebuilding), look for other usage of the broken API
with sources.debian.org.
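
The same idea can be applied one level up: compare the public names exported by the old and new versions to spot removals. A toy sketch (the name lists are made up; in a real check they could come from `dir(module)` of each version or from parsing the diffed files):

```python
def removed_public_api(old_names, new_names):
    """Return public names present in the old release but gone in the new one."""
    old_public = {n for n in old_names if not n.startswith("_")}
    new_public = {n for n in new_names if not n.startswith("_")}
    return sorted(old_public - new_public)

# Illustrative example: one public name disappears between releases.
old = ["dot", "matmul", "alltrue", "_internal"]
new = ["dot", "matmul", "_internal"]
print(removed_public_api(old, new))  # → ['alltrue']
```

Any name this flags is a candidate for a sources.debian.org search across the reverse dependencies.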

> Maybe it's just laziness on my part, but there needs to be a cutoff
> between making changes/progress and dealing with the consequences,
> rather than walking on eggshells every time there's a new upstream
> release (or even a patch!) and you need to upload a new package.
>
> I choose making progress.
I believe if you are the maintainer of an important package with many
reverse dependencies, you should spend more time avoiding breakage,
because you have a huge leverage effect. For instance, if you can cut
corners to save 10 hours of work, but 100 other DDs will need to
spend 30 minutes each to fix the resulting breakage, it is still a
bad tradeoff.
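
The tradeoff is plain arithmetic (using the hypothetical numbers from the paragraph above):

```python
# Hypothetical figures from the example: hours the maintainer saves by
# cutting corners vs. the time lost across the project as a result.
hours_saved_by_maintainer = 10
affected_maintainers = 100
hours_lost_each = 0.5

total_hours_lost = affected_maintainers * hours_lost_each
print(total_hours_lost)  # → 50.0, five times the 10 hours saved
```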

OTOH, as a maintainer of an unpopular leaf package, I can get away
with atrocious uploads because nobody but me will notice or care.


Cheers
Timo


--
⢀⣴⠾⠻⢶⣦⠀   ╭────────────────────────────────────────────────────╮
⣾⠁⢠⠒⠀⣿⡁   │ Timo Röhling                                       │
⢿⡄⠘⠷⠚⠋⠀   │ 9B03 EBB9 8300 DF97 C2B1  23BF CC8C 6BDD 1403 F4CA │
⠈⠳⣄⠀⠀⠀⠀   ╰────────────────────────────────────────────────────╯
