
Re: more binnmus to improve the state of armhf testing

On Mon, 2012-02-13 at 19:52 +0100, Julien Cristau wrote:
> On Sun, Feb 12, 2012 at 23:37:34 +0000, peter green wrote:
> > Here are some more binnmus to get binary packages into armhf testing
> > that don't look like they will migrate from unstable any time soon.
> > 
> What makes you think that?

Indeed.  While it's great that you're trying to help improve the state
of the armhf port in testing, this request is a little confusing.

> > nmu transfig hdf5 emacs23 libgtk2-perl audacious audacious-plugins
> > transfig vlc . armhf . testing .  -m 'build for armhf testing'
> hdf5 and friends will migrate soonish.  vlc's already in sync.

And indeed, the binNMUs which were scheduled for vlc were then rejected
by dak: not only is the package in sync, it is already at +b1 in
testing on several architectures, including armhf.

> transfig, libgtk2-perl and audacious-* need their bugs fixed.

I'm also not sure why these packages are so important.  The
edos-debcheck output for armhf/wheezy, at least on the server hosting
buildd.d.o, does not show any packages whose installability is blocked
by them.

For core packages or those which are blocking a large number of other
(possibly important) packages, binNMUing in testing makes some sense.
In many cases, if a package is missing from testing because something
is blocking its migration from unstable, effort is better spent fixing
those blockers than binNMUing in testing.

> The
> netcdf stuff is just crazy.  But now I see somebody scheduled those
> anyway.  *sigh*.

For future reference, one needs to be very careful when requesting /
scheduling binNMUs in suites other than unstable / experimental.
wanna-build's "installed version" field lists the source version, and
the fields indicating that binNMUs have been scheduled don't propagate
across suites.  As noted above, this can lead to situations such as
scheduling +b1 in testing when that version is already in the archive
(or has been at a previous point).
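The failure mode above can be made concrete: since a +bN suffix must
never be reused once it has appeared in the archive, a safe scheduler
would look at the binary versions a package has (or has had) in *every*
suite, not just the one being binNMUed. A minimal sketch of that
version arithmetic, assuming a list of already-seen binary versions is
available (this is not a wanna-build interface):

```python
# Hedged sketch of choosing a safe binNMU suffix. Debian binNMU
# versions append "+bN" to the source version; dak rejects an upload
# whose version already exists, so N must exceed every N ever used
# in any suite.
import re

def next_binnmu(source_version, binary_versions):
    """binary_versions: version strings seen in ANY suite, e.g.
    ["1.2-3", "1.2-3+b1"]. Returns the next unused binNMU version."""
    highest = 0
    for v in binary_versions:
        m = re.fullmatch(re.escape(source_version) + r"\+b(\d+)", v)
        if m:
            highest = max(highest, int(m.group(1)))
    return f"{source_version}+b{highest + 1}"

# vlc-like case: testing already carries +b1, so +b2 is the first
# version dak would accept.
print(next_binnmu("1.2-3", ["1.2-3", "1.2-3+b1"]))  # 1.2-3+b2
```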


