
Bug#27906: [PROPOSED] Binary-only NMU's



> This needs to be fixed, then. Unless we can guarantee that the same
> version of the same package will always work on all architectures,
> we need to be able to have differing source versions simultaneously
> while portability issues are sorted out.

I think Paul meant something different: if the maintainer uploads a
new source version together with an i386 binary, the binary versions
for all other architectures do not match the source in the FTP archive
until the package has been recompiled on those architectures...

If you want to fix this by keeping several source versions available,
dinstall would first have to check all the binary-* directories to see
which source versions are still needed by any architecture...
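
To make that concrete, here is a minimal Python sketch of such a check
(my own illustration, not dinstall's actual code): it walks a set of
binary-*/Packages indices and collects the source versions that any
architecture still refers to. Field handling is simplified compared to
real Packages stanzas.

    import re

    def referenced_source_versions(packages_files):
        # Sketch only: collect the (source, version) pairs that any
        # binary-<arch>/Packages index still refers to, so an old source
        # version would only be dropped once no architecture needs it.
        needed = set()

        def flush(stanza):
            pkg = stanza.get("Package")
            ver = stanza.get("Version")
            if not (pkg and ver):
                return
            src = stanza.get("Source", pkg)
            m = re.match(r"(\S+)\s+\((.+)\)$", src)   # "Source: foo (1.2-3)"
            if m:
                src, ver = m.group(1), m.group(2)
            needed.add((src, ver))

        for path in packages_files:
            stanza = {}
            with open(path) as f:
                for line in f:
                    line = line.rstrip("\n")
                    if not line:                      # blank line ends a stanza
                        flush(stanza)
                        stanza = {}
                    elif not line.startswith((" ", "\t")) and ":" in line:
                        field, _, value = line.partition(":")
                        stanza[field] = value.strip()
            flush(stanza)                             # file may end without a blank line
        return needed

A source version would then only be removable once it no longer shows
up in this set for any architecture.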

> This is very interesting - thanks for sharing it with us. Perhaps
> some of the autorecompilation on i386 could be beefed up to make the
> compilation work at least there.

All the porters report such errors to the maintainers... But our
failure ratio would be much lower if source maintainers watched out
for these things in the first place... What about some lintian checks
for such common mistakes, and a requirement to build at least once
from a freshly unpacked source tree? :-)
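
The "freshly unpacked tree" requirement would also be easy to check
mechanically before an upload. A rough Python sketch of what such a
pre-upload test could do, assuming dpkg-source and dpkg-buildpackage
are available (the helper name is my own, not an existing tool):

    import os
    import subprocess
    import tempfile

    def builds_from_clean_tree(dsc_path):
        # Unpack the source package into a throwaway directory and try a
        # full build there, so leftovers in the maintainer's working tree
        # cannot hide files missing from the upload.  Assumes the
        # .orig.tar.gz/.diff.gz sit next to the .dsc.
        with tempfile.TemporaryDirectory() as tmp:
            srcdir = os.path.join(tmp, "src")
            subprocess.run(["dpkg-source", "-x", os.path.abspath(dsc_path), srcdir],
                           check=True)
            result = subprocess.run(["dpkg-buildpackage", "-us", "-uc"], cwd=srcdir)
            return result.returncode == 0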

> It is, however, the way _we_ distribute the source code. We are
> required to distribute the source code somehow, and if you think
> that paragraph doesn't apply, then I'd like you to point out where
> we're distributing it at all.

Can't we redefine the way we distribute the source code? Change it
from "only on the FTP site" to "usually on the FTP site, plus in rare
cases an additional patch in the BTS".

> I'm sorry to say `tough' here, but the only way to reliably produce
> source and binary packages that correspond to each other and that
> work is to build from a clean source tree.

Basically you're right, but I hope I know what I'm doing when I do
such things... You would also become inventive if it could save you,
let's say, 10 hours of work :-)

> Recompiling on other architectures ensures that the
> `per-architecture patch' which is supposed to fix compilation on one
> architecture doesn't break others.

But does that really require recompiling the NMU on the other
architectures? The NMU patch will (hopefully...) soon be integrated
into a proper version, and then the package needs to be recompiled
again anyway.

> If you feel it appropriate, yes. If this is currently not policy
> then perhaps we should make it so.

Current policy is to ask the maintainer first and wait some time
before making the NMU, unless it's really urgent (security fixes and
the like). But that wait is often too long, which is one reason why we
make binary-only NMUs.

Roman

