
Re: Bits (Nybbles?) from the Vancouver release team meeting



On Mon, Mar 14, 2005 at 10:54:33AM +0000, Henning Makholm wrote:

> > For these reasons, I think the snapshotting approach is a better option,
> > because it puts the package selection choices directly in the hands of
> > the porters rather than trying to munge the existing testing scripts
> > into something that will make reasonable package selections for you.

> What is "the snapshotting approach"?  I understood the announcement
> such that the lesser architectures are only ever allowed to have a
> single version of each .deb distributed by the project, namely the
> latest one built at any given time.

It doesn't necessarily mean there's only one binary at a time for the arch;
it does mean that keeping (stable,testing,unstable) binaries+sources around
for architectures that are not being kept in sync is seen as (needlessly?)
taxing on the infrastructure.  It's worth looking for better configurations
that meet porters' needs while making more efficient use of resources.

> I think that would be vastly to the non-benefit of such an
> architecture's users, and I don't see how other architectures
> can be harmed by allowing the lesser architectures to distribute
> whatever .debs they manage to build corresponding to the official
> testing and stable distributions.

There's nothing in place today that would clean up binary packages from
testing when they no longer correspond to the source needed by the release
architectures.  Anything put in place to do this would mean the "testing"
suite for these architectures would have large numbers of uninstallable
packages, and wouldn't resemble testing for the release candidate
architectures at all.
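
To illustrate the kind of cleanup check that would be needed (nothing like
this exists today), here's a rough sketch -- the file paths, the m68k
example, and the simplified control-file parsing are purely illustrative,
not how ftp-master actually lays things out:

    # Flag per-arch binaries in "testing" whose source version no longer
    # matches the source actually present in testing for the release archs.
    # Multi-line fields and multiple source versions are not handled.
    import re

    def parse_stanzas(path):
        """Yield a dict of field -> value for each paragraph in a control file."""
        stanza = {}
        with open(path) as fh:
            for line in fh:
                line = line.rstrip("\n")
                if not line:
                    if stanza:
                        yield stanza
                        stanza = {}
                elif ":" in line and not line.startswith((" ", "\t")):
                    key, _, value = line.partition(":")
                    stanza[key] = value.strip()
            if stanza:
                yield stanza

    def source_of(binary):
        """Return (source name, source version) for a binary package stanza."""
        src = binary.get("Source", binary["Package"])
        m = re.match(r"(\S+)\s+\((.+)\)", src)
        if m:
            return m.group(1), m.group(2)
        return src, binary["Version"]

    # Source versions currently in testing for the release architectures.
    testing_sources = {s["Package"]: s["Version"]
                       for s in parse_stanzas("testing/Sources")}

    # Binaries the non-release arch has managed to build.
    for binary in parse_stanzas("testing-m68k/Packages"):
        name, version = source_of(binary)
        if testing_sources.get(name) != version:
            print(f"{binary['Package']} built from {name} {version}, "
                  f"which is no longer in testing")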

> > First, if they're not being kept in sync, it increases the number of
> > matching source packages that we need to keep around (which, as has
> > been discussed, is already a problem);

> There could be a rule specifying that only versions that _are_ being
> kept in sync can be in the archive, with some reasonable time limit to
> let the arch build the newest version when it migrates to testing.

But removing binary packages for these architectures from testing when they
expire (whether automatically or not) still doesn't guarantee that the
packages left behind will be useful, installable, etc.
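
And to make "installable" concrete: a similarly rough check (reusing
parse_stanzas from the sketch above) for packages whose dependencies would
disappear along with the expired binaries.  Version constraints, Provides,
and real dependency resolution are all glossed over, and the pruned
Packages file name is again just illustrative:

    def unsatisfied_clauses(stanza, available):
        """Return Depends clauses where no alternative names a present package."""
        broken = []
        for clause in stanza.get("Depends", "").split(","):
            clause = clause.strip()
            if not clause:
                continue
            # "foo (>= 1.2) | bar" -> ["foo", "bar"]
            alts = [alt.split("(")[0].strip() for alt in clause.split("|")]
            if not any(alt in available for alt in alts):
                broken.append(clause)
        return broken

    remaining = list(parse_stanzas("testing-m68k/Packages.pruned"))
    available = set(p["Package"] for p in remaining)

    for pkg in remaining:
        broken = unsatisfied_clauses(pkg, available)
        if broken:
            print("%s: unsatisfiable: %s" % (pkg["Package"], "; ".join(broken)))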

> > second, if you want to update using the testing scripts, you either have
> > to run a separate copy of britney for each arch (time consuming,
> > resource-intensive)

> But if the arch's porters are willing to do that, why shouldn't they
> be allowed to?

If this is what's needed, then I think it's a fine idea, as long as it
doesn't interfere with the release by resource-starving ftp-master.  It
might mean that it has to run on a separate piece of hardware -- just as
SCC will put infrequently downloaded archs on a separate mirror network to
avoid negatively impacting our ability to get mirrors for FCC archs.  I'm
not volunteering to maintain the hardware for this second testing.

> > third, if failures on non-release archs are not release-critical
> > bugs (which they're not), you don't have any sane measure of
> > bugginess for britney to use in deciding which packages to keep out.

> A lesser architecture's concept of testing could just be, "we're
> trying our best to keep up with the package versions in the official
> testing, regardless of bug counts".

In that case, what value is there in using it as the basis for a stable
release?

For that matter, why is it necessary to follow testing on an ongoing basis,
instead of just building against everything in stable once it's released?

-- 
Steve Langasek
postmodern programmer
