
Re: Bits (Nybbles?) from the Vancouver release team meeting



Hi Thiemo,

On Mon, Mar 14, 2005 at 05:39:27PM +0100, Thiemo Seufer wrote:

> > This change has superseded the previous SCC (second-class citizen
> > architecture) plan that had already been proposed to reduce the amount of
> > data Debian mirrors are required to carry; prior to the release of sarge,
> > the ftpmasters plan to bring scc.debian.org on-line and begin making
> > non-release-candidate architectures available from scc.debian.org for
> > unstable.

> How is the layout of scc.debian.org planned to look? Separate
> <arch>.scc.debian.org (or scc.debian.org/<arch>/...) archives, or a
> single one which needs partial mirroring tricks? Will arch:all be
> duplicated there, or will it need to be fetched from some other mirror?

I don't know the details of what's currently planned for the mirror layout.
I know that per-arch partial mirroring was a goal at one time, but AIUI the
current thinking is that a two-way mirroring split should be enough to start
with.  I don't know the reason for the change, although I suspect that
per-arch mirroring which is accessible to mirror operators with limited
toolkits, while not causing needless data duplication, is a difficult set of
requirements to code for.
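
Just to illustrate why I think it's hard: the set of pool/ files a given
architecture needs isn't visible from the directory layout at all; you have
to walk that architecture's Packages indices to find it.  Something like
this hypothetical Python sketch (the paths and the helper are inventions of
mine, not any real mirror tool):

    # Hypothetical sketch: compute the pool/ files one architecture needs
    # by reading its uncompressed Packages indices.  Paths are assumptions.
    def pool_files(packages_indices):
        wanted = set()
        for path in packages_indices:
            with open(path) as f:
                for line in f:
                    if line.startswith("Filename: "):
                        wanted.add(line.split(None, 1)[1].strip())
        return wanted

    # One suite/arch; a real mirror script would also have to include
    # arch:all binaries, the sources, and the dists/ metadata itself --
    # which is exactly where the duplication questions come from.
    needed = pool_files(["dists/sid/main/binary-mips/Packages"])

rsync can then be driven from such a file list, but every mirror operator
would need this extra tooling, which I suspect is the sticking point.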

There would definitely be duplication of arch:all between ftp.debian.org
and ports.debian.org (let's call it ports), as well as duplication of the
source.

> > - the release architecture must have successfully compiled 98% of the
> >   archive's source (excluding architecture-specific packages)

> Does this mean an architecture without Java/Mono/OCaml/foo support is
> unlikely to ever become a release candidate?  I think this should be
> "98% of the packages it is expected to support".

No, packages that have not been ported to an architecture due to non-trivial
platform assumptions would be counted as architecture-specific.  Basically,
anything that it would be reasonable to put in Packages-arch-specific should
be excluded from the count.
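
In other words, the criterion is roughly built / (total - arch-specific)
>= 0.98.  A trivial sketch, with made-up figures:

    # Hypothetical sketch of the 98% criterion; all figures are invented,
    # the real numbers would presumably come from wanna-build/buildd stats.
    total_sources = 9200   # source packages in the archive
    arch_specific = 400    # sources excluded via Packages-arch-specific
    built_ok      = 8700   # sources successfully built for this arch

    eligible = total_sources - arch_specific
    ratio = float(built_ok) / eligible
    print("coverage %.1f%% -> qualifies: %s" % (100 * ratio, ratio >= 0.98))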

> [snip]
> > Architectures that are no longer being considered for stable releases
> > are not going to be left out in the cold.  The SCC infrastructure is
> > intended as a long-term option for these other architectures, and the
> > ftpmasters also intend to provide porter teams with the option of
> > releasing periodic (or not-so-periodic) per-architecture snapshots of
> > unstable.

> So, if an SCC machine's user has to rely on its stability, he can only
> grab a snapshot and hope it works for him. No reliable upgrade path,
> no security updates, and there's not even a possibility of fixing known
> breakage without upgrading to the next snapshot (and introducing the
> next slew of bugs that way).

In this particular proposed model, yes, upgrading to the next snapshot would
be the only option.  As for reliable upgrade paths, that would be something
for the porters to decide; they would be setting their own QA standards for
the snapshots.  A reliable upgrade path is a good standard to have, and
usually very achievable -- although FWIW, even in sarge this sometimes means
choosing between dropping a package and shipping it without a sane upgrade
path for its users.

> Was there any thought given to the idea of doing the releases for SCC
> on a best effort basis only, but still in the ftp.d.o archive? This
> would give porters who care about releases the means to provide
> something more useful than what's outlined above. Basically it means
> putting the responsibility for SCC releases mostly in the porters'
> hands, and dropping it on the floor if they don't care.

I think this is the essence of what we're talking about doing, though you
seem to have different ideas about the technical details (below).

> The increased number of source packages could be alleviated by purging
> binary packages automatically (with an advance warning) from testing if
> the "best effort" turns out to be not timely enough. I'm thinking of
> something like:
>   - Source wasn't referenced for 2 weeks by any release candidate
>     distribution -> warning
>   - After 4 weeks -> removal of the corresponding binaries
> This means weeding out obsolete source packages needs at least a
> 4-week freeze, which seems to be the minimum anyway.

> If an SCC testing repository gets badly out of shape (== lots of
> uninstallable packages), dropping it means simply stopping the testing
> migration and/or clearing the Packages.gz, depending on the future
> prospects to get the work done.

If a port is going to try to keep up with the release, then I think it
should go all the way; if we're going to constrain the new package versions
allowed into testing based on what the core RC archs are doing, then I think
you also have to consider just kicking the arch out of testing completely
when it starts lagging behind this badly.  Is a port that regularly lags 4
weeks behind on packages actually going to be useful once it's tagged
"stable"?  4 weeks is barely as long as we're allowing for the sarge freeze,
and *every* upload during that period is likely to be an RC bugfix that
architectures would want to keep up with.
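
For concreteness, the bookkeeping you propose is simple enough to state;
a hypothetical sketch, with all the names invented:

    # Hypothetical sketch of the proposed time-based purge; the field
    # names and thresholds are taken from the proposal above, nothing
    # here is a real archive tool.
    from datetime import date, timedelta

    WARN_AFTER   = timedelta(weeks=2)
    REMOVE_AFTER = timedelta(weeks=4)

    def classify(last_referenced, today):
        # last_referenced: date this source was last referenced by any
        # release-candidate distribution
        age = today - last_referenced
        if age >= REMOVE_AFTER:
            return "remove binaries"
        if age >= WARN_AFTER:
            return "warn porters"
        return "ok"

    print(classify(date(2005, 2, 1), date(2005, 3, 14)))  # -> remove binaries

So the mechanics aren't the problem; my objection is to what the scheme
implies for the usefulness of the resulting stable release.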

If a port isn't keeping up and the logical thing to do is to kick it out of
testing altogether -- which we wouldn't want to have to do too often,
really, since bouncing architectures in and out of testing isn't a great use
of the release/ftp teams' time, though it's probably better than carrying
around an ever-more-stale set of sources in testing for that arch -- we
still have the question of what to do for architectures that can't keep up,
but want to be able to do a stable release for their users.

> I believe such a time-based scheme would improve the porters' focus
> on the actually used and useful packages, without imposing an undue
> burden on the release team and the rest of the project.

Given that there's discussion at all about porters focusing on "actually
used and useful packages", maybe it's better here to discuss what a common
set of such packages might look like for minority ports that can't / aren't
willing to keep up with the whole Debian archive.  Isn't that better,
really, than for porters to feel obliged to keep up with builds for packages
that aren't "used and useful", just so they can have security support?

> The obvious problem is that the archive tools need to be enhanced to
> support this scheme, including performance problems which may crop up.
> The BTS would also need an architecture flag in order to apply RC bugs
> only to the testing migration of the affected architectures.

I understand architecture flags shouldn't be a problem; but performance of
archive tools could be.
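
On the BTS side, the idea reduces to filtering the RC bug count per
architecture when computing whether a package may migrate.  A hypothetical
sketch -- the bug records and the "archs" field are invented, since debbugs
has no such flag today, which is exactly the enhancement you're describing:

    # Hypothetical sketch: count RC bugs against a package for one arch
    # only; an empty "archs" list means the bug applies to all archs.
    RC_SEVERITIES = {"serious", "grave", "critical"}

    def rc_bugs_for_arch(bugs, arch):
        return [b for b in bugs
                if b["severity"] in RC_SEVERITIES
                and (not b.get("archs") or arch in b["archs"])]

    bugs = [{"severity": "grave",   "archs": ["mips"]},
            {"severity": "serious", "archs": []}]
    print(len(rc_bugs_for_arch(bugs, "i386")))   # -> 1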

-- 
Steve Langasek
postmodern programmer
