
Re: Bits (Nybbles?) from the Vancouver release team meeting

Steve Langasek wrote:
> This change has superseded the previous SCC (second-class citizen
> architecture) plan that had already been proposed to reduce the amount of
> data Debian mirrors are required to carry; prior to the release of sarge,
> the ftpmasters plan to bring scc.debian.org on-line and begin making
> non-release-candidate architectures available from scc.debian.org for
> unstable.

What is the planned layout of scc.debian.org? Separate per-architecture
archives (<arch>.scc.debian.org or scc.debian.org/<arch>/...), or a
single archive that requires partial-mirroring tricks? Will arch:all
packages be duplicated there, or will they need to be fetched from some
other mirror?

> - the release architecture must have N+1 buildds where N is the number
>   required to keep up with the volume of uploaded packages
> - the value of N above must not be > 2

So that's:
"Two machines need to keep up as buildds, and we want one more as backup"

> - the release architecture must have successfully compiled 98% of the
>   archive's source (excluding architecture-specific packages)

Does this mean an architecture without Java/Mono/OCaml/foo support is
unlikely ever to become a release candidate?  I think this should be
"98% of the packages it is expected to support".

> Architectures that are no longer being considered for stable releases
> are not going to be left out in the cold.  The SCC infrastructure is
> intended as a long-term option for these other architectures, and the
> ftpmasters also intend to provide porter teams with the option of
> releasing periodic (or not-so-periodic) per-architecture snapshots of
> unstable.

So, if an SCC machine's user has to rely on its stability, all he can
do is grab a snapshot and hope it works for him. No reliable upgrade
path, no security updates, and not even the possibility of fixing known
breakage without upgrading to the next snapshot (and introducing the
next slew of bugs that way).

AFAICS that's pretty much "left out in the cold" for any purpose beyond
a hobbyist's toy behind a firewalled network.

Was there any thought given to the idea of doing the releases for SCC
architectures on a best-effort basis only, but still in the ftp.d.o
archive? This would give porters who care about releases the means to
provide something more useful than what's outlined above. Basically it
means putting the responsibility for SCC releases mostly in porters'
hands, and dropping it on the floor if they don't care.

The increased number of source packages could be alleviated by
automatically purging binary packages (with advance warning) from
testing if the "best effort" turns out not to be timely enough. I'm
thinking of something like:
  - Source hasn't been referenced by any release-candidate
    distribution for 2 weeks -> warning
  - After 4 weeks -> removal of the corresponding binaries
This means weeding out obsolete source packages needs at least a
4-week freeze, which seems to be the minimum anyway.
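The aging rules above could be sketched roughly like this (a
hypothetical Python illustration; the package names, dates and the
last_referenced map are made up, and a real implementation would of
course read its data from the archive database):

```python
from datetime import date, timedelta

WARN_AFTER = timedelta(weeks=2)
REMOVE_AFTER = timedelta(weeks=4)

def classify(last_referenced, today):
    """Map each source package to 'keep', 'warn' or 'remove' based on
    when any release-candidate distribution last referenced it."""
    actions = {}
    for source, last_ref in last_referenced.items():
        age = today - last_ref
        if age >= REMOVE_AFTER:
            actions[source] = "remove"   # drop the corresponding binaries
        elif age >= WARN_AFTER:
            actions[source] = "warn"     # advance warning to the porters
        else:
            actions[source] = "keep"
    return actions

today = date(2005, 3, 28)
last_referenced = {
    "foo": date(2005, 3, 25),   # referenced 3 days ago  -> keep
    "bar": date(2005, 3, 10),   # 18 days unreferenced   -> warn
    "baz": date(2005, 2, 20),   # 36 days unreferenced   -> remove
}
print(classify(last_referenced, today))
```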

If an SCC testing repository gets badly out of shape (i.e. lots of
uninstallable packages), dropping it means simply stopping the testing
migration and/or clearing the Packages.gz, depending on the future
prospects of getting the work done.
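"Clearing the Packages.gz" amounts to publishing an empty index, which
is cheap to do; something like the following (a hypothetical sketch,
the file names are just the usual convention and real archive tooling
would handle Release files, signatures etc.):

```shell
# Publish an empty package index so apt sees no installable packages
# for the dropped suite, while the pool contents can stay in place.
: > Packages                        # truncate to an empty index
gzip -9 -c Packages > Packages.gz   # compressed copy that apt fetches
zcat Packages.gz | wc -c            # sanity check: 0 bytes advertised
```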

I believe such a time-based scheme would improve the porters' focus
on the actually used and useful packages, without imposing an undue
burden on the release team and the rest of the project.

If a package has an SCC-arch-specific problem while it is frozen for
release, and the release team finds it too risky/inconvenient/whatever
to break the freeze, the package would have to be dropped. In the worst
case (breakage in a package needed for basic operation) this means the
SCC arch has to skip the release cycle.

The obvious problem is that the archive tools would need to be enhanced
to support this scheme, including dealing with any performance problems
that crop up. The BTS would also need an architecture flag in order to
apply RC bugs only to the testing migration of the affected
architectures.

> Also, since the original purpose of the SCC proposal was to reduce the size
> of the archive that mirrors had to carry, the list of release candidate
> architectures will be further split, with only the most popular ones
> distributed via ftp.debian.org itself.  The criterion for being distributed
> from ftp.debian.org (and its mirrors) is roughly:
> - there must be a sufficient user base to justify inclusion on all
>   mirrors, defined as 10% of downloads over a sampled set of mirrors

How are releases and mirroring related in that scheme? A 10% quorum
for mirroring is surely more than some of the anticipated release
candidates will have. Will the release archive be split per
architecture? If it is split, what is scc.debian.org good for?

