
Re: Bits (Nybbles?) from the Vancouver release team meeting



On Tue, Mar 15, 2005 at 11:54:24AM +0000, Henning Makholm wrote:
> Scripsit Steve Langasek <vorlon@debian.org>

> >> I would add as for the core set architecture:
> >> - there must be a developer-accessible debian.org machine for the 
> >> architecture.

> > This gets a little tricky for non-RC architectures, because if it's not
> > already (or currently) a released architecture, we have no stable distro
> > that can be installed on it, which means we have no security support for
> > it; without security support, DSA isn't willing to maintain it, which
> > means they probably aren't going to want to put a "debian.org" name on
> > it, either -- and they certainly won't want to give it privileged access
> > to LDAP.

> So how can an architecture ever become releaseworthy? It will not get
> release-certified before it has a debian.org machine, and it cannot
> get a machine in debian.org before it has a stable version with
> security support, and it's not allowed to create a stable version and
> provide security support for it before it has been release-certified.

Yes, this is a bootstrapping problem in the proposal, which was
recognized by the ftpmasters at the meeting.  Is it ok to assume that
DSA will come up with a reasonable solution for this logic hole, or do
you think this needs to be discussed further?

My assumption is that, for *new* RC architectures, the requirements "the
DSA must be willing to support debian.org machine(s) of that
architecture" and "there must be a developer-accessible debian.org
machine for the architecture" only go into effect once the architecture
has been released as stable, but that everyone needs to agree to comply
with them *before* it is considered an RC arch.

> > Well, if we wanted to make the decision without the input of developers,
> > announcing it on d-d-a in advance of implementation isn't a very
> > effective way to make that happen, is it?

> If you wanted to make the decision _with_ the input of developers, why
> did all the powers that be vehemently deny that the number of
> architectures was a problem for the release schedule, right until
> everyone turned on a platter and presented this fait accompli?

Assuming "powers that be" means "the release team" here, we did not make
any such claim, vehemently or not.  Joey Hess has already talked,
repeatedly, about the personal time investment he's made getting d-i
ported to all architectures; the preceding release announcement
mentioned the fact that some architectures were short on kernel
developers and needed more help to get the turnaround time for security
fixes down to an acceptable level; and per-architecture problems on one
architecture or another have been a frequent topic in release team
announcements over the past year.

There have been some attempts to dispel FUD about *why* having a high
arch count is bad for the release, but that's not the same thing.

> > but I *do* know that stable support for all of these architectures
> > costs us in terms of the release cycle.

> The solution to that is to decouple the secondary architectures from
> the release cycle of the main architecture. There is no visible reason
> why the solution has to include a ban on making any stable release for
> a minor architecture at all.

The question is: if we're not going to release in tandem, the source
packages aren't going to be kept in sync (various people have already
implied that any "stable" release for these archs will require
separate/patched sources, which isn't really a good thing), and the
existing release team isn't going to manage that process, is it
actually still a *good thing* to tie it to our existing Debian
infrastructure in other ways?

Once you start talking about divergent packages between architectures,
a lot of the reasons I'm hearing for why people want Debian to *do*
releases for these archs seem to dissipate, because they no longer have
assurances that the OS is "the same" on different hardware.  If
unstable-only ports aren't enough, and the sources aren't going to
match in the testing/stable versions, maybe we should start thinking
about parallel infrastructure for these other ports as well -- and
maybe it's under the Debian umbrella, and maybe it isn't.  I think it's
better if it *is* still "Debian", but we need to be realistic: once we
trim those architectures from the list being released in lockstep,
we've removed the pressure that keeps the sources in sync, and the
result is no longer going to match the stable Debian we provide on
other architectures.

I'd love for this discussion to result in a plan that gives porters the
resources they need to do the stable Debian releases they want, without
putting the burden on the non-parallelizable release team to make it
happen.

-- 
Steve Langasek
postmodern programmer


