
Re: Bits (Nybbles?) from the Vancouver release team meeting



Greg Folkert wrote:
On Tue, 2005-03-15 at 00:58 -0800, Steve Langasek wrote:

Hi Aurélien,
On Mon, Mar 14, 2005 at 10:56:51AM +0100, Aurélien Jarno wrote:

Steve Langasek wrote:

The much larger consequence of this meeting, however, has been the
crafting of a prospective release plan for etch.  The release team and
the ftpmasters are mutually agreed that it is not sustainable to
continue making coordinated releases for as many architectures as sarge
currently contains, let alone for as many new proposed architectures as
are waiting in the wings.

Would it be possible to have a list of such proposed architectures?

I think this has already been answered, by someone who knows better than
I.


[snip]

Architectures that are no longer being considered for stable releases
are not going to be left out in the cold.  The SCC infrastructure is
intended as a long-term option for these other architectures, and the
ftpmasters also intend to provide porter teams with the option of
releasing periodic (or not-so-periodic) per-architecture snapshots of
unstable.

My primary desktop machine is an i386, but some time ago, for a limited period, it was an hppa machine, because my i386 had problems. That allowed me to continue my work on Debian packages. If this new infrastructure is set up, would uploads from an SCC architecture to unstable still be allowed? If not, source-only uploads must be allowed again.

Since non-RC (release candidate) architectures are going to be in the
same unstable tree as the RC architectures (uploads to ftp-master,
etc.), I don't see any reason that this would be disallowed.


- there must be a sufficient user base to justify inclusion on all
mirrors, defined as 10% of downloads over a sampled set of mirrors

AFAIK, only i386 currently meets this criterion.

Of the architectures currently in sarge, that's correct.  It's assumed
that amd64 will easily meet this 10% mark for etch.  (If it doesn't,
then the cut-off probably has to be re-thought, since it doesn't make
much sense to have a 1/11 split between ftp.d.o and scc.d.o,
*particularly* when the 11 archs together *would* most likely account
for > 10%.)


BTW, I am not sure this is really a good way to measure the use of an architecture, mainly because users could use a local mirror if they have a lot of machines of the same architecture. How about using popcon *in addition* to that?

This isn't being used to measure the use of the architecture; it's being
used to measure the *download frequency* for the architecture, which is
precisely the criterion that should be used in deciding how to structure
the mirror network.
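
As a rough illustration of how that download-frequency criterion might be evaluated, the following Python sketch tallies per-architecture shares from a sampled mirror access log. The log file name and the filename-based architecture match are assumptions made purely for illustration; only the 10% cut-off comes from the proposal quoted above.

#!/usr/bin/env python
# Rough illustration only: tally per-architecture download shares from a
# sampled mirror access log.  The log file name and the filename pattern
# below are assumptions; only the 10% cut-off comes from the proposal.
import re
from collections import Counter

ARCH_RE = re.compile(r'_([a-z0-9-]+)\.deb')    # e.g. foo_1.0-1_i386.deb -> "i386"

counts = Counter()
with open("mirror-access.log") as log:         # hypothetical sampled log file
    for line in log:
        m = ARCH_RE.search(line)
        if m and m.group(1) != "all":          # skip architecture-independent packages
            counts[m.group(1)] += 1

total = sum(counts.values()) or 1              # avoid division by zero on an empty log
for arch, n in counts.most_common():
    share = 100.0 * n / total
    note = "meets the 10% cut-off" if share >= 10.0 else ""
    print("%-12s %6.2f%%  %s" % (arch, share, note))

Run against logs from a sampled set of mirrors, something like this would roughly answer whether an architecture clears the 10% mark, though, as the next reply points out, it still misses traffic served by private mirrors.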


Okay, I have to comment here, seeing that I personally have, at two
separate locations, two complete mirrors that I use nearly every day.
They only update when a change in the archive is detected. That means
*MY* $PRETTY_BIG_NUMBER of usages of my own mirrors at each location will
mean nothing. I run my own mirror(s) so as to reduce the load on the
Debian network. I actually scaled back what I use and now support only 5
arches: SPARC (and UltraSPARC), Alpha, HPPA (PA-RISC), PowerPC and
x86 (Intel and otherwise). I dropped IA64 a while ago and will pick up
amd64 when it becomes part of sid proper.

How would you address the fact that the bulk of my usage is not even seen
by your network?


To be eligible for inclusion in the archive at all, even in the
(unstable-only) SCC archive, ftpmasters have specified the following

What about experimental?

experimental would also be available.


architecture requirements:

I would add, as for the core set of architectures:
- there must be a developer-accessible debian.org machine for the architecture.

This gets a little tricky for non-RC architectures, because if it's not
already (or currently) a released architecture, we have no stable distro
that can be installed on it, which means we have no security support for
it; without security support, DSA isn't willing to maintain it, which
means they probably aren't going to want to put a "debian.org" name on
it, either -- and they certainly won't want to give it privileged access
to LDAP.

You could say that "there must be a developer-accessible machine for the
architecture" without specifying "debian.org"; but I'm not sure that we
should *require* this, either.  Particularly for ports that are waning
and are not expected to become RC architectures in the future, I think
porters should be free to decide whether to spend the effort on
maintaining such a machine since its absence only hurts that port, not
the release.


I am currently in the process of acquiring rotated-out-of-production
machines for 3 of the 5 architectures I support. I make a run to the
right coast of the US once every 2 months and sometimes pick up 10
machines with 4-16 processors each, typically a dozen GB of memory and
gaggles of disk. I rebuild/recondition most of these machines and
distribute them to NPOs that need this kind of horsepower but can't
afford current gear, or even used gear from those same suppliers. I put
Debian on them, and this makes a huge investment in the long-term health
of these orgs.

If these machines are no longer fully supported by Debian... how can I
continue to do this?


This is important if you want developers to be able to fix bugs in their packages on SCC architectures. This is currently not the case for the alpha port, and that sucks.

Well, FWIW, I think you'll find that the debian-alpha mailing list is
very responsive to maintainers who need help with alpha-specific package
bugs.


Considering you use an Alpha, sure. You have helped me a few times
yourself.

[snip]

I think that supporting a lot of architectures is an important difference between Debian and other distributions. Changing that could dramatically influence what users think of Debian. IMHO, such an important decision should not be taken by a few developers.

Well, if we wanted to make the decision without the input of developers,
announcing it on d-d-a in advance of implementation isn't a very
effective way to make that happen, is it?

I agree that our architecture coverage is important.  I don't know if
stable support for all of these architectures is important, but I *do*
know that stable support for all of these architectures costs us in
terms of the release cycle.  The length of our release cycle is also an
"important difference" between Debian and other distributions, but it's
not a positive one for us.


How much difference is there between Debian having a "runs on the
humidifier in the basement" reputation and a "we release more often than
Ubuntu" reputation?

Not that I am disrespecting Ubuntu; it's a fine distro, and many people I
have introduced to Linux are running it.

But, seriously, how much do you think Debian would be hurt? Compare these:
        1. Debian the "Universal OS"
        2. Debian the "Almost-Sorta-Kinda-used-to-be Universal OS"
        3. "Old as fossilized dinosaur poo, and as stable, but runs on
        everything including the humidifier in the basement"
        4. "Very recent, since it doesn't really support non-big-4
        processors anyway, so why not run Fedora Core"

Personally, I like 1 and 3. They are the 2nd and 3rd most important
technical reasons I chose Debian; the 1st is Debian's maintainability.
Please, oh please, let us not change my mind for me.


Maybe a vote is needed...

I'd much rather work towards a consensus.


Me, too. How about we have a CORE Debian mirror infrastructure, carrying
only the base install for stable, testing and unstable?

Then, either on the same machine(s) or not, a separate mirror
infrastructure for anything beyond that. IOW, any packages that are not
included in the base install, including main, contrib and non-free for
stable, testing, unstable and experimental. Experimental would live only
on the secondaries anyway, since it mainly exists for testing major
changes.

This is a very feasible, well-thought-out and scalable option. Heck, we
could release "Base-Install" and let the buildd(s) for all the arches on
the secondaries catch up.

This gives us five benefits:
     1. We will not have to worry about stable being too OLD. It lets us
        have faster upgrade cycles for the stable base-install, and the
        applications would follow shortly: once the stable application
        buildds get updated, they'll start churning out the updated
        applications.
     2. It allows nearly any number of architectures to be supported by
        Debian. As long as they have a buildd to keep up with stable, it
        should be a no-brainer to get the application buildds cranking.
        Pooling those, maybe even with cross-compiling capacity (if that
        is even feasible), might just be the ticket.
     3. Security updates will be much more painless, since the security
        process can have a queue on both the "Base-install" and
        "Applications" buildds, or even just one queue per arch.
     4. It makes the "Base-install" much more widely used and tested,
        for exactly those reasons of security and bugs.
     5. If a "Base-install" buildd goes down, an application buildd for
        that arch gets promoted, minimizing problems for security and
        such, but it also makes a more enticing reason to get the
        failed machine fixed and/or replaced.

If those are not compelling reasons, I don't know what are. Sure, I am
talking out of /dev/ass, but please just think about it.
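
A minimal sketch, purely hypothetical, of how such a base-install/applications split could be derived mechanically: it reads a local Debian Packages index and sorts packages into the two pools using the Priority field. The file name and the priority-based rule are assumptions for illustration, not part of the proposal or of any existing Debian tool.

#!/usr/bin/env python
# Hypothetical illustration only: split a Debian "Packages" index into a
# "base-install" pool and an "applications" pool using the Priority field.
# The file name and the priority-based rule are assumptions for the sketch.

BASE_PRIORITIES = ("required", "important", "standard")

def split_packages(packages_file):
    base, apps = [], []
    name = priority = None
    def flush():
        if name:
            (base if priority in BASE_PRIORITIES else apps).append(name)
    with open(packages_file) as f:
        for line in f:
            if line.startswith("Package:"):
                name = line.split(":", 1)[1].strip()
            elif line.startswith("Priority:"):
                priority = line.split(":", 1)[1].strip()
            elif not line.strip():              # a blank line ends a stanza
                flush()
                name = priority = None
    flush()                                     # last stanza may lack a trailing blank line
    return base, apps

if __name__ == "__main__":
    base, apps = split_packages("Packages")     # hypothetical local index file
    print("base-install packages: %d" % len(base))
    print("application packages:  %d" % len(apps))

In practice the base set would presumably be defined explicitly rather than inferred from priorities, but the point is that the two pools, and therefore the two buildd queues, can be kept cleanly separate.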

I agree 100%

We use our own mirrors (rsynced since 2000 from ftp.de.debian.org) as well and usually install via FAS.

We dropped Solaris from our SPARC systems and OS X from the PPC machines to have the benefit of apt-get and be done with it. We don't run Ubuntu
on x86, x86_64 and PPC because we need a consistent base on all platforms.

We can live with only a base system and build-essential debs, compiling
all other debs from source when needed and putting them on our own
mirror network.

Until now it was no problem to use the hardware best fitted for a given task, as Debian was running on it. If that isn't the case any more, we would be forced to rethink and most likely build our own Gentoo-based distribution.

greets Uwe
--
Now people no longer want to simply leave the Internet to a few people like
the IETF who know what they are doing. It has become too important. - Scott Bradner
http://www.highspeed-firewall.de/adamantix/
http://www.x-tec.de


