
Re: glibc 2.1 and compatibility (Was: slink is gone, goals for potato?)



On Wed, Mar 03, 1999 at 11:39:27AM -0500, Andrew Pimlott wrote:
[snip]
> 
> The proposal:
> 
> Debian can provide a useful service by adding new, unsupported
> mini-distributions (one per stable dist) for packages from unstable compiled
> against (past and present) stable systems.  It would exist merely as a
> courtesy to users, and developers would be under no pressure to contribute
> to it.  unstable would remain the focus of development.
> 
> The guidelines for uploading to this mini-dist are simple: don't upgrade
> "system-level" packages (libraries, low-level utilities, standard shell
> commands), and only add packages that are reasonably expected to work (have
> lived in unstable for a bit without problems).  Both of these criteria
> require judgements, but in practice the answers are usually obvious, and we
> can err on the side of caution.
> 
> The distribution would be clearly labeled as unsupported, available only as
> a courtesy to users.  However, it should be available in the same way as
> other distributions.  (I don't know what to call it--perhaps
> hamm-unsupported-updates?)
> 
> I said this is light-weight.  I really mean it--developers should be free to
> ignore the new dists completely.  However, inevitably it will take some
> energy away from unstable, so I understand if people don't want to do it.
> I do hope to convince you that it's the only solution to the "Debian is
> obsolete" gripe that should be considered at the current time.  It also has
> the feature that, if successful, the project could scale up gradually.
> 
> And I think users would appreciate it greatly.
> 
> Andrew
> 
This fits in somewhat with something I was planning on writing in response
to the thread about freezing potato.  I just hadn't figured out exactly
what I wanted to say.  I started thinking about the concept of "freezing".
In the real world, it doesn't happen all at once; there are stages.
When the distributions were smaller, maybe it made more sense to freeze
everything at once, but I'm not so sure that's the best way anymore.
What I'd like to suggest (I can't formally propose anything, not being a
developer yet) is to take a cue from the physical world and see whether
freezing in stages makes sense for future releases.

What I have in mind is to divide up the packages into groups to make
the freezing process more "robust" (if you can apply that to a process).
Something along the lines of:

 1. freeze "infrastructure" stuff like filesystem layout and policy
    rules applicable to this release
 2. freeze default kernel for this release
 3. freeze compilers and interpreters used for package building and
    package building tools
 4. freeze system and other widely used libraries and the rest of base
 5. freeze other libraries (except those just used by a single app and
    installed with the app)
 6. freeze apps
 7. freeze boot-floppies, installation instructions, release notes etc.

This may not be the best way to break things up, or the best order to do
them in, but ideally the ordering should reflect a "process dependency".
The packages being built later should be able to count on some things
being stable (i.e., frozen earlier) even if other things are still
in flux.  The goal is to help the freezing distribution to stabilize
more quickly than it does now by freezing earlier those packages that
other packages have a process dependency on.
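To make the "process dependency" idea concrete: the stages form a small
dependency graph, and a valid freeze order is just a topological sort of
that graph.  Here's a minimal sketch in Python; the stage names and the
particular dependencies are my own illustrative guesses, not anything
Debian has decided:

```python
from graphlib import TopologicalSorter

# Hypothetical process-dependency graph for a staged freeze:
# each stage maps to the stages that must be frozen before it.
deps = {
    "infrastructure": set(),                       # filesystem layout, policy
    "kernel": {"infrastructure"},                  # default kernel
    "toolchain": {"infrastructure"},               # compilers, build tools
    "base-libraries": {"kernel", "toolchain"},     # libc, rest of base
    "other-libraries": {"base-libraries"},         # shared libs outside base
    "applications": {"other-libraries"},           # everything else
    "boot-floppies": {"applications"},             # installer, release notes
}

# static_order() yields the stages in an order that respects every dependency.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Anything not connected by an edge (here, kernel and toolchain) could in
principle be frozen in parallel, which is exactly the flexibility a staged
freeze would buy.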

For any one package, it could still be considered either frozen or not,
and uploaded to unstable, or to unstable and frozen, or whatever, just
as is done now, so the burden on developers shouldn't increase
significantly beyond keeping track of which packages are in which state.
Hopefully this wouldn't be too big a deal unless you're handling
lots of packages.  However, it will probably affect the distribution
organization (i.e., is just 'frozen' enough?), and I don't have a good
suggestion yet for how it should be organized to support this scheme.
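The bookkeeping itself could be quite simple: given which group a package
belongs to and which stage the release has reached, you can tell whether
the package is already frozen.  A sketch, again with hypothetical group
names of my own invention:

```python
# Hypothetical linear freeze schedule: stages freeze in list order,
# so everything at or before the current stage is already frozen.
STAGES = [
    "infrastructure", "kernel", "toolchain", "base-libraries",
    "other-libraries", "applications", "boot-floppies",
]

def upload_allowed(package_group: str, current_stage: str) -> bool:
    """A package may still be uploaded freely only if its group
    comes after the stage the release has currently frozen."""
    return STAGES.index(package_group) > STAGES.index(current_stage)

print(upload_allowed("applications", "toolchain"))   # True: apps not yet frozen
print(upload_allowed("kernel", "base-libraries"))    # False: kernel already frozen
```

In practice "not allowed" would presumably mean "needs release-manager
approval", as uploads to frozen do today, rather than a hard refusal.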

This scheme could support development of multiple releases in parallel
and could shorten the release time for individual releases (which is the
tie-in with Andrew's suggestion).  Even though the burden on individual
developers shouldn't be significantly affected, the burden on the release
manager will probably increase significantly, especially if multiple
releases are progressing in parallel and are in different freeze stages.
In that case, one release manager per release would probably be reasonable.

Hopefully this suggestion is useful.  If not, I'd be curious to know
whether there are software engineers out there who can propose other
life-cycle management methods to improve the release process, especially
proposals aimed at shortening the release cycle and reducing the
end-of-freeze chaos.

Steve Bowman, Sr. IT Engineer, Arizona Public Service Company
Palo Verde Nuclear Generating Station, AZ, US
My opinions are my own and do not represent my employer.
keyid=FD67C7E9  fingerprint = 4A 30 DF A2 BC 4F 3F DB  F0 AE F1 5D FA E9 EA 8F
key on public servers

