Re: Preparing the next Release
On Wed, Aug 30, 2006 at 01:08:27PM +0200, Martin Schulze wrote:
> Steve Langasek wrote:
> > On Wed, Aug 30, 2006 at 07:44:27AM +0200, Martin Schulze wrote:
> > > These bugs are at least to be investigated and maybe resolved in a
> > > rather pragmatic way:
> > > > - packages that FTBFS
> > > . on which architecture?
> > As long as it's a release architecture, does this matter? Doesn't the
> > security team still typically wait for a security update to be built on all
> > archs before releasing it?
> I'm not sure if it makes sense to reply to each of your paragraphs,
> since you're the one to decide anyway, not me.
> However, I believe that such questions need to be raised and
> investigated in time, and maybe some packages need to be dropped from
> the release or from particular architectures.
Right, as noted, FTBFS bugs on an arch where the package has never built are
not treated as RC. Build regressions on any release arch are; so they have
to be addressed in some fashion for the release, hence "release-critical".
Dropping support for the package on the arch in question is one of the
options for dealing with this.
> With regards to security support, all packages that the security team
> needs to support must be compilable by the buildds for all
> architectures they were released for.
Ok, so that settles that :-) I didn't really expect any different answer.
> While I would love to have the exact set of packages on all
> architectures, I need to accept the fact that on some architectures
> some packages just don't work or cause an FTBFS we are unable to fix
> in time for the release.
In fact, we had quite a diligent group of folks helping to rebuild the
archive both before and during the sarge freeze, so I'm pretty sure that any
FTBFS bugs that remained at release time were numbered in the single digits.
> > > > - data loss bugs
> > > . data loss always or only in some esoteric situations?
> > Do you think it should be ok for packages to lose users' data only /some/ of
> > the time? I don't, FWIW...
> Well, we've already had packages that can potentially lose data when
> you use three quite uncommon switches, two undocumented command-line
> switches, one broken config line, and the system has three processors
> with a permanent load of 53.
> This is a bit exaggerated, but I hope you'll get an idea of what I was
> trying to express. When the data loss *can* happen only in certain
> quite uncommon circumstances and the rest of the system and of the
> package works fine, I'm not too sure we should delay the release to
> get this problem fixed beforehand. Especially since the problem has
> existed for n months already without hell freezing over.
> Such problems usually warrant inclusion in a point release, so they
> can be fixed after the release as well.
If the conditions under which the data loss occurs are something that the
user can *control*, I agree that documenting the problem might be ok. If
it's a matter of "any time you run this program, you run the risk of data
loss, but the probability is low", I would almost always prefer removing the
package instead of shipping it with such a bug.
> > > > - bugs related to removal of obsolete libs
> > > . how widely is it used?
> > > . can the package be removed without hurting the overall system?
> > > . can the package be removed on the particular architecture?
> > Would you care to look at bug #370429 and render your own opinion on the
> > impact of trying to remove it in advance of transitioning its
> > reverse-depends?
> Well, as a second step (after declaring it obsolete), we should try
> to fix the dependencies and upload packages that depend on the new
> version instead of this one.
> In this case this seems to be
> libsoup2.0-0 libsoup2.0-dev libsoup2.2-dev
> libxmlsec1-gnutls libxmlsec1-dev
> libggz2 libggz-dev
> Bug reports need to be written and maintainers contacted. I
> remember that Smurf announced that he'd like to remove that library,
> not sure about the other steps, though.
> To mitigate the problem, at least ofx, lynx, ggz-kde-games and
> probably ggz-kde-client could be temporarily removed from etch and
> migrate back once their dependencies don't require libgnutls11
> anymore. The libraries would have to be investigated further.
> (I'll do it if it helps you.)
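The reverse-dependency survey described above can be sketched as a small
script over a Packages-style index. This is a simplified illustration, not
how the release team actually tracked the transition; the sample stanzas are
invented, and real Depends fields have alternatives (`|`) and continuation
lines that this minimal parser ignores.

```python
def rdepends(packages_text, lib):
    """Return names of packages whose Depends field mentions `lib`."""
    hits = []
    # Stanzas in a Packages index are separated by blank lines.
    for stanza in packages_text.strip().split("\n\n"):
        fields = {}
        for line in stanza.splitlines():
            if ":" in line and not line.startswith(" "):
                key, _, val = line.partition(":")
                fields[key] = val.strip()
        deps = fields.get("Depends", "")
        # Keep only the package name, dropping version constraints.
        names = [d.split()[0] for d in deps.split(",") if d.strip()]
        if lib in names:
            hits.append(fields.get("Package", "?"))
    return hits

sample = """\
Package: lynx
Depends: libc6 (>= 2.3.6), libgnutls11 (>= 1.0.16)

Package: wget
Depends: libc6 (>= 2.3.6), libssl0.9.8
"""

print(rdepends(sample, "libgnutls11"))  # -> ['lynx']
```

Running the same function over the real etch Packages file would produce the
candidate list for bug filing and temporary removals.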
Actually, all things considered the libgnutls11 transition has gone very
well, and the last remaining packages in unstable which depended on it
have conveniently had maintainer uploads in the past few days. So there's
no particular need for further investigation here. :) In the beginning,
though, there were quite a few packages to shepherd through NMUs/rebuilds.
Another transition that is at an earlier stage today is the
mozilla->xulrunner transition. I've asked on #debian-release what people
thought should be done if seamonkey isn't packaged in time for etch --
should mozilla and all its reverse-deps be dropped because it's not
security-supportable, or should they be kept rather than removing what's
still a large number of packages with a large install base? Does the answer
to this question change with the number of reverse-dependencies still
remaining?
> Speaking of being ready for a freeze, I would consider this point
> generally reached already. Most of the packages are in a reasonably
> good state. There are still problems, but there will always be
> problems anyway. Once the installer is working fine on all release
> architectures and the kernel and udebs have migrated into testing
> as well, we'll already have a *pretty* *good* system.
I think the main obstacle to freezing right now is the firmware question,
really. If the project decides that etch should be deferred until
sourceless firmware blobs are out of main, then it doesn't make sense to
start freezing yet.
Steve Langasek                   Give me a lever long enough and a Free OS
Debian Developer                 to set it on, and I can move the world.