Re: Temporal Release Strategy
On Thu, Apr 21, 2005 at 01:04:34AM +0200, Adrian Bunk wrote:
> Let me try to explain it:
> The "debian stable == obsolete" is a release management problem of
> Debian. One release every year and it would be suitable for most users.
This is the problem. Debian has NEVER been able to make a release every
year. Most server administrators I know would prefer a release cycle
longer than 12 months, and most desktop users would prefer around 12-24
months.
The issue has always been one of "how many RC bugs are acceptable in
the release", and this has always been at the discretion of the release
manager.
> You say you've deployed Debian sarge and sid in server environments
> (even sarge, although months old security fixes might be missing???).
> Let me ask some questions:
> - How many thousand people can't continue working if the server isn't
>   running?
For comparative purposes, I have worked as systems/network/admin where
the number has been as small as 50 and as large as 30,000.
> - How many million dollars does the customer lose every day the server is
> not available?
We measured it in millions of dollars per hour, not day.
> - How many days without this server does it take until the company is
>   bankrupt?
We never got to that point, because it was simply not an option.
> If the mail server of a small company isn't running for a few hours it's
> not a problem - but there are also other environments.
Since you seem to be trolling, I'll feed the troll: If that small
company relies on the email server to take orders from customers, that
few-hour outage could translate into a large amount of money. If that
small company is not financially sound, that few-hour outage may be the
cause of that small business failing. Large organizations are much
better equipped to weather a temporary outage (larger cash reserves,
ability to implement backup systems, etc).
> Regarding things broken in woody:
> In many environments, the important number is not the total number of
> bugs but the number of regressions. Doing intensive tests once when you
> install/upgrade the machine is acceptable, but requiring this every
> month because it's required for the security updates that bring new
> upstream releases is not acceptable.
This is already the norm if you take your system's stability seriously.
If you really value stability, you test each and every application on
the system each time a change is made, and you examine each security
patch to make sure you understand what it does and that it is safe
before you install it.
> > >Look at the third use case I explained above. For these users of Debian,
> > >long-living releases where the _only_ changes are security fixes are
> > >_very_ important.
> > Again, I don't think you ever built a commercial product around Linux
> > based on your statements here. No offence if you have, maybe it's just
> > corporate culture differences between the EU and US?
> There are reasons why companies pay several thousand dollars licence
> fees for every computer they run the enterprise version of some
> distribution on. E.g. RedHat supports each version of their enterprise
> edition for seven years. A few thousand dollars are _nothing_ compared
> to the support costs and man months that have to be put into setting up
> and testing the system.
So it should be no problem for those companies who choose to run Debian
to forward a small donation to Debian for all the thousands they save.
Or maybe they should allow their staff to spend several hours a week
getting paid to contribute to Debian.
My point is that Debian is NOT a corporate product. If corporations
find it useful, that's great for them. If corporations want to run
Debian, there are companies that offer support for Debian similar to
what RedHat and Novell offer for their respective distros.
Since Debian is not a corporate product, Debian is free to investigate
and try different strategies without worrying about the monetary impact
of those changes in the same way a corporate distribution has to. We
can innovate because it makes sense, not because it is good for the
bottom line.
> Debian stable is ancient - but that's something you have to ask the
> Debian release management about. If the officially announced release
> date for sarge is now missed by more than one and a half years this is
> the issue where investigation should take place.
Which is the issue I was attempting to suggest a possible solution to.
> Regarding sarge:
> I do personally know people who had serious mail loss due to #220983. At
> the time I reported this bug, it was present in sarge. This problem
> couldn't have happened in a Debian stable (because it would have been
> discovered before the release would have been declared stable).
This is the biggest delusion I have ever heard. Any piece of software
can have a critical and undiscovered bug. Just because it was not
discovered before someone arbitrarily decided to release it does not
mean it is not there. If a bug requires an unlikely set of events to
coincide before it is triggered, it could be YEARS after the software is
released before the right set of conditions occurs.
> These kinds of problems that can occur every day in sarge _are_ dangerous.
The bugs yet undiscovered in testing and stable are also dangerous,
but not as dangerous as believing that all the major bugs are caught in
testing. How many security updates have been issued for
packages in stable since it was released? "Here there be dragons."
It is not about presenting a bug-free distribution, but about managing
the risks associated with the bugs that may remain undetected at the
time the release is made. I believe a second stage between testing and
stable would allow better management of that risk by providing an
almost-frozen area where further testing of packages could take place.
The natural progression is then to declare those packages "production
quality" at some point after they enter the candidate stage. If you
really believe a package to be production quality, you should have no
problem calling it stable.
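As a rough sketch of how users could opt into such a stage, assuming a
hypothetical suite name "candidate" (not an actual Debian suite today),
the standard apt mechanisms would already be enough to track it:

```
# /etc/apt/sources.list -- "candidate" is a hypothetical suite name,
# used only to illustrate the proposed stage between testing and stable
deb http://ftp.debian.org/debian candidate main

# /etc/apt/preferences -- pin the candidate suite above testing,
# so packages are pulled from candidate when available
Package: *
Pin: release a=candidate
Pin-Priority: 900
```

The point of the sketch is that no new tooling would be needed on the
user side; the work is entirely in archive and release management.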
Amateur Radio: KB8PYM