Re: dpkg development cycle
On Wed, 30 Jan 2008, Guillem Jover wrote:
> > Thus I'm wondering if we shouldn't follow the linux development model.
> > Have a cycle of say one month, merge stuff aggressively during 10 days,
> > make an upload to experimental and run the new dpkg on our own computers
> > during 20 days more and then upload to unstable (once bugs have been
> > ironed out).
> I'm not sure the linux development model applies here, we don't have
> that much code pending to be merged. Most of that pending stuff is
> waiting for review, design decisions, rewrites, etc.
Yes, the exact details are not the same, and I also believe that we don't
need 20 days of testing (when we're not introducing major changes). I
admit the figures were just picked out of thin air as an example... I
think a shorter cycle is perfectly fine, but I really think that we need
to announce a date when we stop pushing new stuff, at which point all
developers are urged to build and run what's in HEAD. During that
period, if we continue work on new stuff, we do it on local branches that
we push immediately after the release.
We should have been able to catch the s-s-d problems, for example, by
running the new dpkg during a few dist-upgrades...
> And the experimental uploads as a general rule seem like overhead to me,
> given that most of us are going to be running the code from HEAD anyway,
> and some stuff will be difficult to spot w/o uploads to unstable.
Yes, I agree that experimental should only be used occasionally, for
example once trigger support is merged, or for other very big changes.
> The model we have been using (at least on my head) during last year or
> so has been: coding, testing locally, uploading to unstable, waiting
> and seeing during few days for regressions, and doing fast uploads to
> fix them. After the BTS seemed to have been stabilized start next
Yeah, the only changes are:
- add "test by other dpkg developers", so leave some time between push and
  release, and announce the test period so that we avoid pushing new stuff
  and instead keep it in local branches that we merge once the next release
  cycle starts
- parallelize the stabilization in unstable and the start of the next
  release cycle, since git makes it easy for us to handle that
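The branch discipline above can be sketched with plain git commands. This is
only an illustration of the proposed flow (the repository, branch name
"local-new-feature" and tag name are made up for the example, not actual dpkg
conventions):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master demo && cd demo
git config user.email "dev@example.com"   # throwaway identity for the demo
git config user.name "Dev Example"

# HEAD is frozen for the announced test period; everyone builds and runs it.
git commit -q --allow-empty -m "last commit before freeze"

# Meanwhile, new work stays on a local branch instead of being pushed.
git checkout -q -b local-new-feature
git commit -q --allow-empty -m "new feature, held back during freeze"

# The release is cut from the frozen master...
git checkout -q master
git tag release-example

# ...and only then is the held-back work merged, starting the next cycle.
git merge -q --no-edit local-new-feature
git log --oneline
```

The point is that stabilization of the released code in unstable and the
start of the next cycle can overlap, since the new work lives on its own
branch until the release is out.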
> I'm not against improving the current model, but I think we probably
> should do incremental fixes to it (like the stable uploads), and see
> what happens, or what might need fixing/improving.
Fine, we're not that far from what I described/proposed already.