Re: Testing transitions before uploading to unstable
Lars Wirzenius <email@example.com> writes:
> You're Devon Deppler, the maintainer for the Xyzzy set of packages, a
> total of a dozen packages. They're important and influential: about a
> quarter of the entire archive depends on the packages in one way or
> another. You have lots of users.
> Now upstream comes up with a new version, and changes some things in
> fairly fundamental ways. They're good changes, and they need to be made.
> The old way of things has been causing some problems already. The
> problem is, the changes aren't backwards compatible and they will break
> other packages. Some packages won't be building anymore, and others
> won't install or operate properly until they're updated.
> Your mission, should you accept it, is to make things happen so that
> when you upload the new packages, as little breaks for as short a time
> as possible. Should you, or any of your co-maintainers, fail to be
> perfect, the bug tracking system will be flooded with bugs and people
> will be calling you names.
> Sound familiar?
> Transitions are going to happen in Debian, but we don't seem to have
> good tools to deal with them. It strikes me that it would be cool to
> have something like this:
> * You upload your new packages to a staging system.
That is called experimental or unstable.
> * A fully automated tool upgrades an existing system (a chroot,
> a Xen instance, or whatever) to your new packages, and reports
> if all went OK. It also runs automated tests to see that
> everything works, before and after the upgrade.
> * If that worked, it re-builds your packages and everything that
> build-depends on your packages and reports results.
I have that planned for when I have some spare time. I actually want
to run a few such tests:
- build all reverse Build-Depends of a package
- build the package against the lowest possible versions allowed by
  its Build-Depends, with stable as the floor
- the same, but with testing as the floor
> * If there were any problems, you can fix packages and try
> again. As many times as you need to. You can also include fixed
> versions of other packages to test them, too.
I think this is best left to unstable/experimental. Adding yet another
layer of distributions would just increase the workload managing them.
> In my vision, this system would have enough computing power to be able
> to iterate even a large transition at least once per day, maybe more
Only for the main architectures, but that is better than nothing.
> All the components for this already exist. We have autobuilders, we have
> upgrade testing, we have a tool for automated testing of package
> functionality. There is, in theory, nothing to stop anyone from doing
> this on their own system (and I assume some have done so), but it would
> be so much easier to not have to master every component tool yourself.
> Also, since this requires a lot of computing power to be efficient, it
> is not something people who work only on an old laptop can do very well.
> I think that if we created a centralized system for this (or possibly a
> distributed centralized system), it would be possible to make it fast
> enough to be quite usable.
> I think this would be possible to do. I don't have time for it myself,
> at least until Debconf, and I don't know some of the components
> (especially the buildds), but I suspect that for someone who does, this
> would be relatively straight-forward to assemble together. Anyone
All you need is to set up a central wanna-build, some scripts to
reschedule reverse Build-Depends, and people to set up buildds. If
you know what you are doing, this is a matter of hours; more if you
are trying it for the first time.
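The rescheduling script is not much more than a loop over the rebuild
list. Something like this (untested; the wanna-build options are from
memory and may need adjusting, so echo stands in for the real call):

```shell
# Rough sketch: feed every reverse build-dependency of the uploaded
# package back into the needs-build queue. The package list is a
# made-up example; on a real setup it would come from the Sources
# scan above.
cat > rebuild-list <<'EOF'
foo_1.2-3
bar_0.9-1
EOF

WB="echo wanna-build"   # dry run; drop the echo on a real setup
while read -r pkg; do
    $WB --dist=unstable --give-back "$pkg"
done < rebuild-list
```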
Nobody is stopping you.