
Re: Bits (Nybbles?) from the Vancouver release team meeting



Daniel Jacobowitz wrote:
My basic idea is to have something similar to the testing migration
scripts, which takes the decisions of the "master" copy running on
ftp-master as an input.  At a minimum:

I think it's easiest just to assume everything's on ftp-master; for mirroring, stuff's already planned to be split onto ftp.d.o and scc.d.o anyway. The only reasons that wouldn't happen are if ports need non-developers to be able to upload, or are using up lots of disk really wastefully.

  - Packages in sub-testing should not be newer than the versions in
    testing, except on purpose.  Porters need to be able to use newer
    versions when a particular version does not work on their
    architecture, but I want a by-hand element involved in that.  In
    normal, non-schedule-pressured, non-crippling-bug mode, they would
    just fix the copy in the main archive and propagate that to
    testing, and from there to sub-testing.

Okay, so we've got a new suite; is that global for all scc arches, or separate, a la "subtesting-s390", say? The question there is "Will s390 have a different version of the package to m68k, if one or the other is being more aggressively maintained?"

So, say you have "foobar 1.0" in subtesting for s390, and "foobar 2.0" in testing and "foobar 3.0" in unstable for i386. Your buildd's been somewhat broken for a few weeks, and you haven't built foobar 2.0 or 3.0 yet, but you've got it working again now. What happens then?

Do you build foobar 3.0, find out there are some s390 specific bugs, watch it go into testing anyway, not accept it into subtesting because those bugs need fixing, get 3.1 uploaded to unstable, build it, watch it go into testing, and then have it go into subtesting too?

Or do you build foobar 2.0, upload your debs to unstable, find it works perfectly, and get it into subtesting, then wait 'til 3.0 gets into testing before building it and finding out it has problems?

Do you use the testing scripts, and thus aim to ensure subtesting's dependencies are consistent, or do you just copy debs across and hope? If the former, why bother looking at testing at all, instead of just pulling from unstable, and calling it, say, "scc-testing-s390"? OTOH, only pulling from testing makes it simpler to end up with something you could call "etch" for s390.
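The difference between the two policies is easy to state as code (again, purely illustrative names, and the same simplified numeric version ordering rather than real dpkg comparison):

```python
import re

def _key(v):
    # Simplified numeric ordering; real dpkg version comparison
    # also handles epochs, tildes, letters, etc.
    return tuple(int(n) for n in re.findall(r"\d+", v))

def candidate_from_testing(testing_ver, built):
    """'Only pull from testing': the sub-testing candidate is the exact
    version currently in testing, and only once the port has built it."""
    return testing_ver if testing_ver in built else None

def candidate_from_unstable(built):
    """'Pull from unstable' alternative: just take the newest version
    the buildd has managed to build."""
    return max(built, key=_key) if built else None
```

With foobar 2.0 in testing and the s390 buildd having built {1.0, 2.0}, the first policy gives 2.0; the second would happily hand you 3.0 the moment it builds, bugs and all.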

  - Internal consistency and installability would be maintained for
    the sub-testing repository in the same way we maintain them for
    testing.
This allows the port to leverage the excellent work done by the release
team, and not get in their way - it's completely unidirectional,
nothing feeds back to the "parent" repository.  And it allows leverage
of the testing scripts - with some changes, that someone would have
to pony up the time to implement, of course.

One of the problems with this is that you wouldn't benefit from the "hints" the release team prepares for britney; which might screw you over completely. OTOH, dealing with smaller numbers of architectures is easier, so maybe some of the existing automation would be more effective; and maybe the other britney features we planned at the meeting will make this irrelevant anyway.
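To make "internal consistency" concrete: even without the hints, a minimal installability pass over a suite could look something like the following. This is a huge simplification of what britney actually checks -- no versioned dependencies, no alternatives, no Provides -- but it shows the shape of the problem:

```python
def uninstallable(packages):
    """packages maps package name -> flat list of dependency names.
    Returns the packages that can't be installed from this suite alone,
    including ones broken only transitively."""
    bad = set()
    changed = True
    while changed:
        changed = False
        for pkg, deps in packages.items():
            if pkg in bad:
                continue
            # A package is uninstallable if any dependency is missing
            # from the suite or is itself uninstallable.
            if any(d not in packages or d in bad for d in deps):
                bad.add(pkg)
                changed = True
    return sorted(bad)
```

E.g. if foobar depends on libbaz and libbaz depends on something the suite doesn't have, both come back as uninstallable, which is exactly the situation the testing scripts exist to prevent.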

I would really like to see some real use cases for architectures that want this; I'd like to spend my time on things that are actually useful, not random whims people have on lists -- and at the moment, I'm not in a good position to tell the difference for most of the non-release architectures.

Cheers,
aj
