
Re: Unofficial Debian 'testing' FAQ (was Re: Packages in queue for woody?)



On Sat, Dec 08, 2001 at 04:54:00PM +0000, Jules Bean wrote:
> http://people.debian.org/~jules/testingfaq.html

]    2. It must be compiled on (at least) every architecture which
]       the corresponding version in testing was compiled on;

This is basically correct, but it would be slightly more accurate to say
something like "each architecture that has it compiled in unstable must
be up to date".
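
Something like this toy Python (made-up data structures, nothing to do with
the real scripts) is what that check boils down to:

    # An arch is "out of date" if the binaries in unstable on that arch
    # weren't built from the current source version.
    def out_of_date_archs(source_version, built_from):
        # built_from maps architecture -> source version the unstable
        # binaries on that arch were built from
        return [arch for arch, ver in built_from.items()
                if ver != source_version]

    # out_of_date_archs("2.4-1", {"i386": "2.4-1", "m68k": "2.3-7"}) gives
    # ["m68k"], so the package isn't a candidate until m68k catches up.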

] A package which satisfies the first four of the above is said to
] be a Valid Candidate.

Actually only the first three need to be satisfied. The latter two (ie, the
dep checks) simply stop valid candidates from actually going in; the info
for point 4 is merely reported in the excuses file.
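
In the same vein (invented names, not the real scripts), the distinction is
roughly:

    def is_valid_candidate(pkg):
        # pkg is a dict of pre-computed facts about the unstable version;
        # only these three tests decide whether it's a "Valid candidate"
        return (pkg["days_old"] >= pkg["required_age"]         # old enough
                and not pkg["out_of_date_archs"]               # up to date on its archs
                and pkg["rc_bugs"] <= pkg["testing_rc_bugs"])  # no new RC bugs

    # The dependency info is still worked out, but only so it can be shown
    # in the excuses; it doesn't change the verdict above, it just stops
    # the candidate from actually being installed later.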

]  4. How could installing a package into testing possibly break other
]     packages?

Apache and/or gal might be good examples to use, rather than making some up.
Updating apache increments the apache-common version, and all apache modules
depend on apache-common (>= something), apache-common (<< something). Both
gal and apache rev fairly regularly.
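
To make that concrete (a sketch with made-up version numbers and a grossly
simplified version comparison -- real dpkg version comparison is hairier):

    def parse(v):
        # treat versions as dotted integers; ignores epochs, -revisions, etc.
        return tuple(int(x) for x in v.split("."))

    def module_installable(lower, upper, apache_common_version):
        # the module Depends: apache-common (>= lower), apache-common (<< upper)
        return parse(lower) <= parse(apache_common_version) < parse(upper)

    # With apache-common 1.3.22 in testing and a module built against it:
    #   module_installable("1.3.22", "1.3.23", "1.3.22")  => True
    # Move apache-common 1.3.23 into testing on its own and the module breaks:
    #   module_installable("1.3.22", "1.3.23", "1.3.23")  => False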

]   5. I still don't understand! The testing scripts say that this
]      package is a valid candidate, but it still hasn't gone into
]      testing.

This is where you should point people to update_output now that it's
available again. It gives very terse hints as to which packages break
when a valid candidate is added to testing.

You've not explained (and I'm sure I couldn't *possibly* guess why not) how
packages like apache do get updated. Umm.

To take two examples: the first is glibc. When it revs, it often bumps its shlibs
so that newly compiled packages will only install with the new glibc. So you
might end up with:

		Valid candidate: foo -> breaks if installed because its
			dependency on glibc isn't satisfied on some arch

However when glibc is tried, it works fine:

		Valid candidate: glibc -> works fine since everything old
			just has a >= dependency

foo is then tried again (with an "efficient" brute force algorithm), and
we find:

		Valid candidate: foo -> works fine since glibc's now updated
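
That retrying is just repeated passes over the list of candidates until a
pass accepts nothing new; a toy version (the predicate is invented, the real
scripts obviously do rather more):

    def try_candidates(candidates, installable_in_testing):
        accepted = []
        progress = True
        while progress:
            progress = False
            for pkg in list(candidates):
                # would testing still be consistent with pkg (plus what
                # we've already accepted) installed?
                if installable_in_testing(pkg, accepted):
                    accepted.append(pkg)
                    candidates.remove(pkg)
                    progress = True
        return accepted

    # Pass 1: foo is rejected (needs the new glibc), glibc goes in.
    # Pass 2: foo is retried and accepted, since glibc is now updated.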

The second example is apache. There are "n" apache modules. Suppose they've
all been updated in unstable to work with a new apache. Then we can try:

		Valid candidate: apache-common -> breaks all the apache modules
			because they Depend: apache-common (<< the current ver)

		Valid candidate: libapache-foo -> doesn't install because it
			Depends: apache-common (>= the new version)

At some point (usually by manual intervention, but, at least in principle,
also by heavier brute forcing), the testing scripts figure:

		Valid candidate: apache-common -> sure it breaks stuff, but
			let's forget that and keep going with things that work;
			if after we've done everything we can it still doesn't
			work, too bad, but maybe it *will* work

		Valid candidate: libapache-foo -> hey it works!
		Valid candidate: libapache-bar -> hey it works too!
		...

and after everything's been tried, it checks how many packages have been
broken, works out whether that's better or worse than what there was
originally, and either accepts everything or forgets about it. You'll see
this in update_output on "recur:" lines.

	recur: [foo bar] baz

basically says "having already found that foo and bar make things better,
I'm now trying baz, even though that breaks things, to see what happens".
"accepted" lines indicate things that appear to make things better,
"skipped" lines indicate things that make things worse.

Cheers,
aj


