
Re: Did testing survive the hazard party?



Anthony Towns wrote:
> I had one problem where the experimental X4 .debs mixed with the testing
> .deb's broke on me: the libc6 shlibs file was wrong, so the X4 debs
> installed quite happily even though they actually needed the new libc
> which I didn't have. That's what you get for running experimental X4 debs
> though, I guess.
> 

Just a thought: in order to admit packages into testing, you run some
sort of test. That is, every package must pass this test in order
to get into testing. Now, I'd read that this test covers
"installability" (if that's the right word), which also implies
dependency satisfaction.
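
For what it's worth, here is a rough sketch of what I mean by
dependency satisfaction as a machine-checkable rule. This is not how
the real testing scripts work; the metadata layout and the
check_dependencies() helper are made up for illustration, and
versioned dependencies are ignored:

  # Rough sketch: treat the dependency graph as
  # package name -> set of required package names, and verify that
  # no candidate depends on something missing from the result.
  def check_dependencies(candidates, testing):
      available = set(testing) | set(candidates)
      broken = {pkg: [d for d in deps if d not in available]
                for pkg, deps in candidates.items()}
      # An empty dict means the dependency graph stays complete.
      return {pkg: miss for pkg, miss in broken.items() if miss}

  # A candidate X package that needs a newer libc6 than testing
  # carries would show up as broken here:
  print(check_dependencies({"xfree86-new": {"libc6-new"}},
                           {"libc6-old": set(), "exim": {"libc6-old"}}))
  # -> {'xfree86-new': ['libc6-new']}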

After my mail system blew up, I reckoned that a more formal set of
tests could be worthwhile. In other words, we usually trust
_people_ to test our programs, but we don't have a rigorous
test suite for crucial operating system services. The current test
policy of "waiting for people to cry out if something serious
breaks" may not scale well to a bigger distribution. [In fact,
OS vendors usually have many test engineers who develop such test
suites.]

Since I'm sure I'll get more flames if I don't propose a suggestion,
here it goes: we could implement an automated test suite mechanism
that not only checks packaging policy and distribution requirements
(global rules such as a complete package dependency graph) but also
the actual operation of the system.

Example: okay, let's make a tentative install of this new exim package.
Does it work before the install? Yes (if no, backtrack to find which
package upgrade broke it!). Okay, install the new exim and run the test
suite again. Does it break anything? Yes: oops, reject this package.
(If no, admit it.)
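
To make that loop concrete, here is a rough Python sketch of the
admit/reject decision. The install_candidate(), rollback() and
run_test_suite() names are placeholders, not existing Debian tools;
whatever real mechanism (chroot, dpkg hooks, etc.) would have to fill
them in:

  # Hypothetical sketch of the admit/reject loop described above.
  def consider_package(pkg, install_candidate, rollback, run_test_suite):
      if not run_test_suite():
          # The system is already broken before we touch anything:
          # backtrack to find which earlier upgrade caused it.
          return "backtrack"
      install_candidate(pkg)
      if run_test_suite():
          return "admit"   # new package keeps the system working
      rollback(pkg)
      return "reject"      # new package broke something, keep it out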

As I said, it's just a thought, so take it with your regular supply of salt.

Thanks,

-- 
Eray (exa) Ozkural
Comp. Sci. Dept., Bilkent University, Ankara
e-mail: erayo@cs.bilkent.edu.tr
www: http://www.cs.bilkent.edu.tr/~erayo


