Charles Plessy wrote:
> Dear Mentors,
>
> I am packaging a piece of software which includes tests to check that the
> binary is fully functional. However, those tests are much more processor
> intensive than the compilation itself.

I have a similar package, and these tests have proved invaluable in
identifying problems when the package is ported. True, those problems
become serious FTBFS RC bugs, but there are few other ways of getting
reliable and useful information on architectural differences.

> Is it enough to run them when testing the package before upload, or
> should they be run systematically by debian/rules, thus using the CPU
> power of the buildds?

Running them from debian/rules also catches cases where an assumption in
the upstream code is only valid on certain architectures. It's unlikely
that any package is built upstream on as many architectures as Debian, so
you'll always gain information by allowing the buildds to test the code.

> In this case, the tests take eight minutes on a 1.8 GHz G5.

Sometimes the tests are iterative, and a patch could be used to reduce the
number of loops. The question is: how easy is it going to be to fix an
architecture-specific bug *without* the data from the failed test? A lot
of tests are implemented specifically because a bug was found, and the
test continues to check that the bug remains fixed.

--
Neil Williams
=============
http://www.data-freedom.org/
http://www.nosoftwarepatents.com/
http://www.linux.codehelp.co.uk/
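For what it's worth, the usual compromise is to run the tests during the build but honour the standard DEB_BUILD_OPTIONS=nocheck escape hatch, so porters and developers can still skip them when iterating. A minimal sketch, assuming a debhelper-style debian/rules and an upstream "make check" target (names are illustrative, not from the original post):

```make
# Hypothetical debian/rules fragment: run the upstream test suite on the
# buildds, unless the builder has requested DEB_BUILD_OPTIONS=nocheck.
override_dh_auto_test:
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
	$(MAKE) check
endif
```

A failure here turns into an FTBFS, which is exactly what surfaces the architecture-specific data discussed above.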