
Re: Bits from the Security Team (for those that care about bits)



On Mon, Jan 24, 2011 at 07:48:18AM +0100, Iustin Pop wrote:
> IMHO what would be a sufficient first step would be much simpler:
> - being able to know if a package does offer build & post-install tests
> - how to run such tests
> - for post-install tests, what are the dependencies (Test-Depends? ;-)

True. However, we are trying to think this through somewhat so that we
have a better picture of what the typical use cases for testing within
Debian will be. One thing we need to deal with in our field is tests
that require substantial amounts of additional data (not part of any
package right now); sometimes the test suite itself is not even part of
the source package, but shipped separately (due to size, ...).

> This would allow a maintainer to implement an automatic test of his
> packages whenever doing a new upload (which is my personal issue :). A
> framework like your proposed DebTest can then build upon such basic
> functionality to provide coordinated, archive-wide or package-set-wide
> running of tests.

Yes, that was the idea.

> A few comments on your proposal:
> 
> - “Metainformation: duration”: how do you standardise CPU/disk/etc.
>   performance to get a useful metric here?
>
> - assess resources/performance: in general, across
>   architectures/platforms and varied CPU speeds, I think it will be hard
>   to quantify the performance and even resources needed for a test suite

All of these are just meant to be things to consider while planning. I
personally don't believe it is possible to give an accurate 'in-advance'
estimate of test runtime (given the large variety in hardware
performance). However, it might still be valuable to indicate whether a
test is relatively resource-hungry. Maybe it turns out that nothing more
precise than a 'slow' tag is achievable. But we also thought about a
Debian dashboard for test results. That could gather information about
the typical resource demands (per architecture) and give more accurate
estimates.
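
Just to sketch what I mean (everything here, including the field names,
is made up for illustration and not part of any proposal text), such a
dashboard could simply aggregate the wall-clock times it has recorded
per package and architecture and report a median as the "typical" cost
of a test:

    # Hypothetical sketch: aggregate recorded test runtimes per
    # architecture to estimate typical resource demands.
    from collections import defaultdict
    from statistics import median

    def typical_runtime(results):
        # 'results' is an iterable of dicts like
        # {'package': 'foo', 'arch': 'amd64', 'seconds': 123.4}
        per_arch = defaultdict(list)
        for r in results:
            per_arch[(r['package'], r['arch'])].append(r['seconds'])
        # A median is reasonably robust against the occasional
        # overloaded host skewing the estimate.
        return {key: median(times) for key, times in per_arch.items()}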

> - some structured output: given the variety of test suites, this might
>   be very hard to achieve; in my experience, the best that can be hoped
>   for across heterogeneous software is a count of pass/fail, and log
>   files should be left for human investigation in case of failures

True. Although not being able to be smart in some cases doesn't mean we
shouldn't try to be clever where we can. We want to aim for a system
with a very low entry threshold. In the simplest case the package
maintainer only places a symlink/script/binary named after the package,
implementing the test suite, into a designated directory. When called,
a test that exits normally counts as passed, and as failed otherwise.
The "structured" output in this case is just stdout and stderr.

However, there are subsystems of Debian with more standardized
approaches to testing (e.g. Python). DebTest should allow plugins for
specific test environments so people can make it more clever. We only
need to make sure that more structured information about test results
has a place to go and is handled by this framework.
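
To give an impression of what such a plugin could look like (all names
are invented; this is only a sketch assuming Python's unittest as the
framework-specific runner), it would merely translate framework-specific
results into the same kind of record the generic exit-status case
produces:

    # Sketch of a hypothetical plugin hook: run a unittest-based suite
    # and map the outcome onto a common result record, including
    # per-test counts that the generic case cannot provide.
    import unittest

    def run_unittest_plugin(package, test_module):
        suite = unittest.defaultTestLoader.loadTestsFromName(test_module)
        result = unittest.TestResult()
        suite.run(result)
        return {
            'package': package,
            'passed': result.wasSuccessful(),
            'counts': {
                'run': result.testsRun,
                'failures': len(result.failures),
                'errors': len(result.errors),
            },
        }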


Michael

-- 
Michael Hanke
http://mih.voxindeserto.de

