
Re: Bits from the Security Team (for those that care about bits)



On Sun, Jan 23, 2011 at 06:45:56PM -0500, Michael Hanke wrote:
> On Mon, Jan 24, 2011 at 12:19:32AM +0100, Iustin Pop wrote:
> > First, tests run during a package build are good, but they do not
> > ensure, for example, that the package as installed is working OK. I've
> > been thinking that (also) providing tests to be run after the package is
> > installed (and not on the build results) would be most useful in
> > ensuring that both the build process and the packaging is correct. 
> > 
> > Second, README.test are designed for human consumption, whereas a
> > standardisation of how to invoke the tests would allow for much more
> automation. E.g. piuparts would not only be able to test that the
> install succeeds, but also that the automated tests work.
> 
> Exactly. In the NeuroDebian team we started playing around with more
> comprehensive testing -- both regarding single packages, but also
> integration tests involving multiple packages. We started composing a
> SPEC for a testing framework, but we haven't gotten very far, yet. What
> we have is here
> 
>   http://neuro.debian.net/proj_debtest.html
> 
> and here
> 
>   http://git.debian.org/?p=pkg-exppsy/neurodebian.git;a=blob_plain;f=sandbox/proposal_regressiontestframwork.moin
> 
> If somebody is interested in working on this topic, we'd be glad to join
> forces.
> 
> Originally, we wanted to develop the SPEC a little further, but since
> the topic came up, I figured it might be better to add these pointers
> now.

Thanks for sharing. Your proposal seems to focus on a higher level:
group-based testing, resource management and scheduling, etc.

IMHO a sufficient first step would be much simpler:
- a way to know whether a package offers build-time and post-install tests
- a standard way to run such tests
- for post-install tests, a declaration of their dependencies
  (Test-Depends? ;-); a rough sketch of what I mean follows
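
For illustration only (nothing like this exists yet; the file name and
field names are made up), something as small as a debian/tests/control
file shipped in the source package could cover all three points:

  Tests: smoke full-regression
  Test-Depends: python-nose, netcat

where each listed test is an executable under debian/tests/ that is run
against the installed package and exits non-zero on failure.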

This would allow maintainers to automatically test their packages
whenever they do a new upload (which is my personal issue :). A
framework like your proposed DebTest could then build on such basic
functionality to run tests in a coordinated way, archive-wide or across
a package set.
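
Just to make the per-package side concrete: assuming the made-up
debian/tests/control format sketched above, the runner that a framework
(or a maintainer, by hand) would need is tiny. A rough Python sketch,
nothing more:

  #!/usr/bin/env python
  # Rough sketch: run a package's declared post-install tests.
  # Assumes the hypothetical debian/tests/control format from above and
  # that the package under test is already installed on the system.
  import os
  import subprocess
  import sys

  def read_control(path="debian/tests/control"):
      tests, depends = [], []
      for raw in open(path):
          line = raw.strip()
          if line.startswith("Tests:"):
              tests = line.split(":", 1)[1].split()
          elif line.startswith("Test-Depends:"):
              depends = [d.strip() for d in line.split(":", 1)[1].split(",")]
      return tests, depends

  def main():
      tests, depends = read_control()
      if depends:
          # Install the declared test dependencies first.
          subprocess.check_call(["apt-get", "install", "-y"] + depends)
      failed = 0
      for name in tests:
          # Each test is an executable; a non-zero exit means failure.
          rc = subprocess.call([os.path.join("debian/tests", name)])
          print("%s: %s" % (name, "PASS" if rc == 0 else "FAIL"))
          if rc != 0:
              failed += 1
      sys.exit(1 if failed else 0)

  if __name__ == "__main__":
      main()

Everything beyond that (chroots, scheduling, result collection) is
exactly where something like DebTest would come in.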

A few comments on your proposal:

- “Metainformation: duration”: how do you standardise CPU/disk/etc.
  performance to get a useful metric here?

- assess resources/performance: in general, given the variety of
  architectures/platforms and CPU speeds, I think it will be hard to
  quantify the performance, or even the resources, needed by a test
  suite

- some structured output: given the variety of test suites, this might
  be very hard to achieve; in my experience, the best that can be hoped
  for across heterogeneous software is a count of passes/failures (e.g.
  something along the lines of the sketch below), with log files left
  for human investigation in case of failures
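
For the record, by "structured output" I would not expect more than
something like this per package (format completely made up):

  tests-run: 42
  tests-failed: 2
  log: /var/log/debtest/foo_1.2-3.log

Anything richer (per-test timings, machine-readable failure reasons)
seems unrealistic given how different upstream test suites are.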

regards,
iustin

Attachment: signature.asc
Description: Digital signature

