
Re: Checklist request (was: RFC: Deb 2.0 testing process)



>  For example, with the diff package:
> 
> Package: diff
>  - cmp works on identical and different binary or text files
>  - diff works on files, directories, normal or 2 column
>  - sdiff correctly merges two files
>  - diff3 correctly compares 3 files

It seems a shame to have to ask people to do this sort of thing.

It strikes me that one should be able to write a script that performs a
test of this sort in not much more than the time it takes to write the list (in
this simple case at least ;-)

I really think we should encourage people to do this where possible.

I also think that, in cases where automated testing is not possible, a
reasonable way to proceed would be to write scripts that prompt the tester, say:

  Do this test.....

  Did it work [y/N]
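A minimal sketch of such a prompt-and-record script (the test description
and the PASS/FAIL messages here are only placeholders, not a proposed
format):

```shell
#!/bin/sh
# Hypothetical semi-automated test: describe the manual check,
# then record the tester's answer.  A bare Enter counts as "no".
echo "Test: print a page to the default printer and check the output."
printf "Did it work [y/N] "
read answer
case "$answer" in
  [yY]*) echo "manual test: PASS" ;;
  *)     echo "manual test: FAIL" ;;
esac
```

Piping the answers in (e.g. from a file of expected responses) would even
let such scripts run unattended once a tester has been through them once.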

Another point: the tests or checklists that are written should be aimed at
problems that have actually occurred in the past.

For example, if the diff package has never failed to provide a cmp program
that works as expected, there is little point in spending time writing a
checklist or script to test for that, and then wasting valuable testers'
time running those tests.

We should look at this as a hunt for bugs that have occurred before, rather
than an attempt to prove that everything is working.  Otherwise, it is easy
to fall into the trap of writing tests that you know will pass, but which don't
actually prove very much.

To take the diff example again, let's say that a recently resolved bug
involved diff not noticing the difference between a file that ends with a
linefeed and the same file missing that linefeed.  Since this is a bug that
actually occurred, it is worth testing for, so we create:

/usr/doc/diff/TESTS:

  # diff test 1: the two files differ only in the trailing linefeed,
  # so a diff that reports no difference has the old bug
  echo -n "Test File" > /tmp/difftest1
  echo "Test File" > /tmp/difftest2
  diff /tmp/difftest1 /tmp/difftest2 > /dev/null 2>&1 &&
    echo "diff:  test 1 failed"
  rm /tmp/difftest?

and so on, for each of the things that the maintainer knows to have gone wrong 
at some time in the past.  Not only does this test for the bug, but it also 
gives diff a workout that is likely to spot other bugs.
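For what it's worth, the same test can be made a little more defensive;
this sketch assumes mktemp(1) is available (which may not be true
everywhere), so that parallel test runs can't collide on fixed /tmp names,
and it reports a pass as well as a failure:

```shell
#!/bin/sh
# diff test 1, hardened sketch: temp files via mktemp, explicit result.
t1=$(mktemp) || exit 1
t2=$(mktemp) || exit 1
printf 'Test File'   > "$t1"   # no trailing linefeed
printf 'Test File\n' > "$t2"   # trailing linefeed
if diff "$t1" "$t2" > /dev/null 2>&1; then
  echo "diff:  test 1 failed"  # diff saw no difference -> the old bug
else
  echo "diff:  test 1 passed"
fi
rm -f "$t1" "$t2"
```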

When new bugs are reported and fixed, the maintainer should be encouraged to 
add a test that fails on the pre-fix version and succeeds on the new one.

Please don't interpret this as criticism of the checklists idea; I'm just
trying to make sure that effort expended in this direction is used as
effectively as possible.

Also, I realise that we can publish checklists on the web much more quickly 
than we can persuade each maintainer to incorporate test scripts into their 
packages, so we should definitely have the checklists.  I'd just like it to be 
an interim measure, until the test scripts become a reality.

Does this make any sense to anyone else?  If so, we should probably start an
effort, in parallel with the checklists effort, to define a few standards for
where to put test scripts, what to call them, etc.

Some support programs for running the tests and submitting test results would 
be good too.
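As a starting point, a runner could be as simple as a loop over the
per-package TESTS files.  The /usr/doc/<package>/TESTS location and the log
path below are just placeholders, not a proposed standard:

```shell
#!/bin/sh
# Hypothetical runner sketch: execute every per-package TESTS script
# under a given doc tree and collect the output in one log file.
docroot=${1:-/usr/doc}
log=${2:-/tmp/package-tests.log}
: > "$log"
for tests in "$docroot"/*/TESTS; do
  [ -f "$tests" ] || continue       # skip if the glob matched nothing
  echo "== $tests ==" >> "$log"
  sh "$tests" >> "$log" 2>&1
done
echo "results written to $log"
```

A companion script could then mail the log to a central address, which
would cover the "submitting test results" half as well.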

Cheers, Phil.



--
TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word "unsubscribe" to
debian-devel-request@lists.debian.org . 
Trouble?  e-mail to templin@bucknell.edu .

