
RE: testing "testing" (was: Implementing "testing")




Tom Musgrove <TomM@pentstar.com> writes:
TM> Would it be possible/feasible to set up a script that would run Lint,
TM> SLINT, etc. (Perhaps a pretty printer as well, that checks for
TM> inconsistent white space (which generally implies the programmer meant
TM> one thing and typed another...) ) and then files related bug
TM> reports?

David Z Maze <dmaze@INDIANA.MIT.EDU> replied:
DM>Oh dear.  Automated processes filing bug reports is probably a bad
DM>idea.  What if I disagree with lint's assessment of my package's
DM>source?  What if I haven't fixed the lint warnings since the last time
DM>your job ran?  What if there are different warnings?

First, I would have the script check for anything it has previously filed a
report on.  The message should be almost exactly the same from run to run,
so the query should be easy to compose and should find the report if one has
been submitted.  If a response has been made to the report, then action is
taken depending on that response (for instance, FIXED would result in a new
note and a change of status back to UNRESOLVED).  If no response has been
made, then it would add information to the existing bug report (still exists
in build foo.bar.xxx-yyy).  If you don't want to fix it or whatever, then
that is fine; you'll get the one bug report, plus the additions noting it
isn't fixed.  Or change the status to WONT FIX, and it will quit bugging you.
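
Roughly, in Python (a sketch only - the dict here stands in for whatever
query interface the real bug tracker provides, and the status names are the
ones used above):

    # Sketch of the per-warning logic.  An in-memory dict stands in for
    # the real tracker; the warning message doubles as the search key,
    # since lint output is nearly identical from run to run.
    reports = {}  # message -> {"status": ..., "notes": [...]}

    def process_warning(message, build_id):
        report = reports.get(message)
        if report is None:
            # never seen before: file a fresh report
            reports[message] = {"status": "UNRESOLVED", "notes": []}
        elif report["status"] == "WONT FIX":
            pass  # maintainer opted out; quit bugging them
        else:
            if report["status"] == "FIXED":
                # the warning reappeared, so reopen it
                report["status"] = "UNRESOLVED"
            report["notes"].append("still exists in build %s" % build_id)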

I'm more interested in it running SLINT than LINT (SLINT is Security LINT),
although I think it is proprietary to L0pht (does an open alternative
exist?).
There is probably other testing and/or code-review software out there that
would be valuable to run; these were just the two that popped to mind.

I'm not sure what you mean by "What if there are different warnings?"  Do
you mean multiple warnings from different bugs, or possibly multiple
warnings from the same bug?
I agree that this could be a problem - I'd suggest referencing the related
warnings in the same report.  (If there are three hundred errors, nitpicks,
whatever, then we probably don't want to file three hundred bug reports,
because there is probably just a missing colon or file somewhere...)
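
To sketch the grouping in the same rough Python (the "file:line: message"
output format and the bucket-by-file rule are just illustrative assumptions):

    # Collapse a flood of lint warnings into one report body per source
    # file, on the theory that many warnings in one file usually share
    # a single root cause (a missing colon, a missing file, ...).
    def group_warnings(lint_lines):
        by_file = {}
        for line in lint_lines:
            filename = line.split(":", 1)[0]
            by_file.setdefault(filename, []).append(line)
        # one combined report per file, instead of one per warning
        return {f: "\n".join(ws) for f, ws in by_file.items()}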

DM>[...]Instead, developers are encouraged to run
DM>lintian before uploading their packages; support is in debhelper to do
DM>this, and dh_make causes it to be done by default.  Lintian-clean is
DM>good, but not required and not automagically checked for.

I would agree that encouraging developers to use these tools is good.  The
automagic run is for convenience's sake - and in the case of SLINT, not
every developer has a copy...

DM>Additionally, I wouldn't want to be held responsible because I
DM>maintained a package that had ugly/inconsistent/unsafe code.  Is it
DM>the package maintainer's responsibility to, say, rewrite all of GNOME
DM>because its source is ugly?  (Well, uncommented, but that's close
DM>enough.)

If the bug is low priority, then it can be marked as such.  Just because a
report has been filed doesn't mean you have to fix it.  This is intended to
give more information to the programmer/maintainer, not to be a burden.  I'm
curious how many people have access to a tool like SLINT, or regularly use
memory shakers or random-input tools.  My guess is not many, mostly due to
lack of time - if it is done automagically, then they get the information
without the sacrifice in time.

DM>This isn't to say that I think automated testing is a bad idea.
DM>Pointless testing, like the following:

TM> After a successful compile, it could also run the program and
TM> use a memory shaker, and/or software that gives the program random
TM> input, and again files a bug report for any crashes.

DM>isn't necessarily the best idea.  We don't want to set up a situation
DM>where a package passes all of the automated tests, but this just means
DM>it's lint-clean, lintian-clean, and doesn't barf on random input, not
DM>necessarily that it works correctly.

As in life, passing all of the tests doesn't imply correctness or ability,
just that one can pass the test.  Failing the test, however, generally
points to potential deficiencies.  Memory shakers and random-input tools
don't beat a good design/code review, but they often give strong hints as to
where you need one.
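
For what it's worth, a crude random-input run is only a few lines of script.
A minimal Python sketch, assuming a POSIX system (where a negative return
code means the process died on a signal); the target command is made up:

    # Feed a program random bytes on stdin and record any run that
    # crashes; each crash would become (or be appended to) a bug report.
    import os
    import subprocess

    def fuzz(cmd, runs=100, size=4096):
        crashes = []
        for i in range(runs):
            data = os.urandom(size)  # random input
            proc = subprocess.run(cmd, input=data,
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL)
            if proc.returncode < 0:  # killed by a signal => crash
                crashes.append((i, -proc.returncode))
        return crashes

    # e.g. fuzz(["./some-package-binary"])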

TM> (PS Please CC me in all replies since I am not subscribed to this list)

DM> (If you're going to make a proposal like this, wouldn't you at least
DM> like to hear the discussion on it?)

I've subscribed now :)  I just hadn't at the time of the proposal; yes, I
would like to hear all of the discussion.

Thanks for your comments,

Tom M.
TomM@Pentstar.com


