
Re: Presentation of Debian Med on local Next Generation Sequencing workshop (April)



On Wed, 19 Mar 2014, Diane Trout wrote:
> > I think the more effort we spent in autopkgtest suites the more we will
> > be able to come closer to reproducibility.  I hope that this effort will
> > be rewarded by such projects who for whatever reason do not (yet) trust
> > our work.

> There's also the issue of upgrades changing your environment.

> We've had issues with newer versions of cufflinks and tophat introducing new 
> bugs. 

Upon a brief look: I have apt-get sourced cufflinks -- found no
unit tests available.

The same for tophat.

I really hope that upstream has some established procedures in place
to make sure their code works correctly.

Myself:  I would have thought twice (or at least checked with upstream
on the details of the above aspect) before using either of those tools.

As someone somewhat involved in scientific software development, I can
say that testing (unit-, regression- -- separate or in combination) is
of paramount importance for any project, unless it is already 10 years
old, has been tested by hundreds of users verifying its correct
operation "manually", and is no longer developed (so no new bugs could
possibly be introduced).

And this aspect -- testing -- might be the one where anyone could
contribute.  But unfortunately it is rarely fun and would hardly be
acknowledged in someone's CV/portfolio.  That is probably why testing
contributions are the fewest in number :-/  But maybe someone bright
will come up with an idea for how to improve our situation on this
front.

But testing contributions might also be the easiest ones to make
;) -- I call such tests "functional" tests:

- take your use-case code/pipeline
- take some miniature simulated or real data (maybe the upstream
  project already has some for internal testing)
- run your code, get the results, add assertions
- contribute back "upstream".
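The steps above might be sketched roughly as follows.  Note that
run_pipeline() is a toy stand-in I made up for illustration -- in a
real functional test it would be your actual analysis pipeline, and
the data would come from a miniature real or simulated dataset:

```python
# Minimal "functional" test sketch: run a pipeline on miniature data,
# inspect the results once, and freeze them into assertions.

def run_pipeline(reads):
    """Toy stand-in for a real pipeline: count reads per 'gene'."""
    counts = {}
    for gene in reads:
        counts[gene] = counts.get(gene, 0) + 1
    return counts

def test_pipeline_on_miniature_data():
    # miniature simulated dataset (would normally be loaded from files)
    reads = ["geneA", "geneB", "geneA", "geneA"]
    result = run_pipeline(reads)
    # assertions capturing the known-correct outcome
    assert result == {"geneA": 3, "geneB": 1}

test_pipeline_on_miniature_data()
```

Such a test says nothing about internals, but it pins down the
end-to-end behavior a user actually relies on, which is exactly what
tends to break on upgrades.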

In my case I am doing something like that with the bug reports users
file against PyMVPA:
http://github.com/PyMVPA/PyMVPA/blob/HEAD/mvpa2/tests/test_usecases.py
which serves us in two ways: it replicates the original bug, and then
assures not only that it does not reappear (which could be, and often
is, verified by a dedicated unit test), but also that user-constructed
pipelines do not get broken throughout our development.
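In spirit, such a bug-replication test looks like the following sketch
(mean_score() and the empty-input bug are hypothetical, not actual
PyMVPA code -- the point is the pattern: reconstruct the exact user
call that was reported broken, then assert the fixed behavior):

```python
# Bug-replication test pattern: the user's reported use case becomes a
# permanent test, so the bug cannot silently reappear later.

def mean_score(scores):
    # the (hypothetical) original bug: ZeroDivisionError on []
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

def test_empty_input_regression():
    # user-reported use case: an empty result set crashed the pipeline
    assert mean_score([]) == 0.0
    # and the ordinary case still works
    assert mean_score([2.0, 4.0]) == 3.0

test_empty_input_regression()
```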

P.S. On this note I will try to introduce a .travis.yml into one of
the projects on our radar for inclusion in Debian ;)
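For a Python project, a minimal .travis.yml can be as short as this
sketch (the install and test commands are assumptions to be adjusted
to the project's actual build and test runner):

```yaml
# Minimal Travis CI configuration sketch for a Python project
language: python
python:
  - "2.7"
install:
  - pip install -e .
script:
  - python -m unittest discover
```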

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate,     Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik        

