
Preparing for data and workflow testing



Hello,

I keep thinking about "what's next" and "how we should reach out to
professionals in the field" - I mean those who are not already with
us. If our audience is medical doctors, then they need proof that our
packages

a) just work, and
b) take in their own data easily.

For a) I see a set of routine workflows that we run. For b) I see tutorials.

Contrary to what I first thought, a) and b) can evolve independently.
I am not yet sure what I want a tutorial to look like - a wiki page
plus a YouTube video, maybe? I am not sure about a) either. In my
current mindset it would be a Debian package with data and a routine to
test it; see the sketch below. What Debian cannot yet do is allow for
manual inspection of test results, right? The pigx workflows, for
instance, generate HTML reports that many will find appealing. If we
ran the analyses and many different pairs of eyes inspected the reports
in their browsers, that would add some extra confidence - both
technically and for the semantics behind the analyses. Maybe we even
manage to run different workflows on the same data, and some of our
eyeballs spot biologically relevant differences between these analyses.
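
To make a) a bit more concrete: an autopkgtest could ship small sample
data with the package and run the workflow over it, failing on a
non-zero exit or a missing report. A minimal sketch in Python follows;
the command name "run-workflow" and all paths are made-up placeholders,
not any real package's interface:

    #!/usr/bin/env python3
    # Toy autopkgtest-style routine: run a workflow on bundled
    # sample data and check that the expected HTML report appears.
    # "run-workflow" and the paths are hypothetical placeholders.
    import subprocess
    import sys
    from pathlib import Path

    DATA = Path("/usr/share/doc/example-workflow/sample-data")
    OUT = Path("report")

    # A non-zero exit of the workflow fails the test outright.
    subprocess.run(
        ["run-workflow", "--input", str(DATA), "--output", str(OUT)],
        check=True,
    )

    # The automated part can only assert that the report exists;
    # judging its content still needs human eyeballs in a browser.
    report = OUT / "index.html"
    if not report.is_file():
        sys.exit(f"expected report {report} was not generated")
    print(f"OK: {report} ready for manual inspection")

Such a script would be wired up through debian/tests/control in the
usual way - and the manual-inspection part is exactly what that
mechanism does not cover.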

On the spreadsheet I added a new tab "Data" where I hope to gather
ideas on how to decide on the data sources with which we feed the
workflows. When we combine a set of data sources with a workflow, we
have an analysis, right? And then we could have meta-analyses that just
depend on multiple analysis packages to perform automated comparisons?
A toy sketch of that layering follows below.
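
Here is that layering as a toy model in Python - all names are
illustrative, nothing here reflects an existing package layout:

    # Toy model: data sources + workflow = analysis; a meta-analysis
    # depends on several analyses. Purely illustrative naming.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DataSource:
        name: str
        url: str

    @dataclass(frozen=True)
    class Workflow:
        package: str  # e.g. the Debian package providing the pipeline

    @dataclass(frozen=True)
    class Analysis:
        workflow: Workflow
        sources: tuple[DataSource, ...]

    @dataclass(frozen=True)
    class MetaAnalysis:
        # compares the outputs of several analyses, ideally the
        # same data run through different workflows
        analyses: tuple[Analysis, ...]

    # Same (hypothetical) data fed through two different workflows:
    reads = DataSource("example-reads", "https://example.org/reads")
    meta = MetaAnalysis((
        Analysis(Workflow("workflow-a"), (reads,)),
        Analysis(Workflow("workflow-b"), (reads,)),
    ))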

So, I don't really know where this is going yet. Whoever is interested
in feeding in some thoughts, please leave some footprints on

https://docs.google.com/spreadsheets/d/1tApLhVqxRZ2VOuMH_aPUgFENQJfbLlB_PFH_Ah_q7hM/edit?usp=sharing

Best,

Steffen


