Experiment to gather information about prospective packages (Was: [Blends-commit] r3275 - /blends/trunk/machine_readable/fetch-machine-readable)
- To: Charles Plessy <email@example.com>
- Cc: Debian Pure Blends List <firstname.lastname@example.org>
- Subject: Experiment to gather information about prospective packages (Was: [Blends-commit] r3275 - /blends/trunk/machine_readable/fetch-machine-readable)
- From: Andreas Tille <email@example.com>
- Date: Fri, 6 Apr 2012 15:46:42 +0200
- Message-id: <20120406134642.GA28498@an3as.eu>
- In-reply-to: <20120406113009.GB19343@falafel.plessy.net>
- References: <E1SG2KR-00054w-NR@vasks.debian.org> <20120406113009.GB19343@falafel.plessy.net>
[Charles pushed me a bit to be more verbose about current work. I
intended to write some announcement together with the citation data
*after* I'm back online at the end of next week, but here we go.]
On Fri, Apr 06, 2012 at 08:30:09PM +0900, Charles Plessy wrote:
> can you share your plans on a mailing list ? I do not want to make duplicated
> work on my side.
Last night I had an idea how we could reduce data duplication in
our tasks files even further. For the moment it is just an experiment,
but I wanted to commit early. My plan was to first see whether the idea
makes sense and works out before reporting, but since you are asking
I am forwarding to you what I wrote to firstname.lastname@example.org (it seems this
is not archived anywhere - I was under the impression that it is a
public list). I noticed that in NeuroDebian some git repositories are
missing a master tag, and so my gathering attempt failed. I think this
is enough explanation of my work for the moment - more will come once
I consider the approach reasonable enough to create a UDD table featuring
information about prospective packages.
BTW, the *preliminary* (= unchecked, untested, potentially wrong, ...) result
can be seen here.
Thanks for your quick response.
On Thu, Apr 05, 2012 at 10:44:43AM -0400, Yaroslav Halchenko wrote:
> because upstream has it there... I guess we should then replicate those clones
> on alioth as well for your scripts to pick them up, right?
This would make sense. For the moment I would like to restrict the
investigation to Alioth.
> On Thu, 05 Apr 2012, Yaroslav Halchenko wrote:
> > just for completeness:
> > we extract changelogs for everything we upload to NeuroDebian (and that is a
> > bit more than Debian, e.g. perspective packages) under
> > http://neuro.debian.net/debian/extracts so it might even be more
> > up-to-date/complete if you need just changelogs, BUT it would also contain
> > automagic backporting changelog entry for NeuroDebian and packages we do not
> > maintain ourselves but just backport/build when needed
The backporting does not really matter. From the changelog I would like
to obtain just the source package name via dpkg-parsechangelog and,
in case it is a (prepared!) initial upload, the WNPP bug number.
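To sketch what I mean: dpkg-parsechangelog does this properly, but the two pieces of information can already be pulled from a raw changelog with a few lines of stdlib Python. The package name, version and bug number below are made up for illustration.

```python
import re

def changelog_source_and_wnpp(changelog_text):
    """Extract the source package name from the first changelog line and,
    for a prepared initial upload, the WNPP bug number it closes."""
    first_line = changelog_text.splitlines()[0]
    # First line format: "source (version) distribution; urgency=..."
    m = re.match(r"(?P<source>\S+) \((?P<version>[^)]+)\)", first_line)
    source = m.group("source") if m else None
    # An initial upload usually closes its ITP bug: "Closes: #NNNNNN"
    bug = re.search(r"[Cc]loses:\s*#?(\d+)", changelog_text)
    return source, (bug.group(1) if bug else None)

example = """mypkg (0.1-1) UNRELEASED; urgency=low

  * Initial release (Closes: #123456)

 -- Jane Doe <jane@example.org>  Fri, 06 Apr 2012 12:00:00 +0200
"""
print(changelog_source_and_wnpp(example))  # -> ('mypkg', '123456')
```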
I'm more interested in the information in debian/control, and possibly
in debian/copyright to fetch the license of the "Files: *" paragraph.
Moreover, I'm keen on debian/upstream files - it seems the upstream
gatherer is not yet fully functional, and it might not hurt to look for
alternatives which could later be merged into this effort.
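For the copyright part, here is a hedged stdlib-only sketch of pulling the "Files: *" license out of a machine-readable (DEP-5) debian/copyright file; real code would rather use python-debian's deb822 module. The sample text is invented.

```python
def deb822_stanzas(text):
    """Split RFC822-style text into a list of field dicts; continuation
    lines are appended to the previous field."""
    stanzas, fields, key = [], {}, None
    for line in text.splitlines():
        if not line.strip():
            if fields:
                stanzas.append(fields)
            fields, key = {}, None
        elif line[0] in " \t" and key:
            fields[key] += "\n" + line.strip()
        else:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    if fields:
        stanzas.append(fields)
    return stanzas

def files_star_license(copyright_text):
    """Return the short license name of the 'Files: *' paragraph."""
    for stanza in deb822_stanzas(copyright_text):
        if stanza.get("Files") == "*":
            return stanza.get("License", "").splitlines()[0]
    return None

sample = """Format: http://dep.debian.net/deps/dep5
Upstream-Name: mypkg

Files: *
Copyright: 2012 Jane Doe
License: GPL-3+

Files: debian/*
Copyright: 2012 John Packager
License: GPL-3+
"""
print(files_star_license(sample))  # -> GPL-3+
```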
The motivation for gathering this information is simply further
simplification of the tasks files: after having found a way to get rid
of the Published-* fields, I thought about a way to get all information
about preliminary packages straight out of the Vcs. In the tasks files
we have the WNPP bug (see changelog), the Vcs fields - which I am just
fetching from the Vcs and will try to verify - Homepage, and Description.
All of this could be there even if the work in the Vcs has just started.
So I will try to gather this information from the Vcs for all those
packages which are *not* in Debian.
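The gathering step could look roughly like this: given a local checkout of a packaging repository, collect the fields a tasks file would otherwise duplicate. The file paths and field names are the usual Debian ones, but the structure of this function is my own assumption, not the actual Blends gatherer.

```python
import os

# Fields a tasks file would otherwise carry by hand (assumed selection)
INTERESTING = ("Source", "Homepage", "Vcs-Git", "Vcs-Browser", "Description")

def gather_prospective_info(checkout_dir):
    """Collect basic metadata for a not-yet-uploaded package from the
    debian/ directory of a Vcs checkout."""
    info = {}
    control = os.path.join(checkout_dir, "debian", "control")
    if os.path.exists(control):
        with open(control) as f:
            for line in f:
                key, sep, value = line.partition(":")
                if sep and key in INTERESTING:
                    # first occurrence wins; continuation lines are
                    # ignored in this simple sketch
                    info.setdefault(key, value.strip())
    # debian/upstream metadata is recorded verbatim if present
    upstream = os.path.join(checkout_dir, "debian", "upstream")
    if os.path.exists(upstream):
        with open(upstream) as f:
            info["Upstream-Metadata"] = f.read()
    return info
```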
> > Andreas -- please let us know if having them creates any kind of obstacle for
> > you
I just restarted the job which uncovered the problem and will report
in case any issues remain.
> > pysurfer.git
> > fixed (WiP, not yet in Debian)
That's exactly the kind of information I'm keen on.