
Re: [Neurodebian-devel] How can Blends techniques be adapted to NeuroDebian

On Fri, Aug 24, 2012 at 08:58:23AM +0200, Andreas Tille wrote:
> > I believe that we would benefit most at the level where all the
> > package-information that is present on the blends pages is processed to
> > a degree where it is ready for insertion into a (language-agnostic) page
> > template. In the blends pages case this is HTML, in our case this is RST.
> The tasks (and bugs) pages of the Blends web sentinel are actually using
> Genshi templates (to finally produce HTML).  As you know I considered
> some way to export RST via Genshi but this does not seem to be possible
> directly.
> > If we can establish this layer as an interface, we could write/change
> > our code to create a portal like neuro.debian.net from this information.
> This is what we discussed in Grenoble.  In my experiments Genshi
> just outputs some XML, and you either need to postprocess the data to
> create RST or choose an alternative templating system.
> What I do have is some Python code that holds the Blends data.  If there
> is no reasonable templating system I could write an rst_output method
> which spits out the data you need to a file.


> I think if you don't know a fitting templating system, I could write an
> rst_output method and we could proceed from there.

I think we should not decide on a templating engine for the purpose of
an interface for portal builders. Whatever decision we make, the next
blend will need something else. I'd prefer to store the _input_ to
whatever templating engine instead. In other words, something like a
JSON (or similar) structure that holds the description, screenshot link,
available versions, architectures per version, ... for each package.
That could be wrapped into a larger structure that has all the packages
per task (per blend).  If the information is broken down to a level
where no markup in the respective values is required, we should be able
to feed it into ANY templating language and should hence be relatively
future-proof.
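A minimal sketch of what such a structure could look like, in Python.
All field names, the package, and the URL are illustrative only, not an
agreed-upon schema:

```python
import json

# Hypothetical blend/task/package structure; every field name here is
# an illustration, not a proposed standard.
blend_data = {
    "blend": "debian-med",
    "tasks": {
        "imaging": {
            "packages": [
                {
                    "name": "fsl",
                    "description": "analysis tools for FMRI, MRI and "
                                   "DTI brain imaging",
                    "screenshot": "https://screenshots.debian.net/package/fsl",
                    # architectures per available version
                    "versions": {
                        "5.0-1": ["amd64", "i386"],
                        "5.0-2": ["amd64"],
                    },
                },
            ],
        },
    },
}

# The values carry no markup, so any templating engine can consume the
# serialized form directly.
print(json.dumps(blend_data, indent=2))
```

The point is only that the leaves are plain strings and lists; the
nesting itself (blend, task, package) mirrors what the sentinel already
knows.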

If we have all the information aggregated in this way, we could even
decide at some point to put it all back into a DB (if access latency and
such ever becomes an issue). Since you must have all relevant information
already represented in some form to be able to feed Genshi, I'd assume
that it should be relatively straightforward to export it right at this
point, or am I wrong?
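To illustrate the "feed it into ANY templating language" point: once the
values are markup-free, even the stdlib string.Template suffices to emit
an RST snippet. This is purely a sketch (the package record is made up,
and the real export would come from the Blends code), not a proposal for
a specific engine:

```python
from string import Template

# Hypothetical package record, as it might arrive from a JSON export.
pkg = {
    "name": "fsl",
    "description": "analysis tools for FMRI, MRI and DTI brain imaging",
}

# A tiny RST fragment: section title, underline, body text.  The same
# data could equally feed Genshi, Jinja2, or anything else.
tpl = Template("$name\n$underline\n\n$description\n")
rst = tpl.substitute(
    name=pkg["name"],
    underline="-" * len(pkg["name"]),  # RST underline must cover the title
    description=pkg["description"],
)
print(rst)
```

The engine becomes an interchangeable detail; only the data interface
needs to be stable.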


Michael Hanke
