
Using i18n frameworks for a common workflow



Dear publicity team,

Donald reached out earlier and prompted me to finally write down
what has been on my mind for a while: a possible workflow for the
publicity team to stay on top of

  - ensuring that we don't miss out on stories
  - identifying the appropriate output channels
  - and pushing to those.

Be warned, this is highly conceptual and I have no idea how workable
it is, but I feel motivated to jot down the principles. I am also
not really an active member of the team at this point, so you can
also just ignore me.

Let me illustrate my idea in the form of a walkthrough, so that we
don't waste too much time on abstractions.

0. A story (a tidbit, a piece of information, a news item,
   anything…) is identified, e.g. because someone mails
   submissions@press.debian.org, mentions something on IRC, or one
   of us stumbles across a webpage, microblog, or whatever other
   channel.

1. Taking this story and entering it into our workflow is the first
   step. This could be done e.g. by means of creating a file in Git
   under the incoming/ directory. The file could be RFC-822-like and
   collect metadata as well as free-form text and notes.

   Some mechanism informs the team about new submissions, or even
   pings the team daily while there are any files in that folder.

   Ideally, stories get expiration dates after which they get moved
   to expired/.
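
   To make this a bit more concrete, here is a rough sketch of what
   such a story file and its handling could look like. Everything
   here is invented for illustration (the field names, the date, the
   story itself); Python's stdlib email module happens to parse
   RFC-822-like files out of the box:

   ```python
   # Sketch only: a hypothetical story file as it might sit in
   # incoming/, parsed with the stdlib RFC-822 parser.
   from email.parser import Parser
   from datetime import date

   story_text = """\
   Title: Debian jessie frozen
   Submitted-By: someone@example.org
   Expires: 2014-12-31

   Free-form notes about the story go here.
   """

   story = Parser().parsestr(story_text)

   # Metadata is available as headers, notes as the body.
   title = story["Title"]
   notes = story.get_payload()

   # A daily cron job could move stories past their Expires date
   # from incoming/ to expired/.
   expired = date.today() > date(*map(int, story["Expires"].split("-")))
   ```

   The nice part is that we would not have to write a parser at all;
   any RFC-822-capable tool can read and query these files.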

2. Someone analyses the story and identifies the scope of its
   intended publication. Is this something that should go onto
   a blog? Onto bits.d.o? Just Twitter?

   Each of the available output channels (i.e. all the channels we
   are responsible for) is given one of four ratings: MUST, SHOULD,
   COULD, SKIP. I'll get back to those four ratings shortly.

   The story is then moved from incoming/ to drafting/.
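
   The ratings could live in the very same file as additional
   headers, which keeps everything queryable. Again, the header and
   channel names below are made up, just to show the idea:

   ```python
   # Sketch of step 2: per-channel ratings stored as RFC-822 headers
   # in the story file. All names are hypothetical.
   from email.parser import Parser

   story = Parser().parsestr("""\
   Title: Example story
   Channel-bits: MUST
   Channel-blog: SHOULD
   Channel-twitter: COULD
   Channel-identica: SKIP

   Notes.
   """)

   # Collect the per-channel ratings from the headers.
   ratings = {k[len("Channel-"):].lower(): v
              for k, v in story.items() if k.startswith("Channel-")}

   # E.g. the channels that block publication until done:
   must_channels = [c for c, r in ratings.items() if r == "MUST"]
   ```

   That way, "which stories still need a bits entry?" becomes a
   trivial query over the files in drafting/.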

3. Now comes the i18n hack. The file created in (1.) above is
   regarded as the source file, en_SOURCE, and we leverage standard
   i18n tools to create "translations" of the data in the file for
   each output medium. So e.g. en_BLOG would become a blog post,
   en_TWITTER a 140-character abbreviation of the story, and es_BITS
   a Spanish bits entry (just as an example).

   … en_BLOG translations could auto-generate en_TWITTER and
   en_PUMPIO translations, and en_PUMPIO could default to en_TWITTER
   to save time. I am sure we could come up with plenty of
   optimisations here.
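
   The staleness check behind this is the same one gettext uses for
   fuzzy entries: each "translation" records which revision of
   en_SOURCE it was derived from, and a mismatch flags it as
   out-of-date. A minimal sketch, with invented header names:

   ```python
   # Sketch: detect an out-of-date "translation" by comparing a
   # recorded hash of the source against the current source text,
   # much like gettext marks entries fuzzy. Names are hypothetical.
   import hashlib

   def digest(text):
       return hashlib.sha1(text.encode("utf-8")).hexdigest()

   source = "Story text, revision 2"

   # The translation file would carry this header from when it was
   # written, i.e. the hash of revision 1 of the source.
   translation_header = {"X-Source-Hash": digest("Story text, revision 1")}

   # The source changed since, so the translation needs attention.
   stale = translation_header["X-Source-Hash"] != digest(source)
   ```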

4. Once all channels identified as MUST for a story have
   translations (and those translations have been proof-read and
   signed off, which could be defined as a requirement for each
   channel), the story can be moved from drafting/ to frozen/,
   manually or automatically. If not all SHOULD translations exist
   yet, the move could instead happen on the planned publication
   date, dragging along any COULD translations that exist. Don't
   worry if this sounds complex; it's all just brainstorming.

   This move triggers automatic pushes of each translation to the
   destined output channel, or reminders for people to do the manual
   publication.
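
   The freeze rule itself is tiny. Here is a sketch of it in Python,
   with the data shapes invented (a real implementation would read
   them from the files in Git):

   ```python
   # Sketch of the step-4 freeze rule: a story may move to frozen/
   # once every MUST channel has a signed-off translation.
   def may_freeze(ratings, translations):
       """ratings: {channel: "MUST"/"SHOULD"/"COULD"/"SKIP"};
       translations: {channel: {"signed_off": bool}} for existing ones."""
       return all(
           ch in translations and translations[ch]["signed_off"]
           for ch, rating in ratings.items() if rating == "MUST"
       )

   ratings = {"bits": "MUST", "blog": "SHOULD", "twitter": "COULD"}
   done = {"bits": {"signed_off": True}}
   # The one MUST channel is signed off, so this story may freeze,
   # even though the SHOULD translation does not exist yet.
   ```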

5. A translation is moved to published/ once it's pushed to the
   public, and it should be trivial to also retire the source file
   and all associated translations when all MUST/SHOULD translations
   have been pushed.
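
   The retirement rule would be the same kind of check, just against
   what has been pushed rather than what has been signed off. Again
   a sketch with made-up data shapes:

   ```python
   # Sketch of the step-5 retirement rule: once every MUST and
   # SHOULD translation is published, the whole story can go.
   def may_retire(ratings, pushed):
       """ratings: {channel: rating}; pushed: set of published channels."""
       required = {ch for ch, r in ratings.items() if r in ("MUST", "SHOULD")}
       return required <= pushed

   ratings = {"bits": "MUST", "blog": "SHOULD", "twitter": "COULD"}
   # With bits and blog pushed, the story can be retired; the COULD
   # channel never blocks retirement.
   ```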

This workflow offers several benefits, while not being overly
complicated, IMHO:

a) It's easy to keep track of stories that have just come in and
   which need to be targeted at output channels;

b) Metadata stored along with the stories allows for all kinds of
   automation and informational queries.

c) It's easy for the team to see what needs to be done, apart from
   targeting new stories, as the i18n tools can be used to quickly
   identify

   - missing translations
   - out-of-date translations, i.e. when the source story has been
     changed

   using the mechanisms translators use for their work.

d) Publication of articles can be automated, individually for each
   medium, and nothing depends on any single channel, or on whether
   publication there already happens automatically or manually.

What do you think?

-- 
 .''`.   martin f. krafft <madduck@d.o> @martinkrafft
: :'  :  proud Debian developer
`. `'`   http://people.debian.org/~madduck
  `-  Debian - when you have better things to do than fixing systems
 
"the unexamined life is not worth living"
                                                             -- platon
