
Patch for the English version of /devel/debian-accessibility/software.wml

I have corrected the tags. Here it is.

Regards.
--- ingles-software.wml.orig	2006-03-01 23:42:34.000000000 +0100
+++ ingles-software.wml	2006-03-01 23:49:56.000000000 +0100
@@ -13,84 +13,84 @@
 
 <h2><a id="speech-synthesis" name="speech-synthesis">Speech Synthesis and related APIs</a></h2>
 <a11y-pkg name="EFlite" tag=eflite url="http://eflite.sourceforge.net/">
-<P>
-  A speech server for <A href="#emacspeak">Emacspeak</A> and
-  <A href="#yasr">yasr</A> (or other screen readers) that allows them to
-  interface with <A href="#flite">Festival Lite</A>, a free text-to-speech
+<p>
+  A speech server for <a href="#emacspeak">Emacspeak</a> and
+  <a href="#yasr">yasr</a> (or other screen readers) that allows them to
+  interface with <a href="#flite">Festival Lite</a>, a free text-to-speech
   engine developed at the CMU Speech Center as an off-shoot of
-  <A href="#festival">Festival</A>.
-</P>
-<P>
+  <a href="#festival">Festival</a>.
+</p>
+<p>
   Due to limitations inherited from its backend, EFlite does only provide
   support for the English language at the moment.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="Festival Lite" tag=flite>
-<P>
+<p>
   A small fast run-time speech synthesis engine.  It is the latest
   addition to the suite of free software synthesis tools including
   University of Edinburgh's Festival Speech Synthesis System and
   Carnegie Mellon University's FestVox project, tools, scripts and 
   documentation for building synthetic voices.  However, flite itself
   does not require either of these systems to run.
-</P>
-<P>
+</p>
+<p>
   It currently only supports the English language.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="Festival" tag="festival"
           url="http://www.cstr.ed.ac.uk/projects/festival/">
-<P>
+<p>
   A general multi-lingual speech synthesis system developed
-  at the <A href="http://www.cstr.ed.ac.uk/">CSTR</A> [<i>C</i>entre for
+  at the <a href="http://www.cstr.ed.ac.uk/">CSTR</a> [<i>C</i>entre for
   <i>S</i>peech <i>T</i>echnology <i>R</i>esearch] of
-  <A href="http://www.ed.ac.uk/text.html">University of Edinburgh</A>.
-</P>
-<P>
+  <a href="http://www.ed.ac.uk/text.html">University of Edinburgh</a>.
+</p>
+<p>
   Festival offers a full text to speech system with various APIs, as well an
   environment for development and research of speech synthesis techniques.
   It is written in C++ with a Scheme-based command interpreter for general
   control.
-</P>
-<P>
+</p>
+<p>
   Besides research into speech synthesis, festival is useful as a stand-alone
   speech synthesis program. It is capable of producing clearly understandable
   speech from text.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="recite" tag="recite">
-<P>
+<p>
   Recite is a program to do speech synthesis.  The quality of sound produced
   is not terribly good, but it should be adequate for reporting the occasional
   error message verbally.
-</P>
-<P>
+</p>
+<p>
   Given some English text, recite will convert it to a series of phonemes,
   then convert the phonemes to a sequence of vocal tract parameters, and
   then synthesis the sound a vocal tract would make to say the sentence.
   Recite can perform a subset of these operations, so it can be used to
   convert text into phonemes, or to produce an utterance based on vocal
   tract parameters computed by another program.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="Speech Dispatcher" tag="speech-dispatcher"
           url="http://www.freebsoft.org/speechd">
-<P>
+<p>
   Provides a device independent layer for speech synthesis.
   It supports various software and hardware speech synthesizers as
   backends and provides a generic layer for synthesizing speech and
   playing back PCM data via those different backends to applications.
-</P>
-<P>
+</p>
+<p>
   Various high level concepts like enqueueing vs. interrupting speech
   and application specific user configurations are implemented in a device
   independent way, therefore freeing the application programmer from having
   to yet again reinvent the wheel.
-</P>
+</p>
 </a11y-pkg>
 
-<H2><A name="i18nspeech">Internationalised Speech Synthesis</A></H2>
-<P>
+<h2><a name="i18nspeech">Internationalised Speech Synthesis</a></h2>
+<p>
 All the currently available free solutions for software based speech
 synthesis seem to share one common deficiency: They are mostly limited to
 English, providing only very marginal support for other languages, or in
@@ -103,8 +103,8 @@
 web services, is it reasonable to require blind people interested in
 Linux to learn English just to understand their computer's output and to
 conduct all their correspondence in a foreign tongue?
-</P>
-<P>
+</p>
+<p>
 Unfortunately, speech synthesis is not really Jane Hacker's favourite
 homebrew project.  Creating an intelligible software speech
 synthesizer involves time-consuming tasks.
@@ -117,8 +117,8 @@
 logical groups such as sentences, phrases and words. Such lexical
 analysis requires a language-specific lexicon seldom released under a
 free license.
-</P>
-<P>
+</p>
+<p>
 One of the most promising speech synthesis systems is Mbrola, with
 phoneme databases for over ten different languages. Unfortunately, the license
 chosen by the project is very restrictive. Mbrola can only be distributed as
@@ -129,24 +129,24 @@
 given the restrictive licensing model of Mbrola, it cannot be used
 as a basis for further work in this direction, at least not in the context
 of the Debian Operating System.
-</P>
-<P>
+</p>
+<p>
 Without a broadly multi-lingual software speech synthesizer, Linux
 cannot be accepted by assistive technology providers and people with
 visual disabilities. What can we do to improve this?
-</P>
-<P>
+</p>
+<p>
 There are basically two approaches possible:
-</P>
-<OL>
-<LI>Organize a group of people willing to help in this regard, and
+</p>
+<ol>
+<li>Organize a group of people willing to help in this regard, and
 try to actively improve the situation.  This might get a bit complicated,
 since a lot of specific knowledge about speech synthesis will be required,
 which isn't that easy if done via an autodidactic approach.  However, this
 should not discourage you.  If you think you can motivate a group of
 people large enough to achieve some improvements, it would be worthwhile
-to do.</LI>
-<LI>Obtain funding and hire some institute which already has the
+to do.</li>
+<li>Obtain funding and hire some institute which already has the
 know how to create the necessary phoneme databases, lexica and transformation
 rules.  This approach has the advantage that it has a better probability
 of generating quality results, and it should also achieve some improvements
@@ -154,13 +154,13 @@
 resulting work would be released should be agreed on in advance, and it should
 pass the DFSG requirements. The ideal solution would of course
 be to convince some university to undergo such a project on their own
-dime, and contribute the results to the Free Software community.</LI>
-</OL>
+dime, and contribute the results to the Free Software community.</li>
+</ol>
 
-<H2><A id="emacs" name="emacs">Screen review extensions for Emacs</A></H2>
+<h2><a id="emacs" name="emacs">Screen review extensions for Emacs</a></h2>
 <a11y-pkg name="Emacspeak" tag="emacspeak"
           url="http://emacspeak.sourceforge.net/">
-<P>
+<p>
   A speech output system that will allow someone who cannot see
   to work directly on a UNIX system.  Once you start Emacs with
   Emacspeak loaded, you get spoken feedback for everything you do.  Your
@@ -168,73 +168,73 @@
   that you cannot do inside Emacs :-).  This package includes speech servers
   written in tcl to support the DECtalk Express and DECtalk MultiVoice
   speech synthesizers.  For other synthesizers, look for separate
-  speech server packages such as Emacspeak-ss or <A href="#eflite">eflite</A>.
-</P>
+  speech server packages such as Emacspeak-ss or <a href="#eflite">eflite</a>.
+</p>
 </a11y-pkg>
 <a11y-pkg name="speechd-el" tag="speechd-el"
           url="http://www.freebsoft.org/speechd-el">
-<P>
+<p>
   An Emacs client and an Elisp library to
-  <A href="#speech-dispatcher">Speech Dispatcher</A>.  It provides a complex
+  <a href="#speech-dispatcher">Speech Dispatcher</a>.  It provides a complex
   speech interface to Emacs, focused especially on (but not limited to) the
   blind and visually impaired users.  It allows the user to work with Emacs
   without looking on the screen, using the speech output produced by the
   synthesizers supported in Speech Dispatcher.
-</P>
+</p>
 </a11y-pkg>
 <h2><a id="console" name="console">Console (text-mode) screen readers</a></h2>
 <a11y-pkg name="BRLTTY" tag="brltty" url="http://mielke.cc/brltty/">
-<P>
+<p>
   A daemon which provides access to the Linux console for a blind
   person using a soft braille display.
   It drives the braille terminal and provides complete screen review
   functionality.
-</P>
-<P>
+</p>
+<p>
   The following display models are currently (as of version 3.4.1-2) supported:
-</P>
-  <UL>
-   <LI>Alva (ABT3xx/Delphi)</LI>
-   <LI>BrailleLite (18/40)</LI>
-   <LI>BrailleNote (18/32)</LI>
-   <LI>EcoBraille displays</LI>
-   <LI>EuroBraille displays</LI>
-   <LI>HandyTech (Bookworm/Braillino/Braille Wave/Braille Star 40/80)</LI>
-   <LI>LogText 32</LI>
-   <LI>MDV braille displays</LI>
-   <LI>Papenmeier</LI>
-   <LI>Tieman Voyager 44/70 (USB), CombiBraille, MiniBraille and MultiBraille</LI>
-   <LI>TSI (PowerBraille/Navigator)</LI>
-   <LI>Vario (Emul. 1 (40/80)/Emul. 2)</LI>
-   <LI>Videobraille</LI>
-   <LI>VisioBraille</LI>
+</p>
+  <ul>
+   <li>Alva (ABT3xx/Delphi)</li>
+   <li>BrailleLite (18/40)</li>
+   <li>BrailleNote (18/32)</li>
+   <li>EcoBraille displays</li>
+   <li>EuroBraille displays</li>
+   <li>HandyTech (Bookworm/Braillino/Braille Wave/Braille Star 40/80)</li>
+   <li>LogText 32</li>
+   <li>MDV braille displays</li>
+   <li>Papenmeier</li>
+   <li>Tieman Voyager 44/70 (USB), CombiBraille, MiniBraille and MultiBraille</li>
+   <li>TSI (PowerBraille/Navigator)</li>
+   <li>Vario (Emul. 1 (40/80)/Emul. 2)</li>
+   <li>Videobraille</li>
+   <li>VisioBraille</li>
-  </UL>
+  </ul>
-<P>
+<p>
   BRLTTY also provides a client/server based infrastructure for applications
   wishing to utilize a Braille display.  The daemon process listens for
   incoming TCP/IP connections on a certain port.  A shared object library
   for clients is provided in the package
-  <A href="http://packages.debian.org/libbrlapi">libbrlapi</A>.  A static
+  <a href="http://packages.debian.org/libbrlapi">libbrlapi</a>.  A static
   library, header files and documentation is provided in package
-  <A href="http://packages.debian.org/libbrlapi-dev">libbrlapi-dev</A>.  This
-  functionality is for instance used by <A href="#gnopernicus">Gnopernicus</A>
+  <a href="http://packages.debian.org/libbrlapi-dev">libbrlapi-dev</a>.  This
+  functionality is for instance used by <a href="#gnopernicus">Gnopernicus</a>
+  functionality is for instance used by <a href="#gnopernicus">Gnopernicus</a>
   to provide support for display types which are not yet support by Gnopernicus
   directly.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="Screader" tag="screader"
           url="http://www.euronet.nl/~acj/eng-screader.html">
-<P>
+<p>
   The background program screader reads the screen and puts the information
   through to a software Text-To-Speech package (Like
-  `<A href="#festival">festival</A>') or a hardware speech synthesizer.
-</P>
+  `<a href="#festival">festival</a>') or a hardware speech synthesizer.
+</p>
 </a11y-pkg>
 <a11y-pkg name="Speakup" tag="speakup"
           url="http://www.linux-speakup.org/speakup.html">
-<P>
+<p>
   The kernel package
-  <A href="http://packages.debian.org/kernel-image-2.4.24-speakup">kernel-image-2.4.24-speakup</A>
+  <a href="http://packages.debian.org/kernel-image-2.4.24-speakup">kernel-image-2.4.24-speakup</a>
   contains a Linux kernel patched with speakup, a screen reader for the Linux
   console.  The special property of speakup is that it runs in kernel space,
   which does provide a little bit more low level access to the system then
@@ -242,131 +242,131 @@
   Speakup can for instance read critical kernel messages to you at a point
   where the kernel has already Oopsed, and no user space program could do
   anything useful at all anymore.
-</P>
-<P>
+</p>
+<p>
   Speakup currently supports the following hardware speech synthesizers:
-</P>
-  <UL>
-   <LI>DoubleTalk PC/LT</LI>
-   <LI>LiteTalk</LI>
-   <LI>Accent PC/SA</LI>
-   <LI>Speakout</LI>
-   <LI>Artic Transport</LI>
-   <LI>Audapter</LI>
-   <LI>Braille 'N Speak / Type 'N Speak</LI>
-   <LI>Dectalk External and Express</LI>
-   <LI>the Apollo2</LI>
+</p>
+  <ul>
+   <li>DoubleTalk PC/LT</li>
+   <li>LiteTalk</li>
+   <li>Accent PC/SA</li>
+   <li>Speakout</li>
+   <li>Artic Transport</li>
+   <li>Audapter</li>
+   <li>Braille 'N Speak / Type 'N Speak</li>
+   <li>Dectalk External and Express</li>
+   <li>the Apollo2</li>
-  </UL>
+  </ul>
 </a11y-pkg>
 <a11y-pkg name="Yasr" tag="yasr" url="http://yasr.sourceforge.net/">
-<P>
+<p>
   A general-purpose console screen reader for GNU/Linux and
   other UNIX-like operating systems.  The name "yasr" is an acronym that
   can stand for either "Yet Another Screen Reader" or "Your All-purpose
   Screen Reader".
-</P>
-<P>
+</p>
+<p>
   Currently, yasr attempts to support the Speak-out, DEC-talk, BNS, Apollo,
   and DoubleTalk hardware synthesizers.  It is also able to communicate with
   Emacspeak speech servers and can thus be used with synthesizers not directly
-  supported, such as <A href="#flite">Festival Lite</A> (via
-  <A href="#eflite">eflite</A>) or FreeTTS.
-</P>
-<P>
+  supported, such as <a href="#flite">Festival Lite</a> (via
+  <a href="#eflite">eflite</a>) or FreeTTS.
+</p>
+<p>
   Yasr works by opening a pseudo-terminal and running a shell, intercepting
   all input and output.  It looks at the escape sequences being sent and
   maintains a virtual "window" containing what it believes to be on the
   screen.  It thus does not use any features specific to Linux and can be
   ported to other UNIX-like operating systems without too much trouble.
-</P>
+</p>
 </a11y-pkg>
-<H2><A id="gui" name="gui">Graphical User Interfaces</A></H2>
-<P>
+<h2><a id="gui" name="gui">Graphical User Interfaces</a></h2>
+<p>
 Accessibility of graphical user interfaces on UNIX platforms has only recently
 received a significant upswing with the various development efforts around the
-<A href="http://www.gnome.org/">GNOME Desktop</A>, especially the
-<A href="http://developer.gnome.org/projects/gap/">GNOME Accessibility Project</A>.
-</P>
-<H2><A id="gnome" name="gnome">GNOME Accessibility Software</A></H2>
+<a href="http://www.gnome.org/">GNOME Desktop</a>, especially the
+<a href="http://developer.gnome.org/projects/gap/">GNOME Accessibility Project</a>.
+</p>
+<h2><a id="gnome" name="gnome">GNOME Accessibility Software</a></h2>
 <a11y-pkg name="Assistive Technology Service Provider Interface" tag="at-spi">
-<P>
+<p>
   This package contains the core components of GNOME Accessibility.
   It allows Assistive technology providers like screen readers to
   query all applications running on the desktop for accessibility
   related information as well as provides bridging mechanisms to support
   other toolkits than GTK.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="The ATK accessibility toolkit" tag="atk">
-<P>
+<p>
   ATK is a toolkit providing accessibility interfaces for applications or
   other toolkits. By implementing these interfaces, those other toolkits or
   applications can be used with tools such as screen readers, magnifiers, and
   other alternative input devices.
-</P>
-<P>
+</p>
+<p>
   The runtime part of ATK, needed to run applications built with it is available
-  in package <A href="http://packages.debian.org/libatk1.0-0">libatk1.0-0</a>.
+  in package <a href="http://packages.debian.org/libatk1.0-0">libatk1.0-0</a>.
   Development files for ATK, needed for compilation of programs or toolkits
-  which use it are provided by package <A href="http://packages.debian.org/libatk1.0-dev">libatk1.0-dev</A>.
-</P>
+  which use it are provided by package <a href="http://packages.debian.org/libatk1.0-dev">libatk1.0-dev</a>.
+</p>
 </a11y-pkg>
 <a11y-pkg name="gnome-speech" tag="gnome-speech">
-<P>
+<p>
   The GNOME Speech library gives a simple yet general API for programs
   to convert text into speech, as well as speech input.
-</P>
-<P>
+</p>
+<p>
   Multiple backends are supported, but currently only the
-  <A href="#festival">Festival</A> backend is enabled in this package; the
+  <a href="#festival">Festival</a> backend is enabled in this package; the
   other backends require either Java or proprietary software.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="Gnopernicus" tag="gnopernicus"
           url="http://www.baum.ro/gnopernicus.html">
-<P>
+<p>
   Gnopernicus is designed to allow users with limited or no vision to
   access GNOME applications.  It provides a number of features, including
   magnification, focus tracking, braille output, and more.
-</P>
+</p>
 </a11y-pkg>
-<H2><A id="input" name="input">Non-standard input methods</A></H2>
+<h2><a id="input" name="input">Non-standard input methods</a></h2>
 <a11y-pkg name="Dasher" tag="dasher" url="http://www.inference.phy.cam.ac.uk/dasher/">
-<P>
+<p>
   Dasher is an information-efficient text-entry interface, driven by natural
   continuous pointing gestures. Dasher is a competitive text-entry system
   wherever a full-size keyboard cannot be used - for example,
-</P>
-  <UL>
-   <LI>on a palmtop computer</LI>
-   <LI>on a wearable computer</LI>
-   <LI>when operating a computer one-handed, by joystick, touchscreen,
-       trackball, or mouse</LI>
-   <LI>when operating a computer with zero hands (i.e., by head-mouse or by
-       eyetracker).</LI>
+</p>
+  <ul>
+   <li>on a palmtop computer</li>
+   <li>on a wearable computer</li>
+   <li>when operating a computer one-handed, by joystick, touchscreen,
+       trackball, or mouse</li>
+   <li>when operating a computer with zero hands (i.e., by head-mouse or by
+       eyetracker).</li>
-  </UL>
+  </ul>
-<P>
+<p>
   The eyetracking version of Dasher allows an experienced user to write text
   as fast as normal handwriting - 25 words per minute; using a mouse,
   experienced users can write at 39 words per minute.
-</P>
-<P>
+</p>
+<p>
   Dasher uses a more advanced prediction algorithm than the T9(tm) system
   often used in mobile phones, making it sensitive to surrounding context.
-</P>
+</p>
 </a11y-pkg>
 <a11y-pkg name="GOK" tag="gok" url="http://www.gok.ca/">
-<P>
+<p>
   GOK [<i>G</i>NOME <i>O</i>nscreen <i>K</i>eyboard] is a dynamic onscreen
   keyboard for UNIX and UNIX-like operating systems.  It features Direct
   Selection, Dwell Selection, Automatic Scanning and Inverse Scanning access
   methods and includes word completion.
-</P>
-<P>
+</p>
+<p>
   GOK includes an alphanumeric keyboard and a keyboard for launching
   applications.  Keyboards are specified in XML enabling existing
   keyboards to be modified and new keyboards to be created.  The access
   methods are also specified in XML providing the ability to modify existing
   access methods and create new ones.
-</P>
+</p>
 </a11y-pkg>
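By the way, this kind of tag lowercasing can be done mechanically before review. A rough sketch with GNU sed (the `lowercase_tags` helper name is just for illustration; it only handles the tag names this patch touches, leaves attributes untouched, and is a crude textual pass rather than an HTML parser, so the result still needs checking):

```shell
# Lowercase only the tag names touched by this patch (P, OL, LI, UL, H2, A).
# Uses GNU sed's \L...\E case-conversion in the replacement; the match requires
# a space or '>' right after the name, so e.g. '<Package' is left alone.
lowercase_tags() {
  sed -E 's/<(\/?)(P|OL|LI|UL|H2|A)([ >])/<\1\L\2\E\3/g'
}

printf '<P>See <A href="#flite">Festival Lite</A></P>\n' | lowercase_tags
# -> <p>See <a href="#flite">Festival Lite</a></p>
```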
