
Re: Extending accessibility support in D-I for Lenny

John Heim <jheim@wisc.edu> writes:

> From: "Mario Lang" <mlang@debian.org>
> > I am aware of these problems, and that is exactly why I was
> > never in favour for a speakup-patched kernel by default.
> > The way speakup currently is implemented is IMO not suitable
> > for the average server system, it is too hackish.
> Is there a solution then?

Sure, there are several.  We are just discussing what is
currently available, and what the status of these projects is, to get
a clearer picture of what direction to take.  Manpower is scarce,
and it is a good idea to try to spend it wisely.

> I don't mean this as a criticism -- by no means. But maybe it would
> be more productive to talk about what can be done rather than what
> cannot.

Well, talking about what can be done always involves the
danger of discovering that something can't be done just yet.

I understand you are trying to defend an existing solution, but if we want
to integrate accessibility into the linux mainstream, we have to be careful
that the solution we choose is going to be acceptable to the rest of the
world.  speakup has been described as the holy grail of linux accessibility
by its users many times in the past, and I actually see where the
enthusiasm comes from.  This thread (or my posting to it) just tried to
clarify why the speakup patch is a little bit too intrusive for mainstream
integration just yet.  This is not a criticism of the functionality provided
by speakup.  It's a criticism of the particular implementation.

One outcome of such a discussion could be that the actual
problems of speakup get addressed by someone.  As I've mentioned, such
a thing was at least theoretically planned in the past, and IMO it
would be desirable to spend manpower in that direction, instead
of trying to maintain separate speakup-patched kernels in all
sorts of distributions.  If no one steps up to get speakup integrated
into Linus' kernel tree, I guess speakup will stay a niche solution
for some people, with all the problems attached to such a situation.

> It would be really helpful for blind people to be able to walk up to
> any linux  machine, connect up a speech synth, and start typing
> away.

Well, there the problems start.  Nowadays, hardware speech synths are no
longer widely used, and are not manufactured anymore.  I had a very hard
time finding one to buy that was supported by speakup in 2004, and I doubt
that situation has improved.  In fact, I remember the only commercially
available sort-of newish hardware synth that offers a USB interface was
not supported by speakup for a long time (I guess it still isn't),
mostly because of a lack of hardware specs.  Since speech synths
aren't exactly rocket science, and there is software available
to log USB transfers on Windows, I guess another reason
for this type of hardware not being supported was
a lack of interest by the core developers capable of implementing it.
(Reverse engineering USB devices is pretty simple, at least
compared to other things.)
To base accessibility support on outdated technologies
is surely not a good idea.  I agree people should be able
to use their existing hardware, but default use cases are
definitely going to be soundcard based.

All technicalities aside, we of course agree that a blind
user should be able to walk up to any kind of
linux machine and just use it, no matter whether they want
to use their braille display, their good ol' hardware speech synth,
or headphones connected to the computer's soundcard.

> I've put speakup enabled kernels on many of the servers I deal with
> but some that were setup by other people or by my predecessor don't
> have it. So if those machines go down, I have to get help working
> with them. And that's not good for my job security.

That is exactly why speakup should be merged into linux mainline,
or why some equivalent functionality has to be integrated into
user space on the various distributions.

In fact, it would be an interesting project to write a /dev/vcs based
user-space screen reader for speech that can be used to review virtual
consoles.  Such an application does not exist right now, mainly
because speakup is providing that level of detailed access
already.  And here the cat bites its own tail: no real progress is made.
A user-space speakup-alike screen reader would also have
the advantage of being more easily maintainable source-code-wise;
kernel code is not really trivial to write.
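To illustrate, here is a minimal sketch of the screen-scraping half of
such a user-space reader, assuming the Linux /dev/vcsa interface (the
function names are my own, purely illustrative, and error handling is
omitted):

```python
import struct

def parse_vcsa(buf):
    # A /dev/vcsa snapshot starts with a 4-byte header (rows, cols,
    # cursor x, cursor y), followed by one (character, attribute)
    # byte pair per screen cell.
    rows, cols, x, y = struct.unpack("BBBB", buf[:4])
    cells = buf[4:4 + rows * cols * 2]
    lines = []
    for r in range(rows):
        # Take every second byte of this row: the characters,
        # skipping the interleaved attribute bytes.
        chars = cells[r * cols * 2:(r + 1) * cols * 2:2]
        lines.append(chars.decode("cp437", errors="replace").rstrip())
    return (x, y), lines

def read_console(path="/dev/vcsa1"):
    # Needs read permission on the vcsa device (usually root
    # or membership in the right group).
    with open(path, "rb") as f:
        return parse_vcsa(f.read())
```

A real screen reader would of course poll or otherwise watch for console
changes, diff successive snapshots, and hand changed lines to a software
synthesizer; that part is left out here.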

> I don't really think that I personally have to worry about my job
> security. But this is an important issue. California did a survey a
> few years ago that showed that 70% of blind people were unemployed.
> So I would think that there is hardly anything you guys could do that
> is more important than making linux talk.

We need to get speech-only users more involved in the development process
to speed it up.  Volunteers still mostly scratch their own itches, and since
braille displays are standard for blind people around where I live,
speech-only output is really very low on my priority list, no
matter what you think should be important to "us" :-).

P.S. Reflecting on my experiences in the Blind Linux Users' community
over the last 10 years, it's kind of sad to see the user
base split up into two groups: the braille users and the speech
users.  There is very little overlap, which is sad, since
the typical user of accessibility on a Windows machine
around here in Europe always uses both at once: a braille
display and speech feedback.  This sort of unification
of the two modes of working is completely lacking in linux console
mode; it only recently came into existence with applications like
gnopernicus and orca, while those have limitations of their
own given the architecture they are based on.  I only know of
one screen reader, sbl, that offers speech review and braille display
review features at once.  AIUI, sbl was never really a community
project, and mostly got its user base from SuSE users.  I am not
sure it is actively maintained right now.  Also, given its suboptimal
code quality, I would not consider it a future-proof solution right now.
Recently, there have been discussions about adopting such features into
brltty.  In essence, this (if well done) would offer a workable
solution for everyone: speech, braille, and braille-and-speech
users, all in one user-space daemon.
I would really like to see this happen, but I am afraid
things will progress slowly since most brltty developers
don't really use speech output much, so we've got the
old problem again.

If someone was inspired by this mail to go and get
some coding work done (I am an optimist, I know, but I don't give up), I
think I could point you in some useful directions.  Feel free
to drop me a note.

  Mario | Debian Developer <URL:http://debian.org/>
  .''`. | Get my public key via finger mlang@db.debian.org
 : :' : | 1024D/7FC1A0854909BCCDBE6C102DDFFC022A6B113E44
 `. `'
   `-      <URL:http://delysid.org/>  <URL:http://www.staff.tugraz.at/mlang/>
