
Pulseaudio, speech-dispatcher, and console + graphical screen readers



Christian Schoepplein <chris@schoeppi.net> writes:

> On Fri, Aug 28, 2015 at 12:33:41AM +0200, Samuel Thibault wrote:
>>As discussed during DebConf15, we target enabling the accessibility
>>stack by default.  I've studied that a bit more, here are my thoughts:
>
> Discussing accessibility of Debian should not be reduced to only the 
> issues you mentioned, Samuel; there are a few more things that need to 
> be worked out, I think.

> For example, the way pulseaudio, speech-dispatcher and all of this 
> stuff needs to be configured to work properly when orca and a 
> console screen reader are used.

Of course, there are plenty of issues that need to be worked out.
However, this thread is trying to deal with a rather specific topic,
namely, pushing for accessibility stacks to be enabled by default in
graphical toolkits.  I have changed the subject of your thread to keep
matters separated and avoid hijacking that other thread, which is also
important to work on.

> I installed a fresh Debian 8 a few months ago and it wasn't really
> easy; especially for beginners this stuff isn't manageable, IMHO. Now
> pulseaudio and speech-dispatcher are running in system mode and
> everything is working well, but that kind of setup is not wanted for
> security reasons.

From my own experience, I am tempted to agree with you that far too
many manual configuration interventions are needed for certain
accessibility features to cooperate.  However, could you be a little
more specific?  Particularly:

 * What sort of setup were you aiming to achieve?
 * What was particularly difficult to figure out?
 * What did you have to change, from the default, to get what you want?

> So what about these things? Since pulseaudio is used as the default 
> for sound output, the barriers to using Debian with some setups have 
> increased...

I agree, pulseaudio has been a source of many bugs in the user
experience.  I myself remember several incidents where ALSA soft mixing
(which is, AFAIU, supposed to be the default) failed and pulseaudio
effectively prevented a console ALSA application from gaining access to
the sound card.  I admit that I was not investigative enough in these
situations, and mostly just ended up "apt-get remove"'ing pulseaudio.  I
know this is no solution, and I am guilty of having given up on that
front in the past.
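
For anyone hitting the same symptom, here is a minimal diagnostic
sketch (assuming the psmisc, alsa-utils and pulseaudio-utils packages,
which provide fuser, aplay and pasuspender) for checking who actually
holds the sound card before resorting to removing anything:

  # List processes that currently have the ALSA device nodes open; a
  # pulseaudio process showing up here explains why a console ALSA
  # application cannot open the card directly.
  fuser -v /dev/snd/*

  # Ask PulseAudio (if it is running) which sinks it has claimed.
  pactl list short sinks

  # Temporarily suspend PulseAudio's hold on the card instead of
  # uninstalling it, so a console application can open hw:0.
  pasuspender -- aplay /usr/share/sounds/alsa/Front_Center.wav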

The Linux audio ecosystem is sort of a nightmare.  For a long time, all
we had was OSS (/dev/dsp).  Then came ALSA, which resulted in a long
period of programs either missing ALSA support, or ALSA support not
quite working.  That was the time when you were still supposed to buy a
soundcard with hardware mixing, so that several programs could talk to
the soundcard at the same time.  At some point, ALSA soft mixing support
was mature enough that some distributions started to enable it by
default.  However, Linux also has other audio servers, like JACK, and
later PulseAudio.  JACK is for the pro audio people, a really nice
realtime audio routing engine.  However, if you use JACK, you also lose
ALSA soft mixing, because the JACK server opens the soundcard in such a
way that soft mixing can no longer work (or at least, it usually
cannot).  Once JACK was really stable, doing a lot of things, and many
Linux audio applications had added direct support for it, the PulseAudio
team decided to do yet another audio server, specifically targeted at
the Linux desktop (whatever that may be).  The JACK people were not
happy, but PulseAudio was somehow pushed into existence.  So now a Linux
audio application is more or less supposed to implement direct support
for ALSA, plus support for two different audio servers, JACK and
PulseAudio.  If it is portable across UNIXes, it is likely to have
/dev/dsp support as well.  This is just unrealistic, so the reality is
that most applications support only a subset of these mechanisms.
Some of the scenarios a user might encounter are simply incompatible,
such as using JACK as the primary audio server while using other legacy
ALSA applications at the same time.  It has also appeared to me on
several occasions that PulseAudio sometimes makes legacy ALSA
applications unusable.
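
One possible mitigation for that last point, sketched here under the
assumption that the libasound2-plugins package (which provides the
ALSA-to-PulseAudio plugin) is installed, is to route the ALSA default
device through PulseAudio, so that legacy ALSA applications end up in
the same mixer as everything else:

  # ~/.asoundrc (or /etc/asound.conf system-wide): send the ALSA
  # "default" device through the pulse plugin, so legacy ALSA
  # applications share the card with PulseAudio clients.
  pcm.!default {
      type pulse
  }
  ctl.!default {
      type pulse
  }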

We need a pulseaudio expert to chime in on this.  Or someone from our
group needs to become one.

From the POV of the Accessibility Team, our goals are easily summarized:

 * Configure whatever audio system is in use automatically, loading
   necessary modules, starting user-space support and ensuring the
   primary audio output channel levels are up, *not* muted.  This needs
   to happen independently of the desktop in use, and should also work
   if no graphical environment has been started at all.

 * Make sure that the speech synthesis backend of the screen reading
   solution does not block any other audio applications.  In the
   simplest case, desktop sounds should still work while the speech
   synthesizer is talking.  (A rough sketch of both points follows
   below.)
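
As a rough sketch of what "unmuted channels" and "non-blocking speech
output" could mean in practice (the mixer control names and the
configuration paths below are assumptions and vary between cards and
installations):

  # Raise and unmute the primary ALSA output; "Master" and "PCM" are
  # typical control names, but they differ between cards.
  amixer -q sset Master 85% unmute
  amixer -q sset PCM 85% unmute

  # In /etc/speech-dispatcher/speechd.conf (or the per-user copy under
  # ~/.config/speech-dispatcher/), letting speech-dispatcher output
  # through PulseAudio rather than opening the ALSA device exclusively
  # keeps desktop sounds audible while the synthesizer is talking:
  AudioOutputMethod "pulse"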

We need to identify common use cases which currently have problems in
any of these areas.  If you know of any, speak up.

-- 
CYa,
  ⡍⠁⠗⠊⠕


