
Re: Activation of Speech Dispatcher under squeeze



On 24.9.2010 02:11, Samuel Thibault wrote:
> Root can output text to
> ordinary users at will, nothing bars him from doing that either on the
> linux console, or X11, etc.  Why should audio be different?

Do you mean that root should be able to connect to other
users' sessions and do whatever he likes there? I think it
might be useful in some cases, but as far as I know it is
not (yet?) supported in ConsoleKit.

This is not as easy as just writing audio to root's default
soundcard. Root would need to know which audio device is the
proper one for the target user, and that is assigned
dynamically in PulseAudio, AFAIK.

I don't think it is Speech Dispatcher's task to query, for
every user, their current audio device setup, listen for sink
changes and fallbacks in PulseAudio, etc. This sounds difficult!
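
Just to illustrate what such a per-user query would involve,
here is a rough Python sketch. It is only an assumption of
mine, not anything Speech Dispatcher does: the socket path and
the need for the user's authentication cookie depend on the
distribution and PulseAudio version, and a real implementation
would use libpulse (pa_context_get_sink_info_list,
pa_context_subscribe) rather than calling pactl.

  import os
  import subprocess

  def list_user_sinks(uid):
      # Guess where the target user's PulseAudio native socket
      # lives; this is exactly the kind of dynamically assigned,
      # per-user detail discussed above.  Root would also need
      # that user's ~/.pulse-cookie to be allowed to connect.
      env = dict(os.environ,
                 PULSE_SERVER="unix:/run/user/%d/pulse/native" % uid)
      out = subprocess.check_output(
          ["pactl", "list", "short", "sinks"], env=env)
      # Each output line: index, name, driver, sample spec, state.
      return [line.split("\t")[1]
              for line in out.decode().splitlines()]

  if __name__ == "__main__":
      print(list_user_sinks(1000))   # 1000 is a placeholder uid

And even then, the result is only a snapshot; sink changes and
fallbacks would still have to be tracked as events.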

I think it is much easier if Speech Dispatcher just uses
what is available in the given session and doesn't care
about it too much.

If there is a requirement that root be able to do things in
such a session, because he is the superuser, I think it's
better to solve this outside Speech Dispatcher, by allowing
him to become part of that session for a moment.

> The contrary is however true: a normal user shouldn't be able to output
> sound if he's not in front of the machine.  Only root should be able to.

Yes.

> > I think BrlTTY needs to have a better notion of users and sessions
> > and then it will interconnect well with Pulse Audio, DBUS and the
> > other desktop technologies.
> How do you log in?  Which user is supposed to provide the audio
> feedback?

In discussions on the PulseAudio mailing list, a concept of
an 'idle session' was proposed. This is a session that exists
on the local computer when no other user is currently claiming
any active session there. I think this still needs to be
specified more precisely, though.

> > Speech Dispatcher is just an intermediate who can't do much about it.
> Well, the issue is that we're here mixing unix users with physical
> users. What we really want is to identify the real user, while
> pulseaudio/dbus/brltty can only deal with unix users.

To be more precise, PulseAudio/DBus deals with sessions, not
users. I still think this makes the situation simpler if we
consider all the relations and consequences (as I pointed out
above with the migration of audio sinks). Software components
such as Speech Dispatcher simply don't have to care about all
the complexities of the various attributes of user sessions,
of the security policies between them, etc.

> And another issue of course is that to actually be able to read the linux console you
> need to be root.

Only if you directly open and own the /dev device. We know
all the problems related to this. It would be much better to
use HAL/udev for the access (I'm not very familiar with the
current state of hardware access technologies); then, if the
policy is set properly, every user could see his own console
and BrlTTY would not need to run as root.
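
For illustration, here is a minimal Python sketch of what
"directly opening the /dev device" means for a console reader.
The /dev/vcsa layout (4-byte header with lines, columns and
cursor position, then character/attribute pairs) is as I
understand it, and the cp437 character set is my assumption,
not anything taken from BrlTTY. By default only root can open
this device, which is exactly the access problem; a udev policy
giving a dedicated group read access would avoid running the
whole reader as root.

  import struct

  def read_console(path="/dev/vcsa1"):
      # Dump the text of the first Linux virtual console.
      with open(path, "rb") as f:
          lines, cols, _x, _y = struct.unpack("BBBB", f.read(4))
          rows = []
          for _ in range(lines):
              cells = f.read(cols * 2)   # (character, attribute) pairs
              rows.append(cells[0::2].decode("cp437", "replace"))
          return "\n".join(rows)

  if __name__ == "__main__":
      print(read_console())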

> > To prevent concurent speech, in the users session model, Speech
> > Dispatcher will use ConsoleKit in the 0.8 release. This is not to
> > say that only one will be active at a time. As many will be active
> > as many active sessions there are (same computer, multiple seats etc.)
> Does consolekit handle sound cards associated with keyboard/mouse/screen
> sets?

I don't think Speech Dispatcher needs to care about this.
This is a task for PulseAudio, and I think they are trying to
resolve it properly. ConsoleKit itself has a notion of a
"seat", which I think is what you are talking about.
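
To give an idea of the kind of question Speech Dispatcher
would ask ConsoleKit, here is a rough dbus-python sketch. The
interface and method names are how I understand the ConsoleKit
D-Bus API and should be checked against the installed version;
this is not code from Speech Dispatcher 0.8.

  import dbus

  bus = dbus.SystemBus()
  manager = dbus.Interface(
      bus.get_object('org.freedesktop.ConsoleKit',
                     '/org/freedesktop/ConsoleKit/Manager'),
      'org.freedesktop.ConsoleKit.Manager')

  # One speaking instance per active session: the same machine
  # with multiple seats has multiple active sessions at once.
  for seat_path in manager.GetSeats():
      seat = dbus.Interface(
          bus.get_object('org.freedesktop.ConsoleKit', seat_path),
          'org.freedesktop.ConsoleKit.Seat')
      for session_path in seat.GetSessions():
          session = dbus.Interface(
              bus.get_object('org.freedesktop.ConsoleKit', session_path),
              'org.freedesktop.ConsoleKit.Session')
          print(seat_path, session_path,
                'active' if session.IsActive() else 'inactive',
                'uid=%d' % session.GetUnixUser())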



> > GDM runs in its own session, so there is no problem. Speech Dispatcher
> > gets started, when GDM tries to use it.
> What about console screen login?  We'll have to patch
> login/agetty/mgetty/*tty, as well as kdm/xdm/...?  It's really painful
> that it has become an _obligation_ in nowaday's pulseaudio mind.

These are already patched upstream to register their sessions with ConsoleKit, AFAIK.
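
Once the login session is registered that way (through the
ConsoleKit PAM hook, as I understand it), anything started from
that console login can look up its own session. A short
dbus-python sketch, again with method names as I understand the
ConsoleKit API:

  import dbus

  bus = dbus.SystemBus()
  manager = dbus.Interface(
      bus.get_object('org.freedesktop.ConsoleKit',
                     '/org/freedesktop/ConsoleKit/Manager'),
      'org.freedesktop.ConsoleKit.Manager')

  # The session the calling process belongs to, if the login
  # program registered one for it.
  session_path = manager.GetCurrentSession()
  session = dbus.Interface(
      bus.get_object('org.freedesktop.ConsoleKit', session_path),
      'org.freedesktop.ConsoleKit.Session')
  print(session_path, 'active:', bool(session.IsActive()))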

> > Work is also needed in BrlTTY and Speakup.
> Report is needed to actually get anything done.  AFAIK, neither BrlTTY
> nor speakup people were involved in any pulseaudio discussion about all
> this.

Neither were we; we only got involved after PulseAudio was
introduced and Speech Dispatcher basically stopped working for
many users. This is sad, of course, and I still think that
PulseAudio was introduced into distributions prematurely.
I apologize to our users for the inconvenience, but by now
we have basically got over it in Speech Dispatcher.

Though that premature change made us quite angry, I must admit
that the direction is correct. And it is not unique to
PulseAudio; DBus took the same path much earlier.

So yes, I think we need to coordinate our efforts across the
accessibility projects and work towards the same goal. If we
manage it, our technologies will be much improved in the end.

On the Speech Dispatcher mailing list, we are already working
on a roadmap for Speech Dispatcher. Do you think it would make
sense to join forces with our developers and draw up a broader
roadmap that also covers BrlTTY and Speakup?

Another thing: we will have an opportunity to talk about this
at the AEGIS conference early next month, where we will speak
with the GNOME accessibility developers about it.

Best regards,
Hynek Hanke


