BoF remote participation
Last Thursday I met up with Konstantinos, Wookey and Codehelp in a pub
in Cambridge, UK [0]. The conversation got around to Wookey’s recent
email thread WRT Debconf Video [1] about how to improve video streaming
for remote Debconf attendees.
Some points of note / things we discussed:
* This is about BoFs but could also be applied to Q&A at the end of a
session
* Latency is the sum of delays from the video camera, SDI capture, mixing
process, output encoding, transcoding and transport, plus the buffer in
the client machine viewing the video.
* For a conferencing system the return stream (the remote participant's
camera and encoder) is out of our control; it then crosses the transport
network before arriving at video-team-controlled hardware for decoding
and mixing.
* The latency of our streams is currently about 20-30 seconds from
source to browser window. That means we would expect a 40-60 second
round trip for a participant (a back-of-the-envelope breakdown follows
this list), which is clearly not suitable for the “Real Time” discussion
that a BoF is intended to provide.
* Typing is also just too slow; teams already have web meetings and it
is understood that a BoF needs to be more interactive. Simply running
the BoF on a video conferencing system wouldn’t work either, because [a]
we want to record it, [b] we want to stream it to a wider audience, and
[c] conferencing systems, at least the free-to-use or FLOSS-based ones,
do not scale to more than a few users.
* One of the reasons that Debconf Video produces a better final video
than some other FLOSS conferences is that we utilise a large number of
people in the process. A typical room uses someone in the vision mixer /
director role, two camera operators, someone on the sound desk and then,
ideally, a room host and a couple of ‘mic runners’. We ask the people
performing the sound & vision mixing roles to monitor IRC and relay any
questions that arise.
* We regularly do not have enough volunteers to fill all the required
operator roles, falling back to one or two people covering everything,
which inevitably means that not all tasks get the attention they need.
* We prioritise participation from within Debconf over participation
from outside.
* Presently we try to record presentations from within 3 rooms: the
main talk room, a 2nd talk room and a 3rd room that may well include BoF
(Birds of a Feather) discussions.
* The Debconf Video Team is in the process of ‘equipping’ the talk rooms
with hardware and will then move on to the BoF rooms.
* Recording BoF discussions is hard – the interaction between many
participants in a room means that there is a lot of camera switching and
mic running.
* As a team we have prioritised presentations over BoFs because they
have a wider audience AND are the simplest to achieve.
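
To put rough numbers on the latency chain mentioned above, here is a
back-of-the-envelope sketch in Python. Every per-stage figure is
invented for illustration; only the 20-30 second one-way and 40-60
second round-trip totals correspond to what we actually observe.

# Back-of-the-envelope latency budget for the current broadcast chain.
# Each per-stage number is a made-up but plausible value for a segmented
# streaming setup; only the totals are meant to line up with the
# observed 20-30 s one-way / 40-60 s round-trip figures.
one_way_stages_s = {
    'camera + SDI capture':          0.1,
    'voctomix mixing':               0.2,
    'output encoding':               1.0,
    'transcoding + segmenting':      6.0,   # a couple of multi-second segments
    'transport / distribution':      2.0,
    'client buffer in the browser': 15.0,   # players buffer several segments
}

one_way = sum(one_way_stages_s.values())
round_trip = 2 * one_way   # the participant hears us, then replies over a similar chain

print(f'one way:    ~{one_way:.0f} s  (observed: 20-30 s)')
print(f'round trip: ~{round_trip:.0f} s  (observed: 40-60 s)')
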
We broke the discussion down into how we could tackle the problem of
interactive BoFs:
* Broadcast / streaming presentations (i.e. existing Debconf Video)
don’t worry about latency: 1:many distribution systems can improve
video & sound quality for the SAME bandwidth by processing a larger
buffer (multiple frames’ worth), whilst a conference system sacrifices
quality to achieve a lower latency.
* Latency from source to the output of our video mix is perhaps a couple
of seconds. Still not “Real Time”, but if we could achieve a
sub-2-second latency in each direction this would be a usable system.
* Ideally a BoF would need fewer operators than a presentation because,
as already stated, we struggle to find enough volunteers to cover all
the existing roles.
* It would be acceptable to separate BoF ‘interactive Participants’ from
‘Viewers’
- ‘Viewers’ would use the existing Debconf streaming service (and
this is how we would record and archive)
- WebRTC might be a way to achieve a reduced latency for ‘participants’.
* It would be acceptable that ‘Interactive Participants’ would be
invited / request access BEFORE the start of a BoF – typically these
would be members of the Team associated with the BoF discussion who are
not physically present at a given Debconf.
* WebRTC feeds can be multicast to limit the required bandwidth
Possible Roadmap to a solution:
[A] Proof of concept
- Add a WebRTC output module to Voctomix – this provides a low latency
output stream (a rough GStreamer sketch follows this list of steps)
- Add a WebRTC input module to Voctomix, limited to 2 instances – one
for each invited participant
- The Voctomix WebRTC input module is selected just like a camera feed;
however, it also carries the WebRTC audio (preset audio mix with the
main audio input?)
- Participants need to know magic IP addresses / ports to use
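
As a very rough illustration of what the proof-of-concept output module
could look like, here is a minimal GStreamer/Python sketch. It assumes
the mixed programme output is available from voctocore over TCP as
Matroska (port 11000 is an assumption here, check the voctocore
configuration), leaves out the SDP/ICE signalling exchange with the
remote browser entirely, and uses placeholder codec and STUN choices
(VP8/Opus, a public STUN server) rather than anything we have decided on.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Take the mixed programme output that voctocore exposes over TCP
# (Matroska-muxed raw audio/video; the port is an assumption) and feed
# it to webrtcbin as a low-latency WebRTC sender.
PIPELINE = """
webrtcbin name=webrtc bundle-policy=max-bundle stun-server=stun://stun.l.google.com:19302
tcpclientsrc host=127.0.0.1 port=11000 ! matroskademux name=demux
demux. ! queue ! videoconvert ! vp8enc deadline=1 ! rtpvp8pay ! queue !
  application/x-rtp,media=video,encoding-name=VP8,payload=96 ! webrtc.
demux. ! queue ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! queue !
  application/x-rtp,media=audio,encoding-name=OPUS,payload=97 ! webrtc.
"""

pipeline = Gst.parse_launch(PIPELINE)
webrtc = pipeline.get_by_name('webrtc')

def on_negotiation_needed(element):
    # A real module would create an SDP offer here, send it to the remote
    # participant over a signalling channel, then apply their answer and
    # ICE candidates.  That whole exchange is omitted from this sketch.
    promise = Gst.Promise.new()
    element.emit('create-offer', None, promise)

webrtc.connect('on-negotiation-needed', on_negotiation_needed)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
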
[B] WebRTC conference server - Source Matrix (Alpha 1)
- Build a (cloud-based?) server to act as the WebRTC endpoint for
participants. Two video feeds are sent from this server to the Voctomix
PC: one containing the matrix of all participants, the other the
currently selected participant.
- When a participant wants to talk they “press a button”; this
highlights the border of their screen in a primary colour, and hence
also the edge of their thumbnail video in the matrix (perhaps with their
position in the queue of people wanting to talk displayed in a corner?)
- If the participant presses the button a second time they remove
themselves from the queue (a small sketch of this queue logic follows
this list).
- Control of the matrix could be done from a web interface running on
the BoF host’s laptop (i.e. the laptop connected to the OPSIS module);
this way the matrix could be shown on the presentation screen as well.
Control could also be done from a web browser running on the Voctomix
PC, i.e. there would need to be support for 2 controllers.
- Participants need to know the URL for the appropriate server
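
A small sketch of the talk-request queue described above; the class and
method names are invented for illustration, and the conference server
would keep one such queue per BoF to drive the highlights and queue
positions shown in the matrix.

class TalkQueue:
    """Hypothetical queue of participants who have pressed the 'I want to talk' button."""

    def __init__(self):
        self._queue = []  # participant ids, in the order they asked to talk

    def toggle(self, participant_id):
        """Button press: join the queue, or leave it if already queued.

        Returns the 1-based queue position (shown in the thumbnail corner),
        or None if the participant just removed themselves."""
        if participant_id in self._queue:
            self._queue.remove(participant_id)
            return None
        self._queue.append(participant_id)
        return len(self._queue)

    def positions(self):
        """Current queue positions, used to annotate the highlighted thumbnails."""
        return {p: i + 1 for i, p in enumerate(self._queue)}

    def next_speaker(self):
        """Called when the director/host selects the next participant."""
        return self._queue.pop(0) if self._queue else None


# Example: two participants ask to talk, the first then changes their mind.
q = TalkQueue()
q.toggle('alice')                    # -> 1, alice's thumbnail gets highlighted
q.toggle('bob')                      # -> 2
q.toggle('alice')                    # -> None, alice leaves the queue
assert q.positions() == {'bob': 1}   # bob has moved up to the front
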
[C] Harden / Make public (RC1)
- Web client / Management interface for C&C system.
- integrate into default room setup
- Management for a room from the ‘known Voctomix PC’ to grant access to
a BoF host (for the duration of a BoF/Talk); a rough sketch of such a
time-limited grant follows this list
- Participants use the Debian SSO system to authenticate, via a client
linked from the Debconf website for each room (without signing on they
can still view the feed but cannot participate)
- Add a ‘Test my feed’ button to confirm that the client is able to send
video / audio to the ‘WebRTC server’ (perhaps make this mandatory before
you can join the matrix of participants?)
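
To illustrate the "grant access to a BoF host for the duration of a
BoF/Talk" point, here is a minimal sketch of a short-lived signed grant
that the ‘known Voctomix PC’ could hand out. The shared secret, field
layout and function names are all hypothetical, and real authentication
of participants would of course go through Debian SSO as described
above.

import base64
import hashlib
import hmac
import time

# Hypothetical secret shared between the Voctomix PC's management
# interface and the WebRTC conference server.
SECRET = b'shared-between-voctomix-pc-and-webrtc-server'

def issue_grant(user: str, room: str, valid_for_s: int = 3 * 3600) -> str:
    """Issue a token tying a user to a room until the BoF slot ends."""
    expires = int(time.time()) + valid_for_s
    payload = f'{user}:{room}:{expires}'.encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + '.' + sig

def check_grant(token: str, room: str) -> bool:
    """Accept only unexpired, correctly signed grants for this room."""
    try:
        payload_b64, sig = token.rsplit('.', 1)
        payload = base64.urlsafe_b64decode(payload_b64)
        user, token_room, expires = payload.decode().split(':')
        expires = int(expires)
    except (ValueError, UnicodeDecodeError):
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and token_room == room
            and expires > time.time())

# Example: the management interface issues a grant for the BoF host,
# and the WebRTC server checks it before letting them control the matrix.
token = issue_grant('bof-host@example.org', 'bof-room-1')
assert check_grant(token, 'bof-room-1')
assert not check_grant(token, 'talk-room-1')
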
Who will do this work?
This need not be done by the video team at large – Konstantinos & Wookey
are volunteering to make this happen, and I shall test and demonstrate
the system with a view to adding it into the Debconf Video systems, if
the proof-of-concept and Alpha 1 demonstrations achieve a usable
solution that is positively received.
Timescales?
Target: proof of concept by the end of the year, and perhaps a trial at
a Mini-Debconf this year...
/Andy
[0] 571d5500-9ffa-0cae-d68d-76fa4a8eee90@debian.org
[1] 3b058f4f-72db-e7ed-0426-78ecef1310da@debian.org