
Video Capture and Streaming Battle Plan (was: Fluendo, DV capture & other bits and pieces)



Helmut Graf Von Moltke wrote:
> No battle plan survives contact with the enemy

that's the spirit! As promised, here's my battle plan, based on the
reconnaissance work undertaken. If people are happy with this in the
next irc meeting, I'll move it to the wiki for good.

Preparations
------------
We need to get the encoding machines (I'll call them encbox[en])
working. We'll want Sarge or Sid and

 - working firewire
 - working gigabit ethernet (ensure the path _is_ Gb)
 - kernel 2.6.12 latest & greatest
 - good X11/X.org video driver with low overhead when blitting a video
under Kino
 - /etc/modules :  raw1394 
 - init script (sketch after this list) that deals with
   + chmod g+rw /dev/raw1394
   + (streaming-only) start the fluendo workers
 - recompile the theora packages with the code from the theora-mmx branch
 - rebuild fluendo packages from ubuntu on debian
 - test that the firewire->fluendo->theora->fluendo->stream->player
chain actually works with the recompiled packages ;)
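
A minimal sketch of that init script, assuming a Debian-style
/etc/init.d layout (the worker invocation is commented out until the
fluendo packages are actually in place):

    #!/bin/sh
    # /etc/init.d/encbox -- firewire perms + (streaming-only) the worker
    case "$1" in
      start)
        modprobe raw1394            # belt and braces next to /etc/modules
        chmod g+rw /dev/raw1394     # let the video group capture
        # streaming-only boxes also start the fluendo worker here, e.g.:
        # flumotion-worker /etc/flumotion/worker.xml &
        ;;
      stop)
        ;;
    esac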

Camera/Mic/Encboxen:
 - Good FW cables, and perhaps secure the connectors in place
(disconnects on camera moves are killing me!)
 - 4.5m is the max allowed length for FW cables
 - The mic input must go through the camera! The consumer cameras we
have only have 1/8" mic inputs, we may need adapters
 - Tripods

The DV capture strategy
-----------------------
We will capture to HD. Each camera eats about 65GB per day (5hs of
presentations @ 13GB/h), so roughly 195GB/day across the 3 cameras. We
will have a storage server with more than 1.6TB of space and Gigabit
Ethernet between the storage server and the encboxen. Even with Gb
ethernet, capturing 3 full DV streams straight over the network would
be asking for trouble -- it'd be really brittle, LAN file storage
isn't meant for realtime stuff.
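
The back-of-the-envelope, for the record:

    13 GB/h * 5 hs           =  65 GB per camera per day
    65 GB * 3 cameras        = 195 GB per day
    1.6 TB / 195 GB per day  ~   8 days of headroom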

The captured videos will be transferred to the storage server with a
combination of network transfer (whatever the server offers,
NFS/SMB/FTP) and FW drive enclosures (which is faster). My estimate:
a day's captures from the 3 cameras (15hs video, ~195GB) move in
under 2hs on a Gb link that actually sustains 30MB/s or better; if
the effective rate sinks to DV's own 3.6MB/s (slow disks, chatty
protocols), it takes the full 15hs, so plan for the worst case.

Ideally these transfers are automated. If they aren't too burdensome,
and we automate with tools that do smart error recovery (rsync for
instance), we can run them while other captures are happening, reniced
to hell.
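
A sketch of what I have in mind (hostnames and paths invented for
illustration; --bwlimit is in KB/s):

    # --partial keeps interrupted files so a re-run resumes cheaply;
    # nice -n 19 so the capture always wins the CPU
    while ! nice -n 19 rsync -av --partial --bwlimit=10000 \
            /captures/ storage:/video/encbox1/ ; do
        sleep 60    # dumb-but-effective recovery: retry until it works
    done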

Camera operation and tools: for direct-to-HD capture, nothing beats
Kino. It does what dvgrab does, and will do rotation of files every
X-time or X-filesize (we can start the LAN transfer as soon as the
file is rotated). It has a good GUI, shows the video being captured,
and I think we can get it to show the sound levels. It has a big
{record} button and a big {stop} button. It even gives half-decent
error msgs if something goes wrong.

We will need to add instructions for camera operators to name the
files they are capturing according to the talk -- but that's as hard
as it gets.
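
For ops who'd rather go headless, the dvgrab equivalent covers both
the rotation and the naming (option spellings from memory -- check
the manpage before trusting me):

    # split into ~1GB chunks; the prefix names the talk so the
    # files sort sanely on the storage server
    dvgrab --size 1024 day2-ballroom-keysigning-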

And what do we do with 1.6TB of video? I don't know ;)

Several people have shown an interest in creating a nice video of the
conference. I think it'll be great if they do that, but I think the
talks aren't the source material they are after. I will bring my
camera, and I'm sure there'll be a few other cameras to shoot more
interesting stuff.

I'm hoping we can get cinelerra working on a high-end workstation so
they can capture and edit on it.

It is possible to process all this video nightly and create theora-ogg
and mpeg files to post next morning for people to download and watch.
If we are going to do that, we have very good support in fluendo to do
it. See my notes below.
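
For the nightly batch, something along these lines could run from
cron on the storage server (the paths are invented; ffmpeg2theora's
defaults are where I'd start):

    # encode every capture that doesn't have an .ogg next to it yet
    for f in /video/*/*.avi; do
        [ -e "${f%.avi}.ogg" ] && continue
        ffmpeg2theora -o "${f%.avi}.ogg" "$f"
    done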
 
Realtime compression and streaming strategy
-------------------------------------------

I am really happy with Fluendo managing the compression and streaming
setup. Its model is to set up:
 
 - a "control server" that centralizes administration
 - each of the encboxen as "worker" machines which have an AV pipeline
 - a "streaming" server which receives all the streams from the
workers and serves them to clients
 - a "admin" app that connects to the control server remotely and
monitors & manages the setup

The roles can be split or joined. For instance, for a demo Fluendo
install you just run everything on the same box. We can choose to have
all the streams come to a central streaming server, or have them
served straight from the encboxen (to clients on the LAN, for
example).

The pipeline can "tee" its output, going both to a local file and to a
remote server (and even to streaming clients served locally).

So the plan I have is:

 - The storage server acts as control server _and_ stream server --
taking advantage of the Gigabit networking (though it shouldn't need
it).
 - Each encbox configured as a worker, processing the video stream at
1 set video quality (more if we have cpu to spare ;), saving the
stream to HD and feeding it to the stream server. We can also offer an
audio-only stream.
 - People working on the video streaming run the admin app (which
connects to the control server) to monitor and manage the running
setup.
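
How the pieces come up, roughly (the config file names and arguments
are assumptions -- check the flumotion docs for the real spelling):

    # on the storage server: the control server reads the planet config
    flumotion-manager /etc/flumotion/planet.xml

    # on each encbox: the worker connects back to the control server
    flumotion-worker /etc/flumotion/worker.xml

    # from wherever: the admin UI, pointed at the control server
    flumotion-admin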

Note that streaming to end-users is optional -- we can use this setup
just to capture and encode in realtime, and be able to offer the talks
in video with a quick turnaround.

If we are doing all this realtime video streaming over the Gigabit
LAN, any transfer of full DV files will have to run with rsync
--bwlimit (as sketched above) so our RT traffic keeps priority.

The streaming doesn't need to be started or stopped at the camera
level. Flumotion is unaware of the camera op starting/stopping Kino,
etc. As long as there's something that looks like a FW video source,
it'll pick it up and stream it. It will be good for the camera op to
have a means of checking that the stream is coming through, however --
possibly from another machine (ask people to tune into the stream if
you don't have a laptop).
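
A one-liner for that check (the host, port and mount point here are
assumptions -- use whatever the streamer actually announces):

    # from any box on the LAN: if this plays, the whole chain is alive
    mplayer -cache 512 http://streamserver:8800/ogg-video/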

We'll want to get the fluendo process started from init -- that should
make the machine pretty painless to operate.

The only tricky part is that Theora as packaged is pretty much useless
for real time encoding. We will need a hand-compiled (or packaged)
theora-mmx for realtime encoding. Unfortunately my fw box is a PPC (I
had the privilege of finding all sorts of strange Fluendo bugs on PPC)
so I haven't been able to build and benchmark theora-mmx.
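
For reference, the rebuild I have in mind once I have i386 hardware
(the branch path is from memory -- verify against svn.xiph.org before
trusting it):

    # grab and build the mmx branch the autotools way
    svn co http://svn.xiph.org/branches/theora-mmx theora-mmx
    cd theora-mmx && ./autogen.sh && make
    # then swap the source into the debian package and rebuild:
    # apt-get source libtheora; drop in the branch; debuild -us -uc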

Theora-mmx is next on my list -- just today I stole a FW PCI card from
the sysadmin's den at Catalyst.

cheers,


martin

