
Re: quo vadis, video team ?!



On 6/27/05, Holger Levsen <debian@layer-acht.org> wrote:

> > I am still working on the Flumotion track, it's going well --- I will
> > soon be preparing some theora-mmx packages.
> 
> buildable from sarge sources ? (as we might need them on amd64 as well..)

The Hoary package is very similar to the Sarge package; I guess I'll do both. 

> > I tried to get flumotion
> > running ok on Sarge or on Sid, but it's been an enormous timesink.
> > Flumotion needs a toolchain based on Python 2.4 and our freeze caught
> > us with a good 2.3 toolchain.
> 
> ah. So no flumotion on hoary? Or backports for sarge? I don't want to
> use etch, sid or breezy :-)

I don't see many backports for sarge out there. The Python 2.4
toolchain isn't there in etch/sid either. I was trying to port the
different bits I needed, but it is a huge job, methinks.

> What's the flumotion infrastructure ? Which machines in the row/chain ?

Sorry! I meant the laptops. If we do streaming, we'll need a
web-reachable server also running Flumotion.

> > As soon as the Python toolchain moves forward on Sid/Etch, flumotion
> > will fall into place. But we have big fish to fry *now*.
> 
> Yep. 10 days left ;-)

Yes. Oh, by the way, I'm getting there midday of the 7th, rather than
the 8th. An extra day to prepare ;)

> > We need x86 CPUs around 2GHz for flumotion video capture and encoding.
> 
> Please have a look at the machine list at the bottom of
> http://layer-acht.org/fai/fai-at-debconf5/ and tell me what's missing
> (especially if machines are missing / not dedicated to us) and which infos I
> need to add.

Do we know anything more about the laptops? I looked briefly at
Toshiba's website, and the lowest-spec'ed laptop they offer is 1.8
GHz, which should be just enough for video streaming.

> > With a properly tuned kernel, 2GHz works pretty well for capture too
> > (kino/dvgrab). I have good kernel config files we can use with 2.6.12
> > kernels.
> 
> Arg, next problem^Wissue :-) The config is not enough, we need .deb
> packages :-) Can you build them (by July 1st), or can you give us your
> configs ?
> 
> Is 2.6.12 without issues on sarge ? (think udev, devfs, whatever...)

I've used the same config with vanilla Linux kernels, and it's worked
A-OK on Sarge & Hoary. BTW, I did a bit of playing around with Kino on
Sarge, and given the right kernel, it works great.

I'll get those kernel packages built for Sarge, as we have a good
Sarge kernel build environment over here @ Catalyst.

> We'll definitely need to update/create an overview of our setup.

I think we'll figure out some parts when we get there ;) 

> And please remember: our first and most important goal is not live streaming.
> It's a nice option.

Agreed. What I find tempting about Flumotion is to have the files
already done at the end of the day.

> We also need some video editing solution/tools to be able to cut/mix the (pdf)
> slides into the stream^Wrecording - I don't think html-slides are good for
> this, what do you think ?

I am still unconvinced about this. I can't picture any way of editing
it that works well for me as an end-user. The problems I see are:

 + Video compression is awfully inefficient for transmitting slides.
Presentation slides will look bad (video codecs eat away diagrams,
medium/small writing, etc.) and will potentially be huge.

 + To avoid the presentation-as-a-video becoming huge, we have to
provide it as a separate file, so that the codec compresses
one keyframe per slide change. But if we have it as a separate video
file, we'll have all sorts of sync issues at the client end: playing
the "speaker" video and the "slides" video at the same time won't stay
in sync.

 + So we make a double-width video for better user experience and
perfect sync, with slides and speaker side by side. However, the
filesize will be huge, and the quality low, because the codec will be
very confused by the split screen. Movement in the "speaker" half
will trigger full keyframes on the whole image. Slide changes will
trigger full keyframes on the whole image. Color balance will probably
be messed up. And the image quality of the slides will be _low_.

Good codecs try to be smart about what's on screen. By having a split
screen with radically different types of images we prevent the codec
from doing a good job.

The guys doing the "speakers boot camp" seemed to be happy with the
concept of asking the speakers to provide verbal cues ("and on the
next slide..."), which will also be useful if we do a vnc capture.

And, as a user, it gives me freedom to print the slides, or read them
with good quality on-screen and generally move around them as I see
fit. No text ever comes through right on compressed-for-web video.

[Sorry guys about the rant -- if someone has a better plan of how the
"splice slides into the video" things would work, and it makes sense,
please post it -- I'll be happy to be proven wrong on this count.]

> > drive-- but we shouldn't need to use the drives sneakernet style. The
> > laptops have network, right?
> 
> yes.

Ok, a few failsafe shell scripts around rsync will do the trick then ;)
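Something along these lines -- a sketch only, with placeholder retry
counts and no real paths; the actual scripts will depend on the setup
on site:

```shell
#!/bin/sh
# failsafe_rsync: retry an rsync transfer until it succeeds, resuming
# partial files between attempts. Hypothetical helper, not a script
# that exists yet.

failsafe_rsync() {
    src=$1; dest=$2; tries=${3:-5}
    i=1
    while [ "$i" -le "$tries" ]; do
        # --partial keeps half-transferred files so a retry can resume them
        if rsync -a --partial "$src" "$dest"; then
            return 0
        fi
        echo "rsync failed (attempt $i/$tries), retrying" >&2
        sleep 2
        i=$((i + 1))
    done
    echo "giving up after $tries attempts" >&2
    return 1
}
```

Cron could then call something like
`failsafe_rsync /var/footage/ server:/srv/footage/` every few minutes,
and a flaky network link only delays the copy instead of losing it.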

cheers,

martin


--
To unsubscribe, send mail to debconf5-video-unsubscribe@lists.debconf.org.

