Re: OpenGL and clusters?
>> Adam C Powell IV <email@example.com> writes:
> Well, okay, not "hard to imagine", but you're probably talking > 100
> processors; assembling images is very cheap.
I made that mistake. It's not as easy as it sounds. Compositing has
to be done in a consistent order and the algorithms I came up with eat
network bandwidth like candy. The Stanford people developed a
hardware compositing solution based on DVI which should work nicely.
Look for "Lightning 2" (or pick up the WireGL paper and the references
therein). Software compositing does work, but you'll find some nice
upper limits derived from the available network bandwidth. An IBR
solution is what I'm looking at atm.
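For scale, here is a back-of-envelope sketch of why software compositing hits a network ceiling: in the naive (direct-send, sort-last) case every node ships a full-resolution frame to the compositing node, every frame. The function name and the example numbers are mine, not from the thread:

```python
# Aggregate traffic into the compositing node for naive sort-last
# compositing, where each of N nodes sends a full RGBA frame per
# displayed frame. Illustrative assumption: no compression, no
# depth/alpha tricks to shrink the per-node payload.

def compositing_bandwidth_gbps(nodes, width, height, bytes_per_pixel, fps):
    """Return the aggregate inbound traffic in gigabits per second."""
    bits_per_frame = width * height * bytes_per_pixel * 8
    return nodes * bits_per_frame * fps / 1e9

# 8 render nodes at 1024x768 RGBA, 30 frames per second:
print(compositing_bandwidth_gbps(8, 1024, 768, 4, 30))  # ~6.04 Gb/s
```

At roughly 6 Gb/s for only eight nodes, it is easy to see why a hardware path like Lightning-2, or smarter compositing orders, starts to look attractive.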
> OTOH, if 30 processors with GLX clients are sending stuff to the head
> node for that one CPU to render, that would be quite the bottleneck.
> But it seems like it should be possible to make a generalized render
> farm app which breaks the image into pieces, assigning one CPU to
> each piece, sends GL commands to the right CPU, renders
> distributedly, and assembles the pieces at the head node...
There are two major approaches to this: object-based partitioning and
image-based partitioning. OBP scales nicely wrt data size but is
horrible wrt network bandwidth. IBP doesn't scale easily wrt
data (but a coworker and I were discussing some ways to attack that
just yesterday) but is nicer on the network.
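The image-based scheme in its simplest form just carves the framebuffer into one region per node; each node renders only the geometry overlapping its region, and the head node stitches the regions back together. A minimal sketch (the band-per-node layout and function name are my illustration, not a description of WireGL's actual tiling):

```python
def tile_assignment(width, height, nodes):
    """Split the framebuffer into horizontal bands, one per node --
    the simplest image-based partition. Returns (y0, y1) per node."""
    band = height // nodes
    tiles = []
    for i in range(nodes):
        y0 = i * band
        # the last node absorbs any leftover rows
        y1 = height if i == nodes - 1 else y0 + band
        tiles.append((y0, y1))
    return tiles

print(tile_assignment(1024, 768, 4))
```

Each node only ships its own band to the head node, so per-node traffic shrinks as nodes are added; the catch is that every node may still need a copy of (or access to) the full scene data, which is the data-scaling problem mentioned above.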
> Might scale decently to four or eight CPUs or so...
That's my experience, yes.
> But it sounds like "WireGL" and "Chromium" might take this approach.
WireGL is an N-to-M architecture. TBH I haven't fully understood how
the N-to-1 case scales. The reports I've read lean towards the 1-to-M
case.
> Interesting. Would you draw in the local node's video card, then
> capture the frame(buffer?), and send that to the head for
Marcelo | "Bingeley bingeley beep!"
firstname.lastname@example.org | -- (Terry Pratchett, Feet of Clay)