
Re: OpenGL and clusters?



Eray Ozkural (exa) wrote:


On Tuesday 04 December 2001 17:43, Jonathan D. Proulx wrote:

Hi,

right off the bat let me say I don't grok GL...

But I have a number of users developing interactive visualization apps
that are heavy on GL, mostly medical imaging stuff.

Anyone have experience or pointers on using a cluster with this type
of app?

Not as yet, no. Illuminator (was PETScGraphics) 0.3 will have this capability, when I get around to doing it. The plan is for each node to use something like evas, or perhaps imlib2 (since it's not X-display-dependent), to render its local data into an image with transparency, then send the images to the head node for simple layering.
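To make that concrete, here's a minimal sketch of the gather-and-layer step, assuming each node has already rendered its local data into a fixed-size RGBA buffer (via evas, imlib2, whatever). All the names here are mine, not anything actually in Illuminator; compile with mpicc -std=c99:

#include <mpi.h>
#include <stdlib.h>

#define W 640
#define H 480

/* Hypothetical stand-in for per-node rendering with evas/imlib2. */
static unsigned char *render_local_layer(void)
{
    unsigned char *buf = calloc((size_t)W * H * 4, 1);
    /* ... draw this node's piece of the data set into buf (RGBA) ... */
    return buf;
}

/* Straight "over" operator: layer src on top of dst. */
static void composite_over(unsigned char *dst, const unsigned char *src)
{
    for (long i = 0; i < (long)W * H * 4; i += 4) {
        unsigned a = src[i + 3];
        for (int c = 0; c < 3; c++)
            dst[i + c] = (unsigned char)
                ((src[i + c] * a + dst[i + c] * (255 - a)) / 255);
        dst[i + 3] = 255;   /* the assembled frame is opaque */
    }
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    unsigned char *local = render_local_layer();
    unsigned char *all = NULL;
    if (rank == 0)
        all = malloc((size_t)size * W * H * 4);

    /* One collective ships every node's layer to the head node. */
    MPI_Gather(local, W * H * 4, MPI_UNSIGNED_CHAR,
               all,   W * H * 4, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        unsigned char *frame = calloc((size_t)W * H * 4, 1);
        for (int r = 0; r < size; r++)   /* naive back-to-front order */
            composite_over(frame, all + (size_t)r * W * H * 4);
        /* ... hand frame to the display ... */
        free(frame);
        free(all);
    }
    free(local);
    MPI_Finalize();
    return 0;
}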

Seems like a big loser in my current config.  WireGL and Chromium
look interesting, but they're almost exclusively targeted at tiled
displays, with some memory-leaky code geared toward distributed
rendering to a single display.

You shouldn't expect an out-of-the-box solution that turns serial OpenGL visualization code into parallel visualization code.

You don't get performance unless every step of your processing runs in parallel. That means parallelizing just one step (and probably roughly, as "tiled display" suggests) won't help.
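To put rough numbers on that (mine, purely for scale): Amdahl's law gives a speedup of 1 / ((1 - p) + p/N) on N processors when a fraction p of the work is parallelized. If rendering is half your run time and you parallelize only that, 30 nodes buy you 1 / (0.5 + 0.5/30) =~ 1.9x, and an infinite cluster tops out at 2x.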

Actually, if the display merely needs to put together a bunch of images, even layering them, it's hard to imagine a cluster large enough for that to stop working decently. (Well, okay, not "hard to imagine", but you're probably talking > 100 processors; assembling images is very cheap.) This is how Illuminator will work.
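Back-of-envelope on the assembly cost (again, my numbers, not measured): an "over" blend of one 640x480 RGBA layer touches about 300k pixels; at, say, 10-20 cycles a pixel on a ~1 GHz CPU that's a few milliseconds per layer, so the head node can stack dozens of layers per frame before compositing, rather than rendering, becomes the limit.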

OTOH, if 30 processors with GLX clients are sending stuff to the head node for that one CPU to render, that would be quite the bottleneck.

But it seems like it should be possible to make a generalized render-farm app which breaks the image into pieces, assigns one CPU to each piece, sends GL commands to the right CPU, renders distributedly, and assembles the pieces at the head node... Might scale decently to four or eight CPUs or so... Not that I know of such a thing... But it sounds like WireGL and Chromium might take this approach.
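For what it's worth, the routing step of such a thing is easy to sketch. This toy (my own illustration, not anything out of WireGL or Chromium) maps a primitive's screen-space bounding box to the ranks owning the tiles it touches, i.e. the ranks that need to receive its GL commands:

#include <stdio.h>

#define TILES_X 2            /* 2x2 grid -> 4 render ranks */
#define TILES_Y 2
#define SCREEN_W 1024
#define SCREEN_H 768

/* Mark every rank whose tile overlaps the primitive's screen-space
 * bounding box; those ranks get the GL commands for this primitive. */
static void ranks_for_bbox(int x0, int y0, int x1, int y1,
                           int dest[TILES_X * TILES_Y])
{
    int tw = SCREEN_W / TILES_X, th = SCREEN_H / TILES_Y;
    for (int ty = 0; ty < TILES_Y; ty++)
        for (int tx = 0; tx < TILES_X; tx++) {
            int overlaps = x0 < (tx + 1) * tw && x1 >= tx * tw &&
                           y0 < (ty + 1) * th && y1 >= ty * th;
            dest[ty * TILES_X + tx] = overlaps;
        }
}

int main(void)
{
    int dest[TILES_X * TILES_Y];
    ranks_for_bbox(500, 300, 600, 400, dest);  /* straddles all 4 tiles */
    for (int i = 0; i < TILES_X * TILES_Y; i++)
        printf("rank %d: %s\n", i, dest[i] ? "send" : "skip");
    return 0;
}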

Is what I want possible, or is OpenGL inherently limited by the
capabilities of the machine the display is connected to?

No hardware accel == no performance for OpenGL.

Typically you don't put gfx h/w on a Beowulf node. But if you do, you can take advantage of it with a parallel algorithm.

Interesting. Would you draw on the local node's video card, then capture the frame(buffer?), and send that to the head for assembly/layering?
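Something like this, I imagine (a hypothetical node-side sketch, assuming a current GL context from GLX and MPI already initialized; error handling omitted):

#include <GL/gl.h>
#include <mpi.h>
#include <stdlib.h>

#define W 640
#define H 480
#define TAG_FRAME 42

void capture_and_send(void)
{
    unsigned char *buf = malloc((size_t)W * H * 4);

    glFinish();                           /* make sure rendering is done */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* tightly packed rows */
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, buf);

    /* Head node (rank 0) layers/assembles the incoming frames. */
    MPI_Send(buf, W * H * 4, MPI_UNSIGNED_CHAR, 0, TAG_FRAME,
             MPI_COMM_WORLD);
    free(buf);
}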

Zeen,
--

-Adam P.

GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Welcome to the best software in the world today cafe! <http://lyre.mit.edu/%7Epowell/The_Best_Stuff_In_The_World_Today_Cafe.ogg>




