
Re: OpenGL and clusters?



-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Adam,

On Wednesday 05 December 2001 04:44, Adam C Powell IV wrote:
>
> Not as yet, no.  Illuminator (was PETScGraphics) 0.3 will have this
> capability, when I get around to doing it.  The plan is to have each
> node use something like evas, or perhaps imlib2 (since it's not
> X-display-dependent), to render local data into an image with
> transparency, then send the images to the head node for simple layering.
>

That sounds really good. It'd be great if each node could render its local data. 
Since you'd already be using PETSc to distribute the data, the visualization 
routines wouldn't need any communication except the final image composition, 
which is cheap.
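That final layering step can be sketched as an "over" composite of RGBA images, one per node (a minimal Python sketch with plain pixel tuples; the non-premultiplied-alpha convention and the front-to-back layer order are my assumptions, not something fixed by Illuminator):

```python
def over(front, back):
    """Composite one RGBA pixel over another (non-premultiplied alpha).

    Each pixel is (r, g, b, a) with components in [0.0, 1.0].
    """
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    def blend(f, b):
        return (f * fa + b * ba * (1.0 - fa)) / out_a
    return (blend(fr, br), blend(fg, bg), blend(fb, bb), out_a)

def composite_layers(layers):
    """Layer same-sized RGBA images; layers[0] is the frontmost.

    Each image is a flat list of pixels (row-major order assumed).
    """
    result = layers[0]
    for layer in layers[1:]:
        result = [over(f, b) for f, b in zip(result, layer)]
    return result
```

The per-pixel work is trivial, which is why the head node's share of the job stays cheap even as the number of layers grows.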

> >There shouldn't be an out-of-the-box solution to turn serial OpenGL vis.
> > code to parallel visualization code.
> >
> >You don't get performance unless you do every step of your processing in
> >parallel. Means parallelizing one step (and probably in a rough way as
> >suggested by a "tiled display") won't help.
>
> Actually, if the display merely needs to put together a bunch of images,
> even layering them, it's hard to imagine a large enough cluster to work
> decently.  (Well, okay, not "hard to imagine", but you're probably
> talking > 100 processors; assembling images is very cheap.)  This is how
> Illuminator will do this.
>

I think image-space composition will work if you can get the load balance and 
communication volume right. I didn't really mean >100 processors; it would be 
difficult enough to do at >8 processors.

What I meant was: if you have an arbitrary serial OpenGL code that visualizes 
some dataset, you can't parallelize it unless you distribute the data, 
localize certain computations, and so on.

> OTOH, if 30 processors with GLX clients are sending stuff to the head
> node for that one CPU to render, that would be quite the bottleneck.
>

That would be a bottleneck, but it depends on the application. If the number 
of polygons is not huge (say, with some multi-resolution optimization done at 
each client node) and the prior computation dominates the running time (CFD, 
etc.), then it might be okay. But I imagine it wouldn't be very scalable.
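A back-of-envelope comparison makes the tradeoff concrete. All numbers below are illustrative assumptions (node count, bytes per triangle, frame size), not measurements:

```python
# Hypothetical setup: 30 client nodes sending to one head node.
nodes = 30
bytes_per_triangle = 32            # rough packed-vertex estimate, an assumption
image_bytes = 1024 * 768 * 4       # one fixed-size RGBA frame, ~3 MB

# Geometry traffic grows with scene complexity; image traffic is constant.
for triangles in (10_000, 1_000_000):
    geometry = nodes * triangles * bytes_per_triangle
    images = nodes * image_bytes
    print(f"{triangles:>9} tris/node: geometry {geometry / 1e6:7.1f} MB, "
          f"images {images / 1e6:7.1f} MB per frame")
```

At small polygon counts, shipping geometry to one rendering node can actually be cheaper than shipping images; past a certain complexity the geometry stream swamps it, which is the scalability worry above.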

> But it seems like it should be possible to make a generalized render
> farm app which breaks the image into pieces, assigning one CPU to each
> piece, sends GL commands to the right CPU, renders distributedly, and
> assembles the pieces at the head node...  Might scale decently to four
> or eight CPUs or so...  Not that I know of such a thing...  But it
> sounds like "WireGL" and "Chromium" might take this approach.
>

Yes, certainly. With one thing taken into account, this should be feasible: the 
code should be a parallel visualization framework with hooks for computing 
the actual image distribution, so that the user can tune it for the 
application. You can't just impose a row-wise, checkerboard-block, or cyclic 
partitioning. It would be wise, though, to provide common partitioning 
schemes from which the user could select, with the option to override them.
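The common schemes mentioned above can be expressed as tile-to-process maps, which is also the natural shape for a user-overridable hook (a minimal Python sketch; the tile grid and the process-grid dimensions are assumptions for illustration):

```python
def row_partition(rows, cols, nprocs):
    """Row-wise: contiguous horizontal bands of tiles, one band per process."""
    return {(r, c): r * nprocs // rows
            for r in range(rows) for c in range(cols)}

def checkerboard_partition(rows, cols, pr, pc):
    """Checkerboard (2D block): a pr-by-pc process grid of rectangular blocks."""
    return {(r, c): (r * pr // rows) * pc + (c * pc // cols)
            for r in range(rows) for c in range(cols)}

def cyclic_partition(rows, cols, nprocs):
    """Cyclic: deal tiles out round-robin in row-major order."""
    return {(r, c): (r * cols + c) % nprocs
            for r in range(rows) for c in range(cols)}
```

A framework could accept any function with this signature as the distribution hook, so an application whose rendering load is concentrated in one region of the image can supply its own map instead.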

> >>Is what I want possible or is OpenGL inherently limited by the
> >>capabilities of the machine the display is connected to?
> >
> >No hardware accel == no performance for OpenGL.
> >
> >Typically you don't put gfx h/w on a beowulf node. But if you do, you can
> >take advantage of that with a parallel algorithm.
>
> Interesting.  Would you draw in the local node's video card, than
> capture the frame(buffer?), and send that to the head for
> assembly/layering?

I've had this idea for a while, but of course I don't have the hardware to 
try it. The people who do parallel visualization, like you intend to, 
implement their own rendering routines; a software OpenGL implementation 
would be prohibitively slow. But if you have hardware OpenGL at each node, 
you can simply render into a buffer and parallel-reduce-composite the images 
for display at the interface node. It might be worthwhile, since you could do 
very complex renderings.
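The "parallel reduce" compositing can be sketched as a pairwise tree reduction over per-node depth images, pairing nodes off in log2(n) rounds (a minimal single-process Python sketch; representing each image as a flat list of (depth, color) pixels, and depth-based merging rather than alpha blending, are assumptions for illustration):

```python
def z_composite(a, b):
    """Merge two same-sized (depth, color) images, keeping the nearer sample."""
    return [pa if pa[0] <= pb[0] else pb for pa, pb in zip(a, b)]

def tree_reduce(images):
    """Pairwise reduction: each round halves the number of active images.

    On a real cluster each round would be one exchange between node pairs,
    so the reduction finishes in about log2(n) communication steps.
    """
    while len(images) > 1:
        merged = [z_composite(images[i], images[i + 1])
                  for i in range(0, len(images) - 1, 2)]
        if len(images) % 2:          # odd image out passes through unchanged
            merged.append(images[-1])
        images = merged
    return images[0]
```

The logarithmic depth is what keeps the interface node from becoming the bottleneck that a naive all-to-one gather would make it.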

I haven't written OpenGL code in ages, but I guess there is a routine for 
reading the rendered image back from such a buffer (glReadPixels, if I 
remember correctly). Are there OpenGL man pages in Debian?

Cheers,

- -- 
Eray Ozkural (exa) <erayo@cs.bilkent.edu.tr>
Comp. Sci. Dept., Bilkent University, Ankara
www: http://www.cs.bilkent.edu.tr/~erayo
GPG public key fingerprint: 360C 852F 88B0 A745 F31B  EA0F 7C07 AE16 874D 539C
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE8Dh6kfAeuFodNU5wRAsigAKCOOYma1mevFPatDB1P38CGIN+AtQCgg2SI
17U1NQO/fj6NV3YopVYMd5U=
=bO9f
-----END PGP SIGNATURE-----


