
Re: Distributed visualization anyone?



Adam C Powell IV wrote:
> 
> "Eray Ozkural (exa)" wrote:
> 
> > > About unstructured grids: I think you can't do that with PETSc because
> > > PETSc has no idea of sparsity.
> 
> Hmm, I'm not quite sure what you mean by "sparsity".  PETSc has some very nice
> distributed sparse matrix solvers, though it really doesn't have an object which
> is dynamically resizable on each node, which would be helpful for an
> unstructured grid.  This is one reason I haven't done the communications part
> yet, that is, sending all of the triangle data to node 0.
> 

Well, what I meant by sparsity is that you would need a distributed
data structure for computing the workload and communication volume that
a given distribution of the irregular grid over the processors would
require. (I know it sounds kind of twisted.)

That is, the distributed data structure (which would be a graph or a
hypergraph) carries enough information to compute the cost that a given
data distribution (and replication) would incur.

The objectives are naturally load balancing and minimization of
communication volume. Of the two, minimizing total communication volume
seems to be the more important objective here.
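To make that concrete, here is a toy Python sketch of the hypergraph
model (all the data is made up): vertices are grid cells, each net is a
shared data item listing the cells that need it, and the communication
volume of a candidate distribution is the standard connectivity-1
metric, i.e. each net costs (number of parts it spans - 1).

```python
# Toy hypergraph model of an irregular grid (hypothetical data).
# Vertices = grid cells; each net = a shared data item (e.g. a mesh
# node) listing the cells that need it.
nets = {
    "n0": [0, 1, 2],
    "n1": [2, 3],
    "n2": [0, 3, 4],
    "n3": [4, 5],
}

# A candidate distribution: cell -> processor.
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

def comm_volume(nets, part):
    """Connectivity-1 metric: each net costs (#parts it spans - 1)."""
    total = 0
    for cells in nets.values():
        spans = {part[c] for c in cells}
        total += len(spans) - 1
    return total

def load_balance(part):
    """Cells per processor, to check the balance constraint."""
    loads = {}
    for c, p in part.items():
        loads[p] = loads.get(p, 0) + 1
    return loads

print(comm_volume(nets, part))  # nets n1 and n2 are cut -> volume 2
print(load_balance(part))       # both processors hold 3 cells
```

A partitioner's job is then to pick `part` so that `comm_volume` is
minimized while `load_balance` stays within some tolerance.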

> On the other hand, one can create the triangle data on each node, then create a
> vector with the different sizes on each node to handle the data, send everything
> to node 0, and destroy the vector.  It's inelegant, but should work.  I think
> I'll try that.  I'll try to put up 0.1 this weekend before implementing this
> (just got some showstopper automake/PETSc issues settled), then do this for 0.2.
> 

If you do as much of the shading as possible at the compute nodes and
then send the results to the interface node (the Beowulf master), you
should be able to handle regular grids this way.

> > A friend of mine did parallel volume
> > visualization on irregular grids but that requires a communication
> > volume model which is a hypergraph and a hypergraph partitioner to
> > reach the desired effect; anything else would be wasting the hardware.
> 
> You're absolutely right.  PETSc has an interface with ParMETIS for this
> purpose.  I don't know much about it or its data structures at this point, but
> plan to learn soon, and maybe package ParMETIS for Debian, and make at least
> petsc2.0.29-dev depend on it...
> 
> (Not that PETSc is the only thing out there, but I don't know of other
> Newton-Krylov solvers which scale so well, and with it I don't have to learn MPI
> or PVM. :-)
> 

Almost forgot to say: ParMETIS partitions graphs, not hypergraphs. But
if you could build a graph model of the communication volume, then you
could use that. I've seen some ASCI-class applications pull such tricks.
PETSc's implementation of parallel Newton-Krylov solvers is
state-of-the-art, so it's only logical to use it for real-world
projects. You know it's much better than using Java for distributed
computing, ha ha!! (Some guys really do use Java for "HPC" projects,
incredible!!)
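One such trick, sketched in Python on made-up data (this is the
classic clique-net approximation, not anything from the ParMETIS API):
expand each shared data item into a clique of weighted edges among the
cells that touch it, so a plain graph partitioner's edge-cut objective
roughly tracks the true communication volume.

```python
from itertools import combinations

# Hypothetical shared data items: each maps to the cells that need it.
nets = {"a": [0, 1, 2], "b": [2, 3]}

def clique_net_graph(nets):
    """Approximate a hypergraph with a graph: each net of size s
    becomes a clique with edge weight 1/(s-1), so a fully cut net
    contributes about s-1 to the edge cut."""
    edges = {}
    for cells in nets.values():
        s = len(cells)
        if s < 2:
            continue
        w = 1.0 / (s - 1)
        for u, v in combinations(sorted(cells), 2):
            edges[(u, v)] = edges.get((u, v), 0.0) + w
    return edges

edges = clique_net_graph(nets)
```

The resulting weighted edge list is exactly the kind of input a graph
partitioner like ParMETIS expects; the approximation is inexact (that
is why hypergraph partitioners exist), but it is often good enough.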

> > Making that real-time for a timestepping simulation would be a great
> > challenge.
> 
> What's interesting is that the time required to do a single (semi-)implicit
> timestep can be much greater than the time required to repartition the mesh!
> See for example a writeup of a Lagrangian-Eulerian model of blood cell
> deformation in a flow field at:
> 
> http://www.cs.cmu.edu/~oghattas/papers/sc2000/sc2000.pdf
> 

Ideally, you should be using FM (Fiduccia-Mattheyses) refinement
starting from the partition of the previous interval. (Why am I
mentioning this? It might be a good idea for a decent paper :P)
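The idea in one very simplified Python sketch (toy graph, edge-cut
objective only, no balance constraint, no gain buckets or vertex
locking): keep the previous timestep's partition and greedily apply
FM-style moves instead of repartitioning from scratch.

```python
# Toy adjacency list: vertex -> neighbors (hypothetical mesh graph).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

# Partition carried over from the "previous timestep".
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}

def cut(adj, part):
    """Number of edges crossing the partition."""
    return sum(part[u] != part[v]
               for u in adj for v in adj[u] if u < v)

def gain(v, adj, part):
    """FM-style gain: cut edges removed minus internal edges cut
    if v moves to the other side."""
    ext = sum(1 for u in adj[v] if part[u] != part[v])
    internal = sum(1 for u in adj[v] if part[u] == part[v])
    return ext - internal

def one_fm_step(adj, part):
    """Apply the single best positive-gain move, if any (one step of
    an FM pass; a real implementation repeats this with locking)."""
    best = max(part, key=lambda v: gain(v, adj, part))
    if gain(best, adj, part) > 0:
        part[best] = 1 - part[best]
    return part

before = cut(adj, part)          # 2 cut edges initially
part = one_fm_step(adj, part)    # vertex 2 moves, cut drops to 1
```

Since a timestep usually moves the mesh only slightly, a few such
refinement passes recover a good partition far more cheaply than a
full repartition.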

> They actually repartition during each timestep!  (Using a much more primitive
> algorithm than ParMETIS.)  But visualizing this should be pretty trivial, just
> loop through the elements to generate triangles, just as I currently loop
> through the finite difference grid as if it were made of linear hexahedral
> elements.  Unless I'm misunderstanding what your friend did...
> 

The specifics of the algorithms vary with their design. A strictly
object-space visualization algorithm would be very different from an
image-space one. Now, if you just generate triangles, that makes a lot
of triangles to send to a single processor to draw. That's why I think
making it real-time and parallel is difficult. How will you render it?
Even if you use OpenGL at the display node, you will have to do some
preprocessing on those triangles (considering resolution, shading,
etc.) before you send them over.
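To show the kind of per-node preprocessing I mean, here is a little
Python sketch (function names and data are mine, purely illustrative):
drop back-facing and sub-pixel triangles locally, so invisible geometry
never crosses the network to the display node.

```python
# Each triangle: three (x, y, z) vertices already projected to screen
# space (hypothetical data; counter-clockwise winding = front-facing).

def signed_area2(tri):
    """Twice the signed screen-space area; the sign encodes winding,
    so back-facing triangles come out negative."""
    (x0, y0, _), (x1, y1, _), (x2, y2, _) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def prefilter(tris, min_area=0.5):
    """Keep only front-facing triangles larger than about a pixel,
    so the display node isn't flooded with invisible geometry."""
    return [t for t in tris if signed_area2(t) > 2 * min_area]

tris = [
    [(0, 0, 0), (10, 0, 0), (0, 10, 0)],    # front-facing, large: kept
    [(0, 0, 0), (0, 10, 0), (10, 0, 0)],    # back-facing: dropped
    [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0)],  # sub-pixel: dropped
]
kept = prefilter(tris)
```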

That's why my friend's thesis was half-theoretical. You first partition
the hypergraph model on a single processor with Umit's hypergraph
partitioner, *then* you distribute the data and render. Of course, the
scalability is not that great. :)


And just out of curiosity: what will you be using this visualization
for? I ask because I've tried my hand at extending Overblown to PETSc
on MPI. If it's CFD you're working on, Overblown might be a nice place
to look, because it's a really cute framework.

Thanks,

-- 
Eray (exa) Ozkural
Comp. Sci. Dept., Bilkent University, Ankara
e-mail: erayo@cs.bilkent.edu.tr
www: http://www.cs.bilkent.edu.tr/~erayo


