
Bug#700517: marked as done (RFP: turbovnc -- run remote OpenGL applications with full 3D acceleration)



Your message dated Mon, 06 Aug 2018 04:19:58 +0000
with message-id <E1fmWzu-0006ll-Sy@quantz.debian.org>
and subject line closing RFP: turbovnc -- run remote OpenGL applications with full 3D acceleration
has caused the Debian Bug report #700517,
regarding RFP: turbovnc -- run remote OpenGL applications with full 3D acceleration
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
700517: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=700517
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: wnpp
Severity: wishlist

* Package name    : turbovnc
  Version         : 1.1
  Upstream Author : Darrell Commander  
* URL             : http://www.virtualgl.org 
* License         : GPL 2 
  Programming Lang: C++ 
  Description     : run remote OpenGL applications with full 3D acceleration

VirtualGL is an open source package that gives any Unix or Linux remote display
software the ability to run OpenGL applications with full 3D hardware
acceleration. Some remote display software lacks the ability to run OpenGL
applications at all. Other remote display software forces OpenGL applications to
use a slow software-only OpenGL renderer, to the detriment of performance as
well as compatibility. The traditional method of displaying OpenGL applications
to a remote X server (indirect rendering) supports 3D hardware acceleration, but
this approach causes all of the OpenGL commands and 3D data to be sent over the
network to be rendered on the client machine. This is not a tenable proposition
unless the data is relatively small and static, the network is very fast, and the
OpenGL application is specifically tuned for a remote X environment.

With VirtualGL, the OpenGL commands and 3D data are instead redirected to a 3D
graphics accelerator on the application server, and only the rendered 3D images
are sent to the client machine. VirtualGL thus "virtualizes" 3D graphics
hardware, allowing it to be co-located in the "cold room" with compute and
storage resources. VirtualGL also allows 3D graphics hardware to be shared among
multiple users, and it provides "workstation-like" levels of performance on even
the most modest of networks. This makes it possible for large, noisy, hot 3D
workstations to be replaced with laptops or even thinner clients. More
importantly, however, VirtualGL eliminates the workstation and the network as
barriers to data size. Users can now visualize huge amounts of data in real time
without needing to copy any of the data over the network or sit in front of the
machine that is rendering the data.
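
As a hedged sketch of the typical workflow (install paths, display numbers, and
package names vary by system), a user starts a TurboVNC session on the
application server and launches the 3D application through VirtualGL's vglrun
wrapper inside it:

  $ /opt/TurboVNC/bin/vncserver     # start a TurboVNC X proxy session (default TurboVNC install path)
  $ DISPLAY=:1 vglrun glxgears      # run an OpenGL app through VirtualGL, on the display vncserver reported

The TurboVNC viewer on the client then receives only the already-rendered
images; no OpenGL commands or 3D data cross the network.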

Normally, a Unix OpenGL application would send all of its drawing commands and
data, both 2D and 3D, to an X server, which may be located across the
network from the application server. VirtualGL, however, employs a technique
called "split rendering" to force the 3D commands from the application to go to
a 3D graphics card in the application server. VGL accomplishes this by
pre-loading a dynamic shared object (DSO) into the OpenGL application at run
time. This DSO intercepts a handful of GLX, OpenGL, and X11 commands necessary
to perform split rendering. Whenever a window is created by the application,
VirtualGL creates a corresponding 3D pixel buffer ("Pbuffer") on a 3D graphics
card in the application server. Whenever the application requests that an OpenGL
rendering context be created for the window, VirtualGL intercepts the request
and creates the context on the corresponding Pbuffer instead. Whenever the
application swaps or flushes the drawing buffer to indicate that it has finished
rendering a frame, VirtualGL reads back the Pbuffer and sends the rendered 3D
image to the client.
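
To make the interposition idea concrete, here is a minimal sketch (not
VirtualGL's actual source; the file name libfakeswap.so and the readback-only
behaviour are illustrative assumptions) of an LD_PRELOAD-able library that
overrides glXSwapBuffers(), reads the finished frame back, and then hands the
swap to the real libGL:

  // fakeswap.cpp -- illustrative GLX interposer, not VirtualGL's real code.
  // Build (assumption: Linux, g++, libGL and Xlib development headers):
  //   g++ -shared -fPIC -o libfakeswap.so fakeswap.cpp -ldl -lGL -lX11
  #include <GL/glx.h>
  #include <dlfcn.h>
  #include <cstdio>
  #include <vector>

  // Because this library is loaded with LD_PRELOAD, the dynamic linker
  // resolves the application's calls to glXSwapBuffers() here instead of
  // in libGL.
  extern "C" void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
  {
      // Locate the real glXSwapBuffers() in libGL the first time through.
      typedef void (*SwapFn)(Display *, GLXDrawable);
      static SwapFn realSwap = (SwapFn)dlsym(RTLD_NEXT, "glXSwapBuffers");

      // The application has just finished a frame: read it back.  A real
      // split-rendering implementation would have redirected rendering to a
      // server-side Pbuffer and would now compress and send these pixels to
      // the client, rather than only logging the frame size.
      Window root;
      int x = 0, y = 0;
      unsigned int w = 0, h = 0, bw = 0, depth = 0;
      XGetGeometry(dpy, drawable, &root, &x, &y, &w, &h, &bw, &depth);
      std::vector<unsigned char> rgba((size_t)w * h * 4);
      glReadBuffer(GL_BACK);
      glReadPixels(0, 0, (GLsizei)w, (GLsizei)h,
                   GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
      std::fprintf(stderr, "[fakeswap] read back %ux%u frame\n", w, h);

      // Let the real library perform the actual buffer swap.
      if (realSwap) realSwap(dpy, drawable);
  }

An unmodified OpenGL program can then be run with the interposer in place, e.g.
"LD_PRELOAD=./libfakeswap.so glxgears"; VirtualGL's vglrun script performs
essentially this preloading step with its own, far more complete interposer
library.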

The beauty of this approach is its non-intrusiveness. VirtualGL monitors a few
X11 commands and events to determine when windows have been resized, etc., but
it does not interfere in any way with the delivery of 2D X11 commands to the X
server. For the most part, VGL does not interfere with the delivery of OpenGL
commands to the graphics card, either (there are some exceptions, such as its
handling of color index rendering). VGL merely forces the OpenGL commands to be
delivered to a server-side graphics card rather than a client-side graphics
card. Once the OpenGL rendering context has been established in a server-side
Pbuffer, everything (including esoteric OpenGL extensions, fragment/vertex
programs, etc.) should "just work." If an application runs locally on a 3D
server/workstation, then that same application should run remotely from that
same machine using VirtualGL. 

--- End Message ---
--- Begin Message ---
RFP 700517 has seen no visible progress for a long time, so it is being closed.

--- End Message ---
