
Re: weird x error?



> this probably isn't a debian specific error, but i think it might have
> something to do with how i have the system set up.  i'm logging in
> remotely to an SGI IRIX machine and attempting to run an X
> application.  it fails with something like the following error:
> 
>   Error initializing colors for colormap

I could probably help more if you were running Accelerated-X, as I know
that X Server rather better than XFree86... but I have some wild
guesses.

Does the application require overlays?  Some workstation applications
expect to use both 24 bit and 8 bit windows.  XFree86 does not support
this, and very few PC graphics cards have the hardware to make this
work.
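
If you want to check quickly, an untested sketch along these lines lists
every visual the server offers, so you can see whether the XFree86 display
provides 8 bit visuals, 24 bit visuals, or both:

  /* List every visual the server advertises: depth and class. */
  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <X11/Xutil.h>

  int main(void)
  {
      static const char *cls[] = { "StaticGray", "GrayScale", "StaticColor",
                                   "PseudoColor", "TrueColor", "DirectColor" };
      Display *dpy = XOpenDisplay(NULL);          /* uses $DISPLAY */
      if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

      int n;
      XVisualInfo tmpl, *vi = XGetVisualInfo(dpy, VisualNoMask, &tmpl, &n);
      for (int i = 0; i < n; i++)
          printf("visual 0x%lx  depth %2d  class %s\n",
                 vi[i].visualid, vi[i].depth, cls[vi[i].class]);

      XFree(vi);
      XCloseDisplay(dpy);
      return 0;
  }

Compile with "gcc -o lsvis lsvis.c -lX11" and run it with DISPLAY pointing
at the XFree86 server.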

Some workstations, SGI boxes especially, have multiple color look up
tables.  This more usually has an effect when running in 8bpp, but it is
possible that the unnamed application is trying to access mutually
independent color look up tables.  PC graphics hardware cannot support
more than a single 8 bit color look up table (which is why SGI boxes and
some other workstations have less colormap flashing).  There are a few
chips that would help, which could be used on a PC graphics board, but
they are expensive and used only by specialist board makers for unusual
market segments.
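
The sort of thing that trips up on a PC server is an application that
assumes it can grab a block of writable colormap cells.  A rough sketch of
that failure mode (this is a guess at what the application does, not taken
from it):

  /* Try to allocate writable cells from the default colormap.  On an
   * 8bpp PseudoColor server this can fail once other clients have used
   * up the shared cells; on a TrueColor server it fails outright,
   * because the cells are read-only. */
  #include <stdio.h>
  #include <X11/Xlib.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

      int scr = DefaultScreen(dpy);
      Colormap cmap = DefaultColormap(dpy, scr);

      unsigned long pixels[128], planes[1];
      if (!XAllocColorCells(dpy, cmap, False, planes, 0, pixels, 128))
          fprintf(stderr, "could not allocate 128 writable colormap cells\n");
      else
          printf("got 128 writable cells\n");

      XCloseDisplay(dpy);
      return 0;
  }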

Are you sure that the application is not attempting to use SGI's DGL or
SGI OpenGL's GLX?  Depending upon where the error message is being
generated, this might be the cause.
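
A one minute check, if you have a compiler handy (again, just a sketch), is
to ask the XFree86 server whether it advertises the GLX extension at all:

  /* Ask the server (the one named by $DISPLAY) whether GLX is present. */
  #include <stdio.h>
  #include <X11/Xlib.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

      int opcode, event, error;
      if (XQueryExtension(dpy, "GLX", &opcode, &event, &error))
          printf("GLX extension present\n");
      else
          printf("GLX extension NOT present on this server\n");

      XCloseDisplay(dpy);
      return 0;
  }

You could also simply run "xdpyinfo" against the XFree86 display and look
for GLX in its list of extensions.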

Are you sure that the application requires 24 bit color?  When running
the application on the SGI box, displaying to the SGI display, use
'xwininfo' to find the color depth(s) of the application's windows.  A
message about running out of colors in a colormap suggests that the
application is assuming the presence of a programmable colormap.  This
is normally only true in lower color depths (e.g. you get 8bpp
PseudoColor Visuals, but 24bpp PseudoColor Visuals are unknown as they'd
need 48MB of high speed color look up table memory for the DAC; when
most graphics board vendors are happy with 16MB of display memory, that
would be rather unusual and expensive).
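
Once xwininfo has told you what the application's windows use on the SGI
display, you can check whether the XFree86 server offers a matching visual;
something like this (depth 8 PseudoColor used only as an example):

  /* Does this server have a visual of the given depth and class? */
  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <X11/Xutil.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

      XVisualInfo vinfo;
      if (XMatchVisualInfo(dpy, DefaultScreen(dpy), 8, PseudoColor, &vinfo))
          printf("depth 8 PseudoColor visual: id 0x%lx\n", vinfo.visualid);
      else
          printf("no depth 8 PseudoColor visual on this server\n");

      XCloseDisplay(dpy);
      return 0;
  }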

> i'm running in 32 bit color depth, and no color intensive processes
> are being displayed.  (don't tell me that sgi has invented 64-bit
> color...right?)  what does this mean?

You are not actually running in 32 bit color depth.  You are running in
24 bit color depth and using 32 bits to do it.  8 bits are being
wasted.  There are some real 32 bit color depth representations, such as
using "RGBA" formats, but PCs don't do those.

The nomenclature problem arises because Intel processors are
particularly sensitive to word alignment.  If you read a word that is
not aligned on a word boundary, the access is significantly slower.
This is also true, to a greater or lesser extent, for other
processors.

The color hardware uses 8 bits for each of the Red, Green and Blue color
guns in the tube.  You therefore need 24 bits of color data.  This would
result in two in three data accesses being misaligned, if data were
represented in system memory in contiguous 24 bit chunks.  The
consequence is an approximate 15% speed penalty.  By "wasting" one byte
for each pixel and aligning on a word boundary the speed penalty is
removed, at the cost of an approximate 25% memory increase for each
dynamically allocated graphical representation.
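
To make that concrete, here is a small illustration (hypothetical struct
names) of the two layouts and where each pixel starts in memory:

  /* Packed 24 bit pixels vs. word aligned 32 bit pixels. */
  #include <stdio.h>
  #include <stdint.h>

  struct packed24 { uint8_t r, g, b; };        /* 3 bytes per pixel */
  struct padded32 { uint8_t r, g, b, pad; };   /* 4 bytes per pixel */

  int main(void)
  {
      printf("packed: %zu bytes/pixel, padded: %zu bytes/pixel\n",
             sizeof(struct packed24), sizeof(struct padded32));

      /* Byte offset of each pixel: packed pixels start every 3 bytes,
       * so most of them do not start on a 4 byte word boundary. */
      for (int i = 0; i < 4; i++)
          printf("pixel %d: packed offset %2d (aligned: %s), "
                 "padded offset %2d (aligned: yes)\n",
                 i, 3 * i, (3 * i) % 4 == 0 ? "yes" : "no ", 4 * i);
      return 0;
  }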

However, it is possible to use 32 bits to store the 24 bits of color data
in system memory (accessible directly to the application) but to represent
the data
in packed form on the system bus and in the graphics board. 
Accelerated-X does this, which is one reason why our packed pixel
representation does not cause application problems, but we still get the
full 25% speed up that you'd expect from the better bandwidth
utilisation.

For some reason, XFree86 appear to have done the right thing in some
variants of their Servers, but seem to have lost or forgotten the
knowledge in the XFree86_SVGA Server for the Matrox Millennium type
boards.  Although the XFree86 Server gains the 25% bus speed up, it then
loses about 15% in system memory access times, for a total speed up of
only 10% or so.  This is part of the reason that their Servers benchmark
at about 50% of our speed.   

The real problem that using a 24 bit system representation causes is
that application developers do not expect this format.  Workstation
developers use a 32 bit system memory representation.  All previous X
Servers, including earlier XFree86 Servers, used a 32 bit system
representation, instead of the bizarre form introduced last year by
XFree86.  Since it is a largely untestable format, as well as being
rather unwise for maximum speed, I expect that quite a few applications
will have unexpected problems.  This is *not* a consequence of 24 bit
packed pixel operation, but a consequence of presenting 24 bit packed
pixels to the application.  You could equally easily cause the same
problem by using 24 bit packed representation to applications and then
unpacking the data for the bus transfer and in the graphics board
memory.

Anyway, workstations represent their 24 bit color data using 32
bits.  It is only PCs, and only since 1994, that use packed pixel
representation.  So if you use a 32 bit system representation with
XFree86, you are less likely to trigger application problems than if you
use 24 bit system representation.
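
If you want to see what your server is actually doing, you can ask it for
its pixmap formats; the depth 24 entry shows whether it uses 24 or 32 bits
per pixel (once more, just a sketch):

  /* Report depth vs. bits_per_pixel for each pixmap format the server
   * supports.  Depth 24 with 32 bits_per_pixel is the padded layout
   * workstations use; depth 24 with 24 bits_per_pixel is the packed
   * form discussed above. */
  #include <stdio.h>
  #include <X11/Xlib.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

      int n;
      XPixmapFormatValues *fmt = XListPixmapFormats(dpy, &n);
      for (int i = 0; i < n; i++)
          printf("depth %2d  bits_per_pixel %2d  scanline_pad %d\n",
                 fmt[i].depth, fmt[i].bits_per_pixel, fmt[i].scanline_pad);

      XFree(fmt);
      XCloseDisplay(dpy);
      return 0;
  }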

-- 
Jeremy Chatfield, Xi Graphics  mailto:jdc@xig.com  tel:+44(0)1234.710030 
 Commercial X Products: Servers, CDE, contracts and custom development
    http://www.xig.com ftp://ftp.xig.com/ mailto:majordomo@xig.com
     tel:+1.303.298.7478  fax:+1.303.298.1406  mailto:info@xig.com
