Re: Desktop normalization
Marcin Krol wrote:
[ snip modularity vs. one model ...]
> Yes, they are expected. Problem is, they don't adapt - user
> has to do it. In order to do so, user has to acquire large quantities
> of intrinsic and otherwise useless knowledge. It's not
> impossible - it's uneconomic. IMHO, the whole point of standards
> is to make things economic. If you need customization only, you
> do not need standard, just DIY everywhere.
Agreed. I see a few possible approaches to solve the problem:
1) provide a standard set of services application writers can use, i.e.
a single fully specified API (as on the Macintosh or Windows; Linux
might still have different implementations, but their compatibility
might be an issue);
2) provide a set of services application writers can use to discover at
launch time what services are available, so that applications can make
use of what they recognize;
1+2) provide a standard subset which includes common services and the
means to find out what extra services are there;
4) provide a configurator like linuxconf, which knows everything about
the configuration needs of every installed package (in principle) and
does all the necessary work.
If the push for desktop standardization is for (1), it seems to me that
we already have one, Motif (I can't say about CDE). Yet, since it was
possible to have alternatives, people wrote alternatives (whatever the
reason: bloat, poor design decisions) which attracted application
developers (sort of), and now we have applications which are not
written for Motif.
Besides, I've read that the Lesstif developers found out that they had
to duplicate a number of quirks and ugly hacks for the sake of binary
compatibility. In my opinion, it is important to prevent binary
compatibility from becoming a burden, a goal which I think could be
achieved through a separation of APIs: a binary compatibility API for
applications which want binary compatibility, and the real API which
should be free to evolve. Otherwise, as has happened before in the Unix
world, someone will come out with a better API (or portion thereof): not
just a better implementation of an existing API, but an approach that
breaks binary compatibility for very sound and compelling reasons, sound
and compelling enough to attract consensus. Hence fracture.
Writing another (1) would definitely be non-trivial, it seems to me,
and besides it is exactly what Gnome and KDE are already attempting to
achieve. Writing a proxy for (1), an API which abstracts over existing
desktops, might prove very challenging from the design point of view
(meaning that it might have to change many times before it gets it
right, which is the best way to scare application developers away).
I must say I am attracted by the notion of (2), mainly because I have
seen it suggested on Usenet but found otherwise very little about such
an approach. The problem with (4), of course, is that writing the
configuration program becomes a monumental task and keeping it up to
date even more so (although XML might definitely help with this): the
approach does not seem to scale well.
[ snip ]
> LSB has to describe desktop *behaviour* in some way for the sake of
> compatibility between distributions tailored for desktop machines.
> Otherwise you get anarchy on the level of UI, and that is what new users
> hate most.
Does this mean describing "left click should be select, middle click
extend, right click paste", or does it mean "call CloseWindowEx() to
close a window"?
> 2. Porting/writing for Linux.
> Put yourself in shoes of software vendor that wants to port
> popular application that uses GUI to Linux.
> If you don't have the way to make it easily (and cheaply) integrate into
> all possible GUI styles (fvm95 windows-like start menu, KDE way, etc.),
> then this vendor has instantly limited out-of-box usability of
> application. Theoretically this vendor can write installation scripts for
> all possible wms, but vendor does not want to do it (besides it has little
> practical chance to work 100% correctly), so he says "screw it, let end
> user integrate it". Which end user hates to do, because time spent on
> configuring machine is from user's POV wasted.
Point (2) above might boil down to writing an interface against which
installation scripts could be written, except that instead of doing the
work at install time you do it at launch time: in a system where
components are updated separately, you might need to redo the
installation whenever a component you depend on is updated, so you end
up introducing trigger scripts and so on. Doing the work at launch time
might be better, caching the results in a dot file for performance if
necessary.
#include <disclaimer.h> // Standard disclaimer applies
-----BEGIN GEEK CODE BLOCK-----
GE/IT d+ s:+ a C+++$ UL++++$ P>++ L++@ E@ W+ N++@ o? K? w O- M+ V?
PS PE@ V+ PGP>+ t++ 5? X R+ tv- b+++ DI? D G e+++ h r y?
------END GEEK CODE BLOCK------