Re: Packaging stuff
From: "Anthony W. Youngman" <Anthony.Youngman@ECA-International.com>
Date: Wed, 25 Oct 2000 12:36:08 +0100
AIM: The aim of this section of the LSB is to ensure that a package is
guaranteed to INSTALL AND RUN on an LSB-compliant system. To that end we
have two orthogonal problems, getting the package onto the system, and then
to install that package such that it will run successfully.
PREMISE: I hereby argue that the rpm file format is the wrong way of
achieving either objective. The problem/confusion is caused because the
existing LSB documentation says "we will use a suitable subset of rpm", thus
implying that we are tackling both problems at once.
PROOF 1: To achieve the first objective, placing the files onto the system
in question, we do not need any dependency information. To this end we
should strip all dependency information from the rpm. This reduces it (as
far as I can see) to a pure cpio archive, so why on earth aren't we using
cpio (or tgz)?
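(The reduction described above is roughly right: an rpm's payload really is a compressed cpio archive, extractable with `rpm2cpio`. As a minimal sketch of what "install" means for a plain archive, here is the tgz case with entirely hypothetical package contents and paths:)

```shell
# Build a toy payload, then "install" it by unpacking into a target root.
# All names here are illustrative, not from any real package.
mkdir -p payload/opt/demo
echo "v1.0" > payload/opt/demo/VERSION
tar -czf demo-1.0.tgz -C payload .

mkdir -p root                       # stand-in for the target system's /
tar -xzf demo-1.0.tgz -C root       # installation is just extraction
cat root/opt/demo/VERSION
```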
Customers do want to be able to easily *remove* and *upgrade* packages.
cpio and tgz don't provide this capability. Even Windows users have
this capability, and compared to .rpm, .deb, or Windows 2000
packages, cpio and tgz are a very sad, outdated technology.
PROOF 2: To achieve the second objective, we need dependency info, and a
package management database. The LSB *studiously* *avoids* trying to address
this problem, thereby making it impossible to guarantee that a program will
install successfully and run. The format of the archive file that is
delivered onto the system is irrelevant to this problem, so rpm is
irrelevant. And if you specify dependency information in the rpm, the LSB
is laying down exactly the restrictions it is trying to avoid.
The scope of the LSB is specifically for third-party manufacturers, i.e.,
what Loki needs to ship Quake, or Intuit needs to ship TurboTax for Linux.
Yes, this is a restrictive scope. But Nick (and others) have criticized
the LSB effort for taking so long. Trying to expand the scope, and
revisiting decisions which were already made, is not a method calculated
for shortening the time period before LSB 1.0 can get out.
Since this is *all* we're trying to do, and we can affect what the
application vendors do when they create their package, all we need to do
is tell the application vendors what they can depend on, at which point
the only package dependency needed is one for "LSB 1.0".
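(A minimal sketch of what such a package header might look like, in rpm spec syntax. The application name is hypothetical, and the `lsb` dependency name is an assumption for illustration, not something the draft specification mandates:)

```
Name: turbotax
Version: 1.0
Release: 1
Summary: Hypothetical LSB-packaged application
Requires: lsb >= 1.0
```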
Furthermore, since it's obvious many of the people who are ranting on
this list haven't taken the time to read the LSB draft specification,
I should point out that LSB applications are supposed to use
LSB-specific .so names. This allows either the distribution, or a
third-party provider, to make an "LSB-compatibility" .rpm or .deb file
which can be applied to make an existing distribution LSB conformant and
able to accept LSB applications.
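(As a sketch of what such a compatibility package might do, assuming an LSB-specific dynamic-linker name like `ld-lsb.so.1` mapped onto the distribution's own loader; the paths and names here are illustrative, not the spec's actual file list:)

```shell
# Toy filesystem root; on a real system this would be /lib.
mkdir -p fakeroot/lib
touch fakeroot/lib/ld-linux.so.2          # stand-in for the distro's loader
# The "LSB-compatibility" package's entire job: provide the LSB name.
ln -sf ld-linux.so.2 fakeroot/lib/ld-lsb.so.1
readlink fakeroot/lib/ld-lsb.so.1
```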
I respectfully argue that the specification (as of when I last looked) is
internally inconsistent, doesn't work (as Nick pointed out with ncurses),
and should be completely rethought.
For the last time, the LDPS, which Nick was criticizing, is NOT the
LSB. Those represent completely different approaches. The LDPS is a
short-term solution, which is all about giving suggestions to ISVs for
how to build applications that work on most distributions, without
requiring any changes on the distributions. This is important, since
even if LSB were ready tomorrow, it takes a good long time before
distributions would be able to make changes, and even longer before the
deployed base upgrades to the latest distribution version. (Some people
will be running Red Hat 6.2 for a long time, remember.)
The LSB is a medium-term solution. As such, it requires that the
application providers link against specific LSB libraries, and it
requires that LSB libraries (and possibly compatibility symlinks) be
installed on the distribution. This is quite different from the LDPS.
Finally, if we want LSB to be released sometime this decade, we have to
use existing technologies. Having folks talk about vaporware, using
terms like "protocol" while refusing to be pinned down on exactly how
this would be implemented, or even what language it would be implemented
in, isn't particularly useful.
The open source mantra is "show me the code". If we have a working
prototype, then we can consider whether it's worth mandating it. But
given that it hasn't even been implemented yet, and given that it's
likely to take a year or more to implement, should we sit on our hands
and stop all progress while this theoretical solution is built?
What might make more sense is for those people who are clamoring for
such a solution to actually design and build it, and then get back to us.