
Re: gcc 3.2 transition in unstable

On Wed, Jan 08, 2003 at 12:57:45PM -0600, Steve Langasek wrote:
> On Wed, Jan 08, 2003 at 01:28:58PM -0500, H. S. Teoh wrote:
> > Umm... if I were an upstream author, I'd choose the soname based on the
> > API.
> Then you're not fit to be an upstream library author, because you don't
> understand that an ABI can change *as a result of changes you make*,
> without any corresponding changes to the API.  One example of this is
> changing the size of a data type, which may provide complete source
> compatibility while completely breaking binary compatibility.

In my mind, such changes *are* API changes. Perhaps I'm a bit too
C/C++-centric, but to me, an API change is anything that forces code
using the library to be recompiled. Size changes fall under that
category (since they are visible through the .h file).

> Moreover, encoding API information into a binary object (which, being
> binary, exports only an ABI -- not an API) is worthless.
> The soname should uniquely identify the ABI, to the extent the ABI is
> under upstream's control.  We *should* have a mechanism for handling ABI
> incompatibilities beyond upstream's control, but that's not going to be
> fixed here.

OK, then I'm using the wrong terminology. I'm equating .h files with
"API", but they actually also encode part of the ABI (such things as
data type sizes--the programmer who uses the library may not care, but
the compiler sure does, which is why a recompile is necessary when a
type size changes).

So to rephrase my original proposal with the right terminology: the
soname, which describes the ABI, consists of two parts, one uniquely
defined by the upstream source, and the other defined by the compiler.
(I'm thinking about size changes vs. function-call register usage
conventions here.)  Upstream can only control the first part: the parts of
the ABI uniquely defined by the upstream library source. The compiler
determines the other part: the machine-dependent parameters, the calling
conventions, the implementation details of C++ classes, and so on.

The present problem, as I understand it, is that the soname encodes
only the first part; it does not distinguish between binaries that
differ only in the second part. Since this second part is
compiler-dependent, appending a compiler-unique string to the soname
*at compile time* would correctly encode this information and prevent
this kind of breakage. (Strictly speaking, the second part is also
architecture-dependent; but I'm assuming we don't expect .so's for a
foreign architecture to be present in ldso's search path, so I see no
need to encode the architecture in the soname as well.)

Of course, I realize that such a scheme is not likely to be implemented
right now. Changing sonames in the compiler can only work if it is
widely supported across distros; otherwise we'll just be completely
binary-incompatible with everyone else. But eventually it has to be
addressed, or it will remain a perpetual problem. So it's something we
should aim for.


LINUX = Lousy Interface for Nefarious Unix Xenophobes.
