Re: Cross-distro binary compatibility
Adam Stiles wrote:
> As I've said before, binary compatibility is irrelevant.
No, it isn't. If it were, then why does everyone spend so much time
worrying about it? And I do mean literally everyone, both within and
beyond the OSS community.
> In fact, from a
> security point of view, binary _in_compatibility -- to the point where
> binaries compiled on one box would not run on any other box -- might be
> desirable, then there could never be such a thing as a virus.
This kind of naive thinking doesn't help you.
The Apache Slapper worm attacked a vulnerability in OpenSSL, a library
notorious for its ABI incompatibilities, and managed to successfully
trojan an entire rash of boxes.
So no, there will still be plenty of viruses.
> some sort of compatibility mode would be required for initial bootstrapping
> of a system, but access could be restricted by means of something like a
> motherboard jumper, that could not be defeated by software alone.
And just how is that going to modify the binary code provided on a CD?
> compatibility is all that really matters, and there are enough examples
> around to show that this is entirely achievable.
Yet it's not as interesting as ABI compatibility. API compatibility is
pretty easy to maintain. ABI compatibility is not. It's also more desirable.
ABI compatibility is how code compiled against KDE 3.0 can still run on
3.4 today. That's a desirable thing, because it means I don't have to
keep my application in lock-step with the KDE developers if I don't want to.
ABI compatibility is how programs written in the kernel 2.0 era can still
run on a 2.6 kernel today, without any changes. That's also desirable.
> And I'm not just talking
> about compatibility across different versions of the same distro, or even
> different Linux distributions; but Linux, the BSDs, Solaris and legacy
> systems too.
Then you don't know what you're talking about. No UNIX is 100% API
compatible with any other UNIX or UNIX-alike, even along the lines of
SUS and POSIX.
If you don't believe me, go look at the code for tcpdump. Or at the
Apache Portable Runtime.
> The only reason why you would ever want to be able to run a binary not
> compiled by you is if you did not have the source code; and if you don't have
> the source code, it's probably because someone doesn't want you to have it.
That's not true. Stop being so simple-minded.
What if I have 2000 webservers? Why should I take the time to compile
the source on each one? Or to compile it on one box and /distribute/
the binaries myself? Especially when we have groups of people who do
exactly that, who produce products called Linux distributions.
And odds are, for any given piece of software, they're going to do a
better job of packaging it up and resolving issues than any individual
person can: doing more widespread testing, keeping abreast of
development issues and bugs, providing patches, and the like.
Something most people don't have time to do. Certainly something that's
just not practical in a production environment, where downtime == lost money.
> If somebody doesn't want you to have the source code, then there is probably
> something in it that they are ashamed to show you.
That's simply not the case. What if the source code contains a nifty
algorithm I don't want anyone else to have? What if it contains trade
secrets that make my business process really efficient?
There are plenty of valid reasons to not want someone to have access to
your source code.
Please, go read some books on software engineering, configuration
management, and quality assurance before spouting any more of what
amounts to simple-minded nonsense. You clearly haven't considered even
remotely enough cases to give any of your suggestions merit.