Hi,

On Tuesday, 2013-02-05 at 17:03 +0100, Adam Borowski wrote:
> On Tue, Feb 05, 2013 at 04:36:44PM +0100, Joachim Breitner wrote:
> It's not a matter of "a little infrastructural complication", it's about
> having the slightest chance of reasonable security support -- or even
> regular bug fixes, when multiple layers of libraries are involved.
>
> If there is a bug in library A, and you use static linking, you need to
> rebuild every single library B that uses A, then rebuild every C that uses
> B, then finally every single package in the archive that uses any of these
> libraries.
>
> Just imagine what would happen if libc6 were statically linked, and a
> security bug happened inside it (like, in the stub resolver). Rebuilding
> the world on every update might be viable for a simple scientific task[1],
> but not at all for a distribution.

Why not? I agree that it is not desirable, but it is possible, and if it
were possible easily (e.g. with little human interaction), this would be
an indicator of a very good infrastructure. And in fact we have it:
binNMUs and buildds enable us to rebuild large parts¹ of the
distribution after such a change (otherwise, maintaining Haskell
wouldn't be feasible).

> Static linking also massively increases memory and disk use; this has
> obvious performance effects if there's more than one executable running
> on the system.

True, static linking has its disadvantages. But this is unrelated to the
problem of supporting languages with often-changing ABIs.

Greetings,
Joachim

¹ I do it regularly for more than 2.5% of all our source packages, and –
besides machine power – I see no reason why this would be impossible for
larger fractions of the archive.

--
Joachim "nomeata" Breitner
Debian Developer
  nomeata@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nomeata@joachim-breitner.de | http://people.debian.org/~nomeata
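[Aside: the rebuild cascade the quoted text describes (every B that uses A, every C that uses B, ...) is just a transitive closure over the reverse-dependency graph, which is what makes scheduling such binNMUs mechanical. A minimal sketch, with entirely hypothetical package names standing in for real archive data:]

```python
# Sketch: given a map from each package to its direct reverse
# dependencies, compute everything that must be rebuilt after a
# change in library "A". All names below are hypothetical.
from collections import deque

reverse_deps = {
    "A":  ["B1", "B2"],   # libraries built against A
    "B1": ["C1"],         # packages built against B1
    "B2": ["C1", "C2"],
    "C1": [],
    "C2": [],
}

def rebuild_set(changed, rdeps):
    """Return every package that transitively links the changed one."""
    todo, seen = deque([changed]), set()
    while todo:
        pkg = todo.popleft()
        for dep in rdeps.get(pkg, []):
            if dep not in seen:    # each package is queued at most once
                seen.add(dep)
                todo.append(dep)
    return seen

print(sorted(rebuild_set("A", reverse_deps)))  # ['B1', 'B2', 'C1', 'C2']
```

A topological sort of this set would additionally give the order in which the buildds have to schedule the rebuilds, so that each package is built against the already-rebuilt versions of its dependencies.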