Re: itp: static bins / resolving static debian issues
I think if you had the following available, you could do a lot:
-- sash (includes many common commands as builtins)
-- e2fsck, fdisk, mount (repair, create, and mount, incl nfs mount)
-- ar, gzip, tar (unpack and install stuff, possibly copied from NFS)
-- su, sulogin (ensure that you can get to a root shell)
-- restore (I think this one is obvious)
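For context, whether a given binary is actually static is easy to check; here is a quick sketch (the paths are illustrative -- sash and the others may live elsewhere on your system):

```shell
#!/bin/sh
# Report whether each recovery tool is statically linked.
# ldd prints "not a dynamic executable" for static binaries.
for bin in /bin/sash /sbin/e2fsck /sbin/fdisk /bin/mount; do
    if [ -x "$bin" ]; then
        if ldd "$bin" 2>&1 | grep -q "not a dynamic executable"; then
            echo "$bin: static"
        else
            echo "$bin: dynamic"
        fi
    else
        echo "$bin: not installed"
    fi
done
```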
These are not commonly used utilities, and it's really not that long
a list of programs to maintain. No doubt a few more are needed, but
I don't believe the list has to grow to be the entire /bin directory.
I do prefer having a static /bin directory, but I accept that the GNU
versions of the unix utilities are a lot bigger than BSD's, so it may
not be quite as attractive under a GNU OS.
As for security updates, people have said this numerous times, but I
always believed that Debian's big advantage was the intelligence of
its package manager. Are you really claiming that Debian's package
manager wouldn't be able to install fixes for all of these programs
in one sweep?
It's not that there are N new dependencies that are security sensitive,
it's that there is ONE more dependency: you have to upgrade your library,
plus now you have to upgrade the static tools package (one package).
Debian's package manager can certainly cope, and I believe that developers
with a standard Makefile can cope fairly easily as well. So yes, there is
an issue here, and a risk, but both are small.
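To sketch what that one extra upgrade looks like in practice (the package name static-tools is hypothetical -- Debian ships no such package today):

```shell
# After a libc security advisory, a dynamic-only system needs:
apt-get update
apt-get install libc6

# With a static tools package it is ONE more line, not N more;
# 'static-tools' is a hypothetical package name for illustration:
apt-get install libc6 static-tools
```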
I've said many times what I think the real benefits of static linking
are; and I do have lots of machines where I prefer to keep the machine
up and fix a problem live rather than take the Microsoft solution to
any problem and reboot. Admittedly my attitude is probably 7/10ths pride,
but it is 3/10ths real need. And those real needs do come up periodically,
probably not so much on my own systems, but definitely on my clients'. And
I do have a few remote servers that I can get to only with difficulty.
As for correcting the inaccuracies, thanks. That the webserver can
survive a C library failure is, in my view, an argument for live
recovery as opposed to a reboot.
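The mechanism behind that survival is ordinary Unix unlink semantics: a running process keeps its open and mapped files, including its shared libraries, even after they are removed or replaced on disk. A small analogy using a plain file descriptor:

```shell
#!/bin/sh
# A process that holds a file open keeps access to it after unlink,
# just as a running daemon keeps its mapped C library.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"      # open the file on fd 3
rm "$tmp"           # unlink it; the inode survives while fd 3 is open
cat <&3             # prints: still here
exec 3<&-           # close fd 3; now the inode is truly gone
```

This is why a live web server keeps serving through a C library upgrade, and why a careful admin can repair a damaged library from an already-running shell.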
On Thu, Aug 19, 1999 at 02:39:27PM -0400, Greg Stark wrote:
> Justin Wells <firstname.lastname@example.org> writes:
> > Perhaps a few binaries, such as e2fsck, restore, and fdisk, should
> > be static. If you selectively pick just those that would really be
> > important, and would never be used by ordinary users, there is very
> > little cost and very real gain.
> This has all been argued before many times. I was even on your side at the
> time, but the bottom line is that there would be basically nothing to gain and
> real risks.
> You would have to have a fairly large set of basically arbitrary binaries
> statically linked to be able to do anything useful. The ones you named would
> be useless without a half-dozen or so more. If you wanted to do this you could
> make separate packages that installed them in /sbin or something like that.
> NetBSD for example ships all of /bin and /sbin static. Anything short of that
> and you're better off just going to a boot disk.
> There _are_ real costs however in the form of security holes in libraries.
> Normally on a dynamically linked system you can upgrade a shared library to
> one that fixes a security hole and know that every application is now fixed.
> If there are random statically linked binaries lying around linked against old
> versions of libraries you can still have problems.
> Incidentally there have been a few inaccuracies posted earlier. Apache would
> continue functioning fine since fork(2) doesn't trigger any dynamic linking,
> only exec. And the argument about data integrity is off-base, that might be
> your priority or it might not. There are plenty of front-line web servers that
> don't need to worry about data integrity over uptime.