
Re: how to make Debian less fragile (long and philosophical)



On Mon, Aug 16, 1999 at 06:51:37AM -0400, Dale Scheetz wrote:

> > And, as I said before, dynamic linking can break anywhere, not only in an
> > unstable distro.
> 
> FUD
> 
> If what you say were true, you would be arguing that NO programs should be
> dynamicly linked. That would be stupid.

Wrong.

The vast majority of programs do not need to work during a system failure,
or in the middle of an upgrade. For example, I don't much care whether 
or not I can run X during a system failure; I don't much care whether I 
can run Netscape; it is OK for named to temporarily go down while I 
upgrade my system. 

I *do* care that I can get a root shell. I *do* care that I can install
packages. I *do* care that I can fsck my disk. I *do* care that I can
edit configuration files. I *do* care that I can partition my disk. 
I *do* care whether I can bring my network back up or not (to ftp or
nfs mount something important). 

This is not FUD. This is something that 30 years of Unix experience has
taught us we need, and that every other decent OS provides. Look
at Solaris, SunOS, FreeBSD, NetBSD, BSDI, OSF/1, RedHat, Caldera,
HPUX, SCO, and for that matter, I'm willing to bet NT has some
statically linked system tools just for system repair--certainly
its predecessor VMS did.

Debian has callously thrown away 30 years of hard won knowledge here, 
because for some reason people believe the intricate dependency manager
is a replacement for common sense.

This is similar to when the World Trade Center replaced basic, ordinary
emergency lights (which turn on by the laws of physics when the power fails)
with a centrally controlled, computer-run emergency light system. That
is why the whole building went black when the computer got blown up.

You do NOT replace trusted, well tested, simple precautions with
complicated, poorly tested, fancy ones. You do not need a
high-performance multi-threaded dynamically linked fsck--you need
one that works reliably when you need it most.

There is only ONE advantage to dynamic linking, and it is a performance
advantage: dynamic binaries are smaller, load faster, and use fewer
system resources.
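
The size cost is easy to measure for yourself. A rough sketch
(hello.c is any trivial C program; exact sizes vary with your
compiler and libc):

    $ gcc -o hello hello.c
    $ gcc -static -o hello-static hello.c
    $ ls -l hello hello-static    # the static binary is several times larger
    $ ldd hello                   # lists libc.so and the dynamic loader
    $ ldd hello-static            # reports "not a dynamic executable"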

Now if apt-get, fsck, dpkg, /bin/sh, ifconfig, route, ping,
mount, umount, mke2fs, dump, restore, ps, ln, and dd were somehow
performance-critical applications you might have some kind of point:
running 1000 simultaneous copies of mke2fs might be a problem if
it were statically linked. All of these binaries are small,
though--most of them only one or two hundred K--so even then I
would guess your average modern machine could handle it.
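
Don't take my word for the sizes; check them yourself (paths may
differ slightly from one system to another):

    $ ls -lL /bin/sh /bin/mount /bin/dd /sbin/mke2fs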

Let me know when you set up an apt-get server, and when you start
hosting a machine that allows hundreds of users to run thousands
of copies of fsck and fdisk.

The truth is that anything in /bin that is heavily used is already
a shell builtin, and the rest of it is hardly ever used outside of
system configuration and an emergency. I doubt very
much you are going to have a multi-user system with 15 people all
simultaneously running 'umount' or 'dpkg'.
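
You can check the builtin claim from the shell itself (bash shown;
other shells report theirs similarly):

    $ type cd echo test pwd
    cd is a shell builtin
    echo is a shell builtin
    test is a shell builtin
    pwd is a shell builtin
    $ type umount
    umount is /bin/umount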

But these programs are absolutely critical during a system failure,
and are the basic bread and butter of the installer and package
management suite. The times you need them most are precisely the
times when dynamic linking may have failed: the C library might have
been removed and not properly re-installed; a disk failure might
have brought down /usr but not /; a careless admin error might have
broken the link loader; a mistyped rm might have wiped out some
critical library.
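
Every one of those failure modes is visible in advance. A short
loop shows what each rescue tool depends on (paths and library
names will vary from system to system):

    $ for f in /bin/sh /bin/mount /bin/cp /sbin/fsck; do
    >     echo "== $f"; ldd "$f"
    > done

Every libc or ld-linux line in that output is a single point of
failure for the tool above it.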

These things happen, and Unix admins everywhere thank 30 years of
common sense and good practice when the presence of a statically
linked sh, cp, fsck, and restore allows them to recover from
their own stupidity, or the stupidity of the package manager (which
may have bugs--even, believe it or not, in a stable release).

Note that "I goofed up and had to copy libC from another machine, it
took five minutes" is bad. "I goofed up, had to reboot from boot floppies,
needed to re-install part of my OS, and hunt down my backup tapes" is
a fu#!king disaster.

The system should strive to guarantee the availability of anything
that you might need in single user mode, and you are much more able
to guarantee that when it's statically linked.

Some people here are trying to separate "repair the system" from
"upgrade the system", but those operations are so fundamentally
similar, and so likely to cause one another, that I don't
really see the point in the distinction.

Please help me understand what the advantage of a dynamically
linked "restore" command is, after thinking very carefully about
why you might want exactly that command in particular to be static.

> Dynamic linking only breaks when there is something wrong. Building a
> distribution is a coordinated integration task, and when all of the
> pieces-parts aren't compatible for one reason or another problems like the
> recent bash failure show up...and then we fix it.

Presumably you think this integration task is so foolproof that it
catches every single possible or imaginable bug--that there is not
one left when Debian is declared stable, because Debian is so cool
that bugs are just not allowed.

It is also so foolproof that it absolutely and totally prevents me from
doing something stupid that brings down my own system.

Or at least, you think that if the system goes down because of some 
mistake I make, it is OK not to help me get it back up again.

Yes: I make mistakes. I have hosed a system or two in my career, and
I was damn glad I was able to fix it.

So do Debian's developers and testers--we are all human, you know.

Justin

