Re: Conflict/dependency granularity
Bill Mitchell writes:
>Not so much underestimating their intelligence as making a
>judgement about their inclinations.
>[...] administration, and don't want to know. They just want to use
>the machine as a tool. They don't want to maintain the tool.
Fair enough. But how much are such people going to be installing
random software grabbed off the net? If that software isn't
sufficiently well set up to install somewhere sensible - and
/usr/local/bin is going to be more sensible than /usr/bin on any Unix
I know of - then I think there's a good chance they'll have plenty of
other problems with it anyway.
>What we're doing with our packages is analogous to what's done
>in the DOS/Windoze world with shrinkwrapped applications, but
>complicated by dependencies between our packages and conflicts
>between packages we provide and with packages and applications
>provided by others.
Such conflicts still exist in DOS and Windows - it's just that there's
no standard mechanism for finding out about them.
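By way of contrast, this is roughly how a Debian package declares such relationships in its control file; the package names here are invented for illustration, but Depends, Conflicts and Provides are the actual dpkg fields:

```
Package: foo-utils
Version: 1.2-1
Depends: libbar (>= 2.0)
Conflicts: old-foo-tools
Description: example of declaring inter-package relationships
```

dpkg will refuse to install a package while something it Conflicts with is installed, which is precisely the standard mechanism DOS and Windows lack.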
>The point, though, was not the brokenness of textutils fmt and
>the possible brokenness of cpio mt, it was the complications which
>grow out of using the alternates. You seem to be arguing that
>no alternates should be provided in the distribution unless they
>are alternates on a package-granularity basis, and that we should
>not support use of such alternates. There lies the realm of
>"undefined behavior" [insert Twilight Zone theme].
Eh? The point I was making in particular was that the specific
examples you gave weren't helpful since there was an obvious better
solution in each case, and therefore didn't support your argument very
well. I didn't make any remarks at all about your conclusion outside
the context of these individual cases.
But: I think there is a very strong case to be made for individual
packages being `plug and play' - no messing around looking for some
new version of mt to patch in on top or anything like that. I'm
simply not convinced that the examples you give demonstrate a real
problem here.
>One issue which I didn't discuss is backwards compatibility. Once
>several debian releases with collections of packages are floating
>around, there'll be attempts to install new packages to systems
>with an older installed-package population, and older packages
>to new systems. Virtual packages will help out there, but I
>doubt that it'll be a silver bullet. It won't help, for example,
>when a package predates the virtual package it ought to declare
>a dependency on, or when a new package that depends on a
>recently-appeared virtual package is installed onto a system
>populated from before that virtual package existed. File
>dependencies might be a solution with fewer problems in some of
>these cases.
A good point, but file dependencies could still cause problems. How
do you depend on /usr/lib/gcc-lib/i486-debian-linux/2.6.3/libgcc.a,
for example? That version number is going to change one day
... perhaps likewise the 'i486', and even the 'linux' if there were a
Hurd-based Debian. In this example, a virtual package would win.
OK, it's probably a contrived example; but it makes my point more
easily than spelling it all out. It may be that this turns out not
to be a real problem after all; or perhaps there's some neat solution.
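To make the comparison concrete, here is a sketch of how a virtual package sidesteps the versioned-path problem; the names `gcc-libs' and `my-compiler-addon' are invented for illustration, not existing packages:

```
Package: gcc
Provides: gcc-libs

Package: my-compiler-addon
Depends: gcc-libs
```

The dependent package never mentions /usr/lib/gcc-lib/i486-debian-linux/2.6.3/libgcc.a at all, so the version, architecture and OS components of that path can all change without breaking the dependency.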
>>You could name virtual dependencies after files, perhaps? (This idea
>>needs some development, but it may be the way to go to achieve what
>>Bill wants without someone having to do a lot of work on dpkg.)
>I'm trying to divorce requirements definition from implementation
>concerns here, so that requirements drive implementation, not the
>other way around.
Well, I think the idea I suggest above may provide a way of exploring
this need without having to produce any new code. Experimentation is
a good way of learning more about what the requirements really are,
rather than what we think they are.
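For instance, the experiment could amount to nothing more than a naming convention layered on the existing Provides/Depends machinery; again, these names are invented to illustrate the idea, not an existing scheme:

```
Package: cpio
Provides: bin-mt

Package: tape-scripts
Depends: bin-mt
```

i.e. a virtual package named after the file it stands for, so that any package shipping a working mt would Provide it. That would let us explore file-level dependencies in practice without anyone having to write new dpkg code first.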