
Re: [RFC] managing dependencies with levels



On Mon, May 21, 2001 at 07:33:29PM +1000, Glenn McGrath wrote:
> Thierry Laronde wrote:
> > 
> > 
> > So the Level is computed _at build time_. A package depending on packages
> > that cannot be built (because they depend on packages whose Level cannot
> > be computed) cannot be built either.
> 
> I'm not sure I understand your idea exactly, but if the levels are worked
> out at build time, couldn't that cause inconsistencies later on?
> 
> If the packages around it change, the levels may need to be recalculated;
> this would be a problem if the level is distributed with the package, as
> it wouldn't know about the required changes.

Well, in my mind there is no such problem, since the installation tool
handles the package in exactly the same way as it does now. The only
difference is that the integrity of the system and the dependency tree are
computed when the package is made, not by the installation tool. ("Made"
rather than "built", since there are source packages and binary packages
--- a binary package being a package which contains at least one piece of
software targeted at a specific architecture; a source package already has
a level computed as well.)

The idea is the following. When packages are made, a logical verification is
performed. The system considers a package as a whole (the package together
with all its dependencies).
The core system is special and is level 0.
Then the build system creates the packages which depend on nothing (implied:
which depend only on the core system). These are atoms of level 1.
Then it builds the packages which depend on level 0 and atoms of level 1.
These are level 2.
Etc. The process terminates, since there is a finite number of packages.

The packages that have not been built obviously have problems: either they
depend on a package that is not available, or there is a vicious circle
(they depend on a package that depends on another package which depends on
the first one). That is, no path can be built from the low-level atoms to
reach them.
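The process above can be sketched in a few lines. This is only an
illustration, not the actual tool: the package names and the dictionary
representation of dependencies are hypothetical.

```python
# A minimal sketch of the level computation, assuming dependencies are
# given as a dict mapping each package name to the set of packages it
# depends on.  All package names here are made up for the example.

def compute_levels(deps):
    """Assign a level to every buildable package.

    The core system is level 0.  A package's level is one more than the
    highest level among its dependencies.  Packages left without a level
    either depend on something unavailable or sit in a vicious circle.
    """
    levels = {"core": 0}            # level 0: the core system
    pending = set(deps) - {"core"}
    while pending:
        ready = {p for p in pending
                 if all(d in levels for d in deps[p])}
        if not ready:               # nothing buildable: missing dep or cycle
            break
        for p in ready:
            levels[p] = 1 + max(levels[d] for d in deps[p])
        pending -= ready
    return levels, pending          # pending = the unbuildable packages

example = {
    "core": set(),
    "libfoo": {"core"},             # atom: level 1
    "bar": {"core", "libfoo"},      # level 2
    "baz": {"qux"},                 # vicious circle with qux
    "qux": {"baz"},
}
levels, broken = compute_levels(example)
# levels: {"core": 0, "libfoo": 1, "bar": 2}; broken: {"baz", "qux"}
```

Note that the unbuildable packages fall out of the computation for free:
whatever remains in `pending` when no progress can be made is exactly the
set with a missing dependency or a circular one.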

The installation tool now has a simpler way to achieve its task, and can be
smaller and faster (even a shell script can install using the levels), and
the consistency of the system is guaranteed.

At removal time, you process the other way around: you deinstall the
higher-level packages first, then work down the levels.
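The ordering the installation tool needs then reduces to a sort on the
level each package carries. A minimal sketch, assuming the selection of
packages is given with the levels computed when they were made (the names
are hypothetical):

```python
# Level-driven (de)installation order: install ascending, remove descending.
# `packages` maps each package name to the level it was tagged with.

def install_order(packages):
    """Install lower levels first, so dependencies precede dependents."""
    return sorted(packages, key=lambda p: packages[p])

def remove_order(packages):
    """Deinstall higher levels first, then work down the levels."""
    return sorted(packages, key=lambda p: packages[p], reverse=True)

selection = {"core": 0, "libfoo": 1, "bar": 2}
print(install_order(selection))   # core, then libfoo, then bar
print(remove_order(selection))    # bar, then libfoo, then core
```

This is why even a shell script suffices: a numeric sort on the level field
gives a correct installation order without any dependency resolution at
install time.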

For the consistency of the system, there are other "features".
In this system, a "workstation" asks an authoritative server for the
package servers it is allowed to contact, and for a resource server --- a
server which serves the customized resource files. These servers are
services, i.e. they can run on the very same machine. But in a network, the
administrator can verify the integrity of the whole, since people are not
allowed to download packages from unauthorized sources, and he can
centralize the resources.

Binary packages are tagged with a release version (in fact, the release of
the core system). Machines can download the packages for their release
version, or ask for binary packages built from source for that release. The
"dist-upgrade" is decided by the administrator, and the system instructs a
build daemon to build a special version of a package.

Cheers,
-- 
Thierry LARONDE, Centre de Ressources Informatiques, Archamps - France
http://www.cri74.org
PingOO, serveur de com sur distribution GNU/Linux: http://www.pingoo.org


