
Re: Bundling .debs

(Adding infrastructures@terraluna.org to Cc: due to relevance.  Top of
thread is in the debian-devel list archives.)
On Mon, Jan 13, 2003 at 04:29:14PM +0100, Martin Schulze wrote:
> Steve Traugott wrote:
> > Hi All,
> > 
> > Does anyone know of any existing tool which bundles a .deb package,
> > and its prerequisite packages, and its related debconf db deltas, into
> > a single archive?  The resulting archive might be a tar.gz, zip, or
> > even another .deb...  
> > 
> > It might be useful to think of this in terms of encapsulating the
> > results of a given 'apt-get install foo' or 'apt-get upgrade', so that
> > the same actions can be "replayed" later on similar machines.  (For
> > reasons why you might want the same actions replayed on other machines
> > rather than whatever apt-get decides to do at some later point in
> > time, see http://www.infrastructures.org/papers/turing/turing.html.
> > Specifically, the technique described in section just now blew
> > up on our machines when woody upgraded -- the available packages all
> > changed.  We're running apt-proxy, but have multiple locations, and
> > the problem of keeping the apt-proxy caches consistent between
> > locations makes the machine builds too fragile, too dependent on
> > environment.)
> I may be wrong, but capturing debconf questions and answers and
> encapsulating the .debs in one large file would only work on machines
> that are configured and set up identically.

You are right.  There are (at least) two cases where this is important:

- A single production machine which needs to be adequately represented
  by a single development or test machine.  If you can't maintain them
  both exactly the same then you can't depend on the test results.  If
  they aren't identical, then what worked in test might blow chunks in
  production.  (Most people don't spend enough time thinking about
  this one.)

- Clusters, web farms, trading floors, call centers, etc. where you
  want multiple production machines to be the same.  (This is the case
  which people usually think of.)

> However, in such a case, you could simply execute apt-get install
> foo on all such machines with the same results.  Hence, a large file
> won't be required.
> However, if you need to install .deb files from date-XXX later when
> packages were replaced by more recent or security fixed versions of
> those packages, the above won't work.  

Exactly.  Unless you execute 'apt-get install' on all machines at the
same time, identical results are not guaranteed.  There needs to be a
way to encapsulate both the action and the data at that instant in
time.

> In that case, why not use snapshot.debian.net for snapshots?

That's a good idea -- you'd want to *always* install from there
though; it should be the only thing in sources.list.  That would be
inconvenient if you needed to pull certain packages from elsewhere,
and it would also place too much dependency on an external party.
Some of my infrastructures have lasted for over 5 years --
will snapshot.debian.net still be there 5 years from now, still
serving 5-year-old packages?  You'd want to cache locally, which then
raises the cache consistency and fragility problems I mentioned above.  
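For what it's worth, going the snapshot route would collapse
sources.list to a single pinned line.  Something like the following --
the path layout here is my guess at snapshot.debian.net's scheme, not
verified:

```
# /etc/apt/sources.list -- everything pinned to one snapshot date
# (path layout is illustrative; check snapshot.debian.net for the real scheme)
deb http://snapshot.debian.net/archive/2003/01/13/debian woody main
```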

The logic brings us right around in a circle back to the apparent need
to encapsulate the results of the first apt-get, the first time we
fetch a package, so that we can 'replay' the apt-get action on similar
machines, hours or years later.

> If that's not an option either, a small script that parses and handles
> the output of apt-get --print-uris -y install foo and fetches the .deb
> files and stores them in one .tar or directory, sounds quite easy and
> sufficient to me.

That's pretty much what I've started doing -- and it does look like a
tar.gz is beginning to make more sense.  Bundling .debs intact inside
a .deb turned out to be impractical -- you can't run 'dpkg -i ...'
from inside the postinst script of a .deb; dpkg complains that there's
already a dpkg running...  Doh!  ;-)
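For the record, here's a sketch of the kind of wrapper I mean.  It's
untested as shown, the function names are mine, and the parsing of the
--print-uris output is naive (it just grabs the quoted URI at the start
of each line), but it gives the flavor:

```shell
#!/bin/sh
# Sketch: bundle the exact .debs that 'apt-get install foo' would fetch
# right now into one tarball, so the same set can be replayed later.

# apt-get --print-uris emits lines like:
#   'http://...foo_1.0_i386.deb' foo_1.0_i386.deb 12345 md5sum
# Extract just the bare URI from each such line.
extract_uris() {
    sed -n "s/^'\([^']*\)'.*/\1/p"
}

bundle_debs() {
    pkg="$1"
    bundle="bundle-$pkg-$(date +%Y%m%d)"
    mkdir -p "$bundle"
    # Fetch each .deb apt-get would install, into the bundle directory.
    apt-get --print-uris -y install "$pkg" | extract_uris |
    while read -r uri; do
        wget -q -P "$bundle" "$uri"
    done
    tar czf "$bundle.tar.gz" "$bundle"
    # Replay later on a similar machine:
    #   tar xzf bundle-foo-YYYYMMDD.tar.gz && dpkg -i bundle-foo-*/*.deb
}
```

Note the replay step uses a plain 'dpkg -i' from the unpacked tarball,
not a nested .deb, precisely to dodge the dpkg-inside-dpkg problem
above.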

Your idea about unpacking the .debs to build the superdeb might be
another way to go about this.


Stephen G. Traugott 
UNIX/Linux Infrastructure Architect, TerraLuna LLC
