
Re: Modern Debian packaging system for DevOps: does it exist?



 ❦ 15 May 2015 08:19 +0100, Neil Williams <codehelp@debian.org> :

>> For some packages, installing the dependencies can take more time than
>> building the package. 
>
> An inevitable cost of building software that has a significant stack of
> dependencies. However, each of those dependencies needs to be cleaned
> up or the build will generate erroneous binaries.
>
> Use a cache if download speeds are a problem (less likely with modern
> network connections).

Not a problem.

> Use an SSD if installation time is a problem.

Not a problem either.

A good timesaver is to use libeatmydata or similar, but installing the
dependencies is still slow — the manual page (man-db) processing step,
for example.

>> This is not a dirty container. 
>
> Sorry, it is dirty. It just is. It's dirty in the worst possible manner
> - stuff directly relating to the builds you care about is going to end
> up out of date or possibly even corrupted by a mis-configured build.
> There's nothing "modern" about debugging issues arising from dirty
> containers, it's completely unnecessary and a false economy.
>
> There are good reasons why all of the existing packaging systems use
> clean environments and either remove all build-dependencies beyond
> build-essential or throw away the dirty system and replace with a
> fresh snapshot, again with only build-essential installed.
>
> Any new build system which deliberately forgoes these lessons deserves
> to be ignored and sent to a permanent home in /dev/null.

Yeah, sure.

The good reason to replace with a fresh snapshot is that, usually, a
package is not rebuilt several times in a row. We just work around
that: either we work locally and validate with pbuilder, or we work
directly in a shell spawned by pbuilder, or we do something else during
the builds.

Docker is a good framework for keeping prepared build
environments per package:

 1. You have your base image "dockerbuild-sid", rebuilt every night
    (either from scratch or using apt-get dist-upgrade from the previous
    iteration, exactly like we do now).

 2. When you need to build the package foo, you create the image
    "dockerbuild-sid-foo" from "dockerbuild-sid" by installing the build
    dependencies.

 3. The real build of the package foo is done in a container based on
    the image "dockerbuild-sid-foo". All changes made in this
    container are discarded. The image "dockerbuild-sid-foo" is left
    pristine.

 4. Once a day, you clean your intermediate images.
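
Steps 2 and 3 above could be sketched concretely; the image and package
names ("dockerbuild-sid", "foo") follow the naming used above, and the
exact apt invocation is an assumption (it requires deb-src entries in the
base image's sources.list):

```dockerfile
# Hypothetical Dockerfile producing "dockerbuild-sid-foo": a throwaway
# layer on top of the nightly base image that adds only foo's
# build-dependencies.
FROM dockerbuild-sid
RUN apt-get update && apt-get build-dep -y foo
```

Step 3 then amounts to something like "docker run --rm
dockerbuild-sid-foo dpkg-buildpackage -us -uc": --rm discards the
container after the build, so the image itself stays pristine. Step 4
would be a periodic "docker rmi" of the intermediate images older than
a day.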

>> Only the dependencies needed for the packages are retrieved. If the
>> build environment for the package doesn't exist, a new environment is
>> created. Old environments are removed after a day. Something like
>> that.
>
> A container where the dependencies remain installed is a dirty
> container. What happens when there is a transition in one and someone
> forgets to update the dependencies to save time?

"Old environments are removed after a day". And also, the problem is
exactly the same with pbuilder and the package cache. Until you update
your base image, you can build in an outdated environment because
pbuilder will find the outdated dependencies in the package cache.
-- 
Watch out for off-by-one errors.
            - The Elements of Programming Style (Kernighan & Plauger)
