
Re: Layers for the package manager



On 12-11-2016 18:18, Nicolas George wrote:
> If I understand correctly how Docker works, its images are big blobs
> that contain the program they are meant to distribute plus all its
> dependencies. Am I mistaken?
>
> If it works like that, that means when the next OpenSSL security issue
> is found, we have to cross our fingers very tightly and hope whoever
> released the image will release an update with a fixed library. With
> what I have in mind, unless the maintainer of the third-party repository
> did something very wrong, its packages will be dynamically linked with
> OpenSSL from the base system, and benefit from the updates immediately.
>
> It makes a big difference: in one case, you have to trust the third
> party to do a good job and to keep doing so in the future; in the other
> case you only have to trust it to do a not-bad job once.
>
> Personally, I would rather unpack a dynamically-linked binary somewhere
> in /opt and install the dependencies myself than use a package system
> with bundled libraries. Or, of course, install from source.
>

The end result is indeed, for practical purposes, a big package with
everything needed to run an application. The expectation, however, is
not that you simply download pre-built blobs, but that you build your
own.

It's easier to understand from an example. Take a look at this
Dockerfile:
https://gist.github.com/rmoehn/1d82f433f517e3002124df52f7a73678 .
Basically it says "start with a minimal debian stable system, install
some packages, then apply some necessary configuration". Docker
downloads the base image (the minimal debian system) and executes the
rest of the commands to create a new image that includes the installed
packages. The result is a self-contained image that can run the
application in question.

[That's a huge simplification, but the end result is roughly as
described. We need not concern ourselves with the internals here.]

[If you don't want to trust the creator of the base image, you can
create your own.]
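
To give a rough idea of the shape of such a Dockerfile (this is not the
gist above, just a minimal sketch; the package name is a placeholder):

    # Start from a minimal Debian stable base image
    FROM debian:stable

    # Install the application and its dependencies from the Debian archive
    RUN apt-get update && \
        apt-get install -y --no-install-recommends some-package && \
        rm -rf /var/lib/apt/lists/*

    # Run the application when the container starts
    CMD ["some-package"]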

I've used that Dockerfile to run anki (which is currently broken in
testing). It works somewhat like a chroot, but with more isolation.
Also, if I wanted to run another program from stable, I could build
another docker image for it, and thanks to the clever way docker works
there would be only one copy of the base system plus two layers, one per
image, each containing only the application in question. That's a big
advantage over chroots or virtual machines.
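
Roughly, assuming two Dockerfiles in separate directories (the image
names and paths here are made up), the sharing can be seen like this:

    # Build two images, each from its own Dockerfile
    docker build -t anki-stable ./anki
    docker build -t other-app-stable ./other-app

    # Both images list the same base layer, which is stored only once
    docker history anki-stable
    docker history other-app-stable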

And yet, even if a base layer is shared, the containers started from
those images are completely isolated from one another and from the host
system. If you want to share data, you need to configure that
explicitly.
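
For example, to let a container see a directory from the host you have
to mount it explicitly (the paths and image name below are just
placeholders):

    # Mount a host directory into the container; without -v the
    # container sees none of the host's files
    docker run --rm -it -v /home/user/anki-data:/data anki-stable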

That does solve the isolation problem: it allows you to run packages
from different repositories simultaneously, with different versions of
libraries if necessary, and to install packages from untrusted sources
(or packages that are not available as .deb's) without messing with
your "real" system.

It does not solve the problem you mention: if there is an update of
OpenSSL, the images will keep using the old version unless you rebuild
them. The process can be automated, but you will at least need to run a
command to rebuild the images, and that can be time-consuming.
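
A rebuild that picks up the current packages from the archive could
look roughly like this (the tag is a placeholder):

    # --pull fetches the latest base image; --no-cache forces the
    # apt-get install step to run again against the updated archive
    docker build --pull --no-cache -t anki-stable .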

Now might be a good time to dive into some of the internals, such as how
the images and layers work:
https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/
That might give you some ideas for your solution. Take a look also at
the pages "AUFS storage driver in practice" and "OverlayFS storage in
practice". While you won't be able to do what you want with docker,
perhaps you can get some ideas. I'd guess you'd need some kind of
layering like the one docker uses, but with the possibility of changing
the bottom layers as well (which is not possible with docker - only the
topmost layer is ever writable).
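
If it helps to see the layering idea on its own, here is a minimal
OverlayFS sketch, independent of docker (directory names are made up,
and it needs root and a reasonably recent kernel):

    # lower/ is the read-only bottom layer, upper/ receives all changes,
    # work/ is required by overlayfs, merged/ shows the combined view
    mkdir lower upper work merged
    sudo mount -t overlay overlay \
        -o lowerdir=lower,upperdir=upper,workdir=work merged
    # Writing into merged/ leaves lower/ untouched; changes land in upper/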


-- 
Eduardo M KALINOWSKI
eduardo@kalinowski.com.br


