
Building cloud images in sandbox VMs



On 11/10/2017 at 16:15, Marcin Kulisz wrote:
> On 2017-10-11 15:18:10, Thomas Lange wrote:
>>>>>>> On Fri, 29 Sep 2017 07:22:15 +0100, Steve McIntyre <steve@einval.com> said:
>>
>>     > Building
>>     > --------
>>
>>     > any further. We will need to look into tools for making new VMs.
>> I wonder what is meant by "making new VM".
>> Do you mean creating the disk image for the VM, or starting the VM with
>> a tool like virsh?
> 
> If I recall correctly this is about creating ephemeral VMs (possibly from
> a template) on demand, to use them as build machines for cloud images.

I had a look at various possible tools which could make that possible;
here is a short summary.
If people have more details, please share, don't flame.

Background reason: you need root rights for most of the build tools, and
the cduser on the build server is an unprivileged user.
So we want to use sandbox VMs for the builds.

This might help your local builds too (see
https://lists.debian.org/debian-cloud/2017/09/msg00026.html).

Overview of tools for building cloud images in sandbox VMs:

1) Vagrant:
The first tool I looked at was Vagrant. It is basically a scriptable
wrapper around different virtualization technologies, used to build stuff
in VMs. Out of the box it also syncs the local dir to a /vagrant dir in
the VM, so you can work locally and build in the VM.
It would be ideal, but the three FLOSS virtualization providers Vagrant
offers all have limitations.

A) * Vagrant with libvirt provider, on top of qemu: this needs the
libvirt qemu:///system privilege, which is basically root rights,
presumably to create tap devices inside the libvirt network bridge and
access all kinds of block devices. Upstream is not interested in
supporting unprivileged vagrant-libvirt, see
https://github.com/vagrant-libvirt/vagrant-libvirt/issues/272

B) * Vagrant with VirtualBox provider: works well, but you need to bring
in a couple of extra kernel modules on the host, and VirtualBox itself is
in contrib. From what I heard, F-Droid uses Vagrant + VirtualBox in their
build infrastructure.

C) * Vagrant with LXC provider: does not provide enough resource
isolation, since we need to do loopback mounts and you can't do that
without being privileged in some way. Same problem for Docker.

2) Autopkgtest:
I also had a look at how autopkgtest VM builds work. Autopkgtest starts a
pre-configured VM, connects to the virtual serial port of the VM, and
sends inline Python over that channel to mount a shared 9p file system.
Gory details in setup_baseimage() in autopkgtest-virt-qemu.py.
This inline Python is a bit ugly to understand, but it looks like it
might be abstracted enough to run any kind of build.
I wrote to Martin Pitt, the author of autopkgtest, to ask for his opinion.
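
For illustration, here is a minimal sketch of the same idea in Python,
not the autopkgtest code itself: boot a pre-configured image, talk to it
over a serial socket, and mount a 9p share with the build dir. The image
name, mount tag and the fixed sleep are made up for the sketch;
setup_baseimage() does all of this much more carefully.

#!/usr/bin/env python3
# Rough sketch of the autopkgtest-virt-qemu idea: boot a prepared image,
# talk to the guest over a serial socket, and mount a 9p share containing
# the build directory.  Image name, mount tag and the crude sleep are
# assumptions for illustration; the real code reads the console output
# and handles errors properly.
import socket
import subprocess
import time

IMAGE = "debian.qcow2"            # hypothetical pre-configured guest image
SHARED_DIR = "./build"            # host directory to expose to the guest
SERIAL = "/tmp/guest-serial.sock"

qemu = subprocess.Popen([
    "qemu-system-x86_64", "-enable-kvm", "-m", "1024", "-display", "none",
    "-drive", "file=%s,if=virtio,format=qcow2" % IMAGE,
    "-virtfs", "local,path=%s,mount_tag=shared,security_model=none" % SHARED_DIR,
    "-serial", "unix:%s,server,nowait" % SERIAL,
])

time.sleep(30)                    # crude: wait for the guest to boot

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SERIAL)
# Assumes a root shell is already listening on the guest's serial console.
sock.sendall(b"mkdir -p /mnt/shared\n")
sock.sendall(b"mount -t 9p -o trans=virtio shared /mnt/shared\n")
sock.sendall(b"cd /mnt/shared && make ec2-stretch-image.raw\n")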

3) FAI + libguestfs:
There is this nice library called libguestfs, which lets you do any kind
of file system and disk work via a programmable interface, all taking
place in a qemu sandbox. Basically it abstracts the work that autopkgtest
does manually.
It might be interesting to look at whether FAI could use libguestfs.
Instead of doing those loop mounts ourselves, we could programmatically
tell qemu "hey, here is a disk image, mount it and do stuff inside", and
we would not need to be root anymore.

Libguestfs has a very cooperative upstream, is well documented, and has
bindings for many programming languages.
See http://libguestfs.org/guestfs-perl.3.html for how to use this in Perl.
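
The Perl manual page gives the full picture; as a taste, here is a
minimal sketch with the Python bindings (python3-guestfs) of the kind of
disk and file system work mentioned above, all done as an unprivileged
user inside the libguestfs appliance. Image name, size and file content
are just placeholders.

#!/usr/bin/env python3
# Minimal libguestfs sketch: create a scratch disk image, partition it,
# make a file system and write a file into it, all inside the libguestfs
# qemu appliance, as an unprivileged user.  Paths, sizes and the file
# content are placeholders.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.disk_create("test.img", "raw", 1024 * 1024 * 1024)   # 1 GiB scratch image
g.add_drive_opts("test.img", format="raw", readonly=0)
g.launch()                                             # boots the appliance

g.part_disk("/dev/sda", "mbr")
g.mkfs("ext4", "/dev/sda1")
g.mount("/dev/sda1", "/")
g.mkdir_p("/etc")
g.write("/etc/hostname", b"cloud-image\n")

g.umount_all()
g.shutdown()
g.close()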

4) virt-install:
At some point, if we need to start qemu to run a debootstrap wrapper
inside, it might make sense to just start the Debian installer in a fresh
qemu VM, and do the extra configuration either via late-install scripts
or, after the disk image is created, via libguestfs.
It works very well and is quite easy to get started with
(see an example at
https://anonscm.debian.org/cgit/cloud/debian-vm-templates.git/tree/virt-install-generic-qcow2/build.sh
).

I am not sure, however, whether this fits well with the amount of
configurability we want: late-install scripts are hard to debug.
Post-VM-creation customization, on the other hand, can easily be done
with libguestfs.
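
To illustrate that last point, a small sketch with the Python bindings of
what post-creation customization could look like on an installer-produced
image; the image name and the injected file are made up.

#!/usr/bin/env python3
# Sketch of customising an already-installed disk image with libguestfs:
# let it inspect the OS inside, mount its file systems and drop a file in.
# The image name and the injected file are made up.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("ec2-stretch-image.raw", format="raw", readonly=0)
g.launch()

roots = g.inspect_os()                        # find the installed OS
mountpoints = g.inspect_get_mountpoints(roots[0])
for mp in sorted(mountpoints):                # mount / before /boot, etc.
    g.mount(mountpoints[mp], mp)

g.mkdir_p("/etc/cloud/cloud.cfg.d")
g.write("/etc/cloud/cloud.cfg.d/90-local.cfg", b"# injected after install\n")

g.umount_all()
g.shutdown()
g.close()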

5) Homegrown solution:
Since what we basically want is a scriptable VM with a shared build dir
where the build is run, controlled over ssh, I had a look at how
complicated it would be to do that using the libvirt API.
Well, actually this is not complicated at all; I had a 'vmake' proof
of concept working in a few hours:
https://gist.github.com/EmmanuelKasper/2a99f3f67afc3ac1100affbf4630d9d1


Instead of calling

make -f Makefile ec2-stretch-image.raw

you run

vmake Makefile ec2-stretch-image.raw

and you get your VM image built in the build dir.
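
For the record, this is not the exact code from the gist, just a rough
sketch of the shape such a wrapper takes with the libvirt Python bindings
plus ssh: start a pre-defined sandbox domain, sync the build dir in, run
make in the guest, and copy the target back. The domain name, guest
address and ssh setup are assumptions.

#!/usr/bin/env python3
# Rough 'vmake'-style sketch, not the actual gist: start a pre-defined
# libvirt sandbox domain, copy the build directory into the guest, run
# make there over ssh and copy the target back.  Domain name, guest
# address and ssh setup are assumptions.
import subprocess
import sys
import time

import libvirt

DOMAIN = "build-sandbox"              # hypothetical pre-defined domain
GUEST = "builder@192.168.122.10"      # hypothetical guest ssh address

def main():
    makefile, target = sys.argv[1], sys.argv[2]

    conn = libvirt.open("qemu:///session")
    dom = conn.lookupByName(DOMAIN)
    if not dom.isActive():
        dom.create()                  # start the sandbox VM
        time.sleep(30)                # crude: wait for boot and sshd

    # Copy the build directory into the guest and run make inside it.
    subprocess.check_call(["rsync", "-a", ".", GUEST + ":build/"])
    subprocess.check_call(["ssh", GUEST,
                           "cd build && make -f %s %s" % (makefile, target)])
    # Fetch the produced image back into the local build dir.
    subprocess.check_call(["rsync", "-a",
                           "%s:build/%s" % (GUEST, target), "."])

if __name__ == "__main__":
    main()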

My preferred solutions would be 1)B), then 3), as long as it does not
imply even more shell and less high-level language. I can help implement
those solutions.

