
Build process for cloud images



Hi all,

We all agreed that cloud images have to be built on Debian infrastructure,
and we are (still) working towards that. Below I'll try to explain how I see
the whole process, which parts I've started working on and where I am with
them. Please also forgive me that this hasn't progressed as some (including
me) might have expected, but some work- and family-related issues have come
up and I'm in the process of changing jobs etc., so there is some chaos in my
life atm.

Cutting to the point:
1. We need to build images on Casulana (the new build server) with FAI inside
  a chroot, Docker container or VM, as FAI requires root and I (and I'm pretty
  sure Steve and DSA will back me up on this) don't want to run it on the
  host.

  I've tested FAI with all of the above and have to say that the easiest way
  of running it is going to be a VM, due to the need for (re)mounting images
  etc. Access to /dev is required, and even elevated permissions in Docker are
  not enough for that. I think I was close to having the build process running
  in Docker, but I didn't have enough time to fix all the issues, and time is
  flying; we need something which will work before the Stretch release, so I
  tried building a GCE image with FAI and libvirt. The build was successful,
  but... (more about that later). There is a rough sketch of the build
  invocation after this list.

  This point is probably the only part of the build process that is common to
  all builds (meaning all cloud providers).
2. Once we have #1 completed, we need to transfer/register the image from
  Casulana to the specific cloud provider. Different providers register/import
  images differently; I have access only to AWS and GCE, so I'm going to write
  only about those two.
  To complete this step we need an intermediary machine in each cloud. The
  reason is that without one, the security implications will probably require
  an additional account (per cloud provider) just for the sole purpose of
  building images (which IMO is overkill).
  Without this intermediary machine, Casulana would have to be able to create,
  destroy and register instances and images across all cloud providers, which
  is less than acceptable to me.
  Also, having build logs and all needed components under our direct control
  and in one place is something we need in order to be sure nobody tampered
  with the build process, and to be able to easily expose it via
  get.debian.org.
  Even if this (build process) looks a bit overprotective, it would allow
  people to feel more confident that we have the whole process under Debian
  control.
  With an intermediary, Casulana would only hold an SSH key for accessing it.
  On AWS, for example, this would allow Casulana to transfer the build
  artifact to the intermediary, which would mount an EBS volume and unbundle
  the whole image onto it, then trigger awscli to bundle the AMI and register
  it (a rough sketch of that registration step follows this list). The
  intermediary would have an IAM role assigned, allowing it to perform only
  the needed tasks, including spinning up and destroying specifically tagged
  instances for image testing.

  A similar process would apply to GCE, and this one I've already tested. It
  differs slightly from AWS, as on GCE there is an option to create an image
  directly from a raw image, and I used Cloud Storage for storing the
  FAI-built image. After a successful build, the image is bundled into a
  tarball, transferred to Cloud Storage, and then imported into GCE with the
  gcloud command (also sketched below). This is a very simple procedure and
  works reliably as long as all permissions and access credentials match.
3. Testing images: based on what we agreed at the cloud sprint, I wrote a
  simple script (very ugly and tailored to my setup at this point in time)
  which spins up a new instance of the image registered in #2 and executes a
  list of tasks[1]. The outcome of those tasks is available on stdout, but
  the plan is to have it machine readable (so probably JSON output) and then
  probably a simple web page with the build and test status. The rough shape
  of such a test run is sketched after this list as well.

I know that the above 3 phases could be streamlined to fit a bit better with
each cloud provider, but settling on a common denominator (building with FAI
on Casulana) would allow more people to contribute and work on the project.
I also see this split as a way of abstracting the build process: separating
the build from the registration (the cloud-dependent bit) allows people
without cloud-specific knowledge to do most of the work, hence the need for
the intermediary server as well.

As for image-building progress, what I managed to do is build a Stretch image
with FAI and import it into GCE; the problem is it didn't boot, and I really
had no time over the last month to sit down and sort it out.
The only difference between what I did and what Jimmy did during the sprint
is that he was using Jessie and I used Stretch (at least that's what I
think), but atm I'm really short of time to investigate.
I should have a bit more time to try to fix it after next weekend, but if
somebody sorts it out before then, I'm more than happy to take on the work
on the Casulana and intermediary server side, which is also needed as we
want this to work for all the clouds we want to support.

1. https://wiki.debian.org/Cloud/Testing

PS.
If this email is a bit incoherent in places, I'm sorry; that's a reflection
of my life atm :-)
-- 

|_|0|_|                                                  |
|_|_|0|                  "Panta rei"                     |
|0|0|0|             -------- kuLa --------               |

gpg --keyserver pgp.mit.edu --recv-keys 0x686930DD58C338B3
3DF1  A4DF  C732  4688  38BC  F121  6869  30DD  58C3  38B3
