
Building cloud images using Debian infrastructure



Hi

I have again done some work on building the cloud images: how to
schedule builds, how to perform them, and how to get the data where we
need it.  The whole thing uses infrastructure Debian provides; the only
exception is the image release step.

The components are:
- salsa.debian.org with GitLab, responsible for storing code, scheduling
  builds, storing logs and integrity details for eternity and storing
  image data for a short amount of time.
- casulana.debian.org with kvm, responsible for providing the
  environment to perform builds and tests.
- Somewhere for storing data with user access.
- Some environments may need additional systems (EC2 can only use
  snapshots of existing disks, so the image release step must run
  there).

All builds are orchestrated by GitLab CI, running on salsa.debian.org.
The whole build definition resides in the same Git repo as the FAI
config and scripts, so as much information as possible is kept in one
place.  Builds are split up into jobs, which run independently but may
depend on one another.  For now I'd like to define the following jobs:

- Build images for all supported environments and dists.

  On each push, regardless of the branch, a subset of these builds is
  performed.  All the other builds are only performed on the scheduled
  runs.

  On casulana we should be able to do a full set of builds in about 10
  minutes.  Right now the compression of the build results dominates
  the required CPU time, as it uses xz -9 to get the output down to a
  size usable for storage.

  Each build runs in a scratch Docker environment via a specially
  configured GitLab Runner.  The builds need access to loop devices,
  which is not allowed by default.  (Yes, I'm aware that neither Docker
  nor GitLab Runner have suitable versions in Debian Stretch.)
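
  As a sketch, the runner side could be configured roughly like this
  (a hypothetical config.toml fragment; the runner name and image are
  made up, only the privileged/devices options address the loop-device
  problem described above):

```toml
# /etc/gitlab-runner/config.toml (illustrative fragment)
[[runners]]
  name = "cloud-image-builder"
  executor = "docker"
  [runners.docker]
    image = "debian:stretch"
    # Building disk images needs loop devices, which plain containers
    # cannot access; one way out is to run the build containers
    # privileged.
    privileged = true
    # A narrower alternative is to pass devices through explicitly:
    # devices = ["/dev/loop-control"]
```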
  
  Some of these builds currently fail (Azure on Buster, GCE on Buster
  and Sid), so their status will be ignored.

  I have a test repo that performs this operation already:
  https://salsa.debian.org/waldi/fai-cloud-images/

  Full build:
  https://salsa.debian.org/waldi/fai-cloud-images/pipelines/3155
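
  The job matrix could look roughly like this in .gitlab-ci.yml (the
  script entry point and job names are made up for illustration; only
  the overall shape -- per-environment jobs, xz -9 compression,
  short-lived artifacts, ignored known failures -- follows the
  proposal):

```yaml
stages:
  - build

.build-template: &build
  stage: build
  script:
    # build the image with FAI, then compress as described above
    - ./bin/build-image "$CLOUD" "$DIST"       # hypothetical wrapper
    - xz -9 "$CLOUD-$DIST.raw"
  artifacts:
    paths:
      - "*.raw.xz"
    expire_in: 1 week    # image data is only stored for a short time

ec2-stretch:
  <<: *build
  variables: { CLOUD: ec2, DIST: stretch }

# builds known to fail stay visible but non-fatal
gce-sid:
  <<: *build
  variables: { CLOUD: gce, DIST: sid }
  allow_failure: true
```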

- Tests for images.

  I'm not sure if scheduled builds should perform detailed tests on
  all platforms, or if this should be restricted to releases and
  explicit triggers.

  At least we should do a minimal test to see whether the system boots
  to userspace (call qemu and read the serial output until systemd
  announces itself; this weeds out broken bootloaders, kernels and
  filesystems).  Even without kvm this takes less than 30 seconds, so
  it is an easy test to perform.

- For all scheduled runs, upload images and metadata to user-accessible
  storage.

- A manual job to release the images.  This triggers a pipeline from a
  different project.  This new pipeline contains the following jobs:
  
  - Upload image to platforms.
  - Test new instance using the images.
  - Publish images.
  - Notify debian-cloud@.
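
  The manual trigger itself might look like this (a hypothetical job;
  the project id, stage name and TRIGGER_TOKEN variable are
  placeholders, the endpoint is GitLab's standard pipeline trigger
  API):

```yaml
release:
  stage: release
  when: manual          # images are only released on explicit request
  script:
    # start the release pipeline in the separate release project
    - curl --request POST
        --form "token=$TRIGGER_TOKEN"
        --form "ref=master"
        "https://salsa.debian.org/api/v4/projects/1234/trigger/pipeline"
```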

While this proposal introduces some complexity, it uses common
components and implements the special parts itself.  It allows users to
set up their own environment with only a small amount of effort:

- Set up GitLab or use an existing instance.
- Set up a GitLab Runner with Docker to run the builds, using some
  documented config options.
- Configure some build options.

Regards,
Bastian

-- 
Our missions are peaceful -- not for conquest.  When we do battle, it
is only because we have no choice.
		-- Kirk, "The Squire of Gothos", stardate 2124.5

