
Re: Question about Debian build infrastructure



On Sat, 2019-06-08 at 02:25 +0200, Matthias Klumpp wrote:
> On Thu, 6 Jun 2019 at 20:33, Kyle Edwards
> <kyle.edwards@kitware.com> wrote:
> > 
> > 
> > Hello all,
> > 
> > I have been preparing Ubuntu releases for CMake on our own APT
> > repository for several months now. We did this by preparing our own
> > repository infrastructure - we have a machine that builds packages,
> > and a machine that hosts an Aptly instance and pushes the repository
> > to our web server. However, all of these things had a lot of manual
> > steps to set up and use, and I'm wondering how Debian handles this
> > problem. Here are some of my questions:
> Hello! :-)
> I am not involved with either the ftpmasters team or the wanna-build
> admins, but I have set up infrastructure like this a few times
> already - most recently for the PureOS derivative, and before that
> for the Tanglu derivative, where the goal was explicitly to "use what
> Debian uses" - so I gained quite a bit of insight into the process.
> Some of my information may be out of date by now though, so please
> correct me where that is the case.
> 
> > 
> > 1. What software do the official Debian repositories use? Do they
> > use Aptly or reprepro or something else?
> The main Debian repositories use dak, the Debian Archive Kit.
> You can find more information on it at [1] and get its code on Salsa
> at [2].
> Dak was written specifically for Debian's needs and was in the past
> quite Debian-specific, with lots of hardcoded Debianisms (like suite
> names) and expectations about the host environment. This has changed
> quite a bit in the past few years, and while there are still a bunch
> of Debian-hardcoded parts, dak is generally useful for non-Debian
> repositories as well.
> Its setup is still orders of magnitude more complex than using
> reprepro or Aptly (you will need to write some dedicated scripts for
> your distribution), but for huge package repositories it is, in my
> experience, one of the most performant and painless options. You also
> instantly gain all the features the Debian archive has.
> If your repository is small though, using dak may be overkill - it
> really shines if you have thousands of packages; with just a few,
> reprepro could get the job done more easily and with less manual
> work.
> 
> > 
> > Is there a downloadable OS image which comes with this pre-set up?
> No, unfortunately not.
> 
> > 
> > Does it run on a cron job or does it have some sort of continuous
> > monitoring? (We have ours run on a cron job every 10 minutes.)
> Dak actions are triggered by multiple cron jobs which run different
> actions. There is one that processes incoming uploads roughly every
> 15 minutes; hourly, daily, weekly and monthly jobs handle cleanup and
> statistics generation; and the "dinstall" task runs about 4 times a
> day and publishes new packages in the archive so users can access
> them. The dinstall task is actually composed of many individual
> actions which deal with different aspects of the archive (e.g.
> translations and AppStream metadata), so summarizing it is not that
> easy.
> In order not to have the autobuilder network wait for publication,
> dak can maintain special queues from which the builders fetch
> packages before they are published officially.
> 
> > 
> > 2. According to https://wiki.debian.org/BuilddSetup, there seems to
> > be a distinction between the build broker (wanna-build) and the
> > build workers (buildd). Do either of these roles have their own OS
> > images one can download?
> AFAIK there are no OS images, but I would bet that the buildd
> machines have an Ansible recipe or something similar somewhere, as
> those are continuously updated and refreshed. I would strongly advise
> against using wanna-build for anything - when I tried to use it in
> the past, that attempt turned out to be virtually impossible because
> there was no documentation on it and it heavily relied on structures
> that had grown organically within Debian itself. If you dig into it,
> you will also find some interesting historical trivia, e.g.
> apparently in the past an autobuilder would build a package and then
> send the build log to a developer, who then looked it over, signed it
> and submitted it back to get the build actually accepted into the
> archive.
> So, IMHO wanna-build is really not something that should be used in
> new projects...
> 
> > 
> > 3. I understand that source packages are signed by developers
> > before being sent to the build farm, but what about the binary
> > packages built by the build farm and uploaded to the repository? Do
> > the build farm servers have their own GPG keys?
> Indeed they do - each builder signs its packages with its own key,
> which is valid for binary uploads for that builder's architecture.
> 
> > 
> > Does the repository server recognize
> > these keys?
> Yes, it does - the builders are registered with dak as well.
> 
> > 
> > Thanks in advance, all this info would be helpful for me as I
> > expand our Ubuntu build infrastructure.
> I don't know about the scale of your build farm, but if it handles a
> small-ish number of packages, using Jenkins+reprepro may be all you
> need. For bigger things, I had some success in the past with
> Debile[3], which was building all of Tanglu for a while and was used
> within Debian for builds and tests. Unfortunately that project seems
> to be dead now.
> Launchpad and the Open Build Service[4] may also be very interesting
> options. The Open Build Service comes with a few of its own problems
> (having been designed for RPM in the first place), but in general it
> is very neat and may be exactly what you want for this use case -
> it's definitely a solution to look into, and I know some people use
> it to manage repositories for internal deployments.
> 
> When making Tanglu we threw a lot of Debian code and new code
> together and came up with a Debian-like solution in the end, complete
> with now non-reusable, Tanglu-specific parts. After I learned that a
> lot of people actually want to set up more advanced repository
> management, and I also had to do all of these things again for
> PureOS, I started to develop a piece of software called "Laniakea"[5]
> out of the code that was already running Tanglu.
> Laniakea is (currently) built around dak and takes over a lot of
> archive maintenance and QA tasks, including package building. For
> package building, Laniakea has its own master server and worker
> software, which in turn builds packages in systemd-nspawn containers
> via debspawn[6].
> Laniakea is a complete suite of tools which should one day make
> setting up a new Debian derivative a matter of a few commands, but
> sadly we are not there yet and using it is still a bit experimental -
> there are just too many changes still happening in the project, and
> test coverage is lousy.
> (With the exception of debspawn - that's a handy tool.)
> 
> So, tl;dr, maybe reprepro/Jenkins/Aptly is actually what you need
> here (assuming you don't have hundreds of packages). If you want,
> look at the Open Build Service, and if you feel adventurous try out
> dak & Laniakea.
> 
> Hope this helps! (This reply ended up containing a lot more opinion
> than I originally intended.)
> 
> Cheers,
>     Matthias
> 
> [1]: https://wiki.debian.org/DebianDak
> [2]: https://salsa.debian.org/ftp-team/dak
> [3]: https://github.com/opencollab/debile
> [4]: https://openbuildservice.org/
> [5]: https://github.com/lkorigin/laniakea
> [6]: https://github.com/lkorigin/debspawn
> [7]: Laniakea's web UI can be viewed at https://master.pureos.net/
> and https://software.pureos.net/ to get an impression of what it
> currently does.
> 

Matthias,

Thank you for all the info! We will certainly not be building hundreds
of packages, maybe a dozen at most. We are currently using Aptly with a
cronjob that processes incoming packages every 10 minutes and then
rsyncs the published repository to our web server. Getting it set up
was a lot of manual work, and that's why I wondered if there was some
existing image that could do everything we're doing.
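For anyone curious what that setup boils down to, the 10-minute cron
step is conceptually just a small script along the lines of the sketch
below - the repository name, distribution, paths and rsync target are
placeholders, not our real values:

    #!/usr/bin/env python3
    # Rough sketch of a 10-minute cron step: ingest incoming .deb files
    # into Aptly, refresh the published repo, then rsync the published
    # tree to the web server. All names and paths here are examples.
    import glob
    import os
    import subprocess

    INCOMING = "/srv/incoming"            # where builds get dropped (example)
    REPO = "cmake"                        # aptly local repo name (example)
    DIST = "bionic"                       # published distribution (example)
    PUBLISH_ROOT = os.path.expanduser("~/.aptly/public")
    WEB_TARGET = "webhost:/var/www/apt/"  # rsync destination (example)

    def main():
        debs = glob.glob(os.path.join(INCOMING, "*.deb"))
        if not debs:
            return  # nothing new this run
        # Add the new packages to the local repo, then refresh the
        # already-published view of it.
        subprocess.run(["aptly", "repo", "add", REPO] + debs, check=True)
        subprocess.run(["aptly", "publish", "update", DIST], check=True)
        # Mirror the published tree to the web server.
        subprocess.run(["rsync", "-a", "--delete",
                        PUBLISH_ROOT + "/", WEB_TARGET], check=True)
        for deb in debs:
            os.remove(deb)  # processed, drop from the incoming queue

    if __name__ == "__main__":
        main()

The script itself is trivial; it was everything around it (creating the
repo, the initial publish, distributing and guarding the signing key)
that needed all the manual steps.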

My actual build process is currently a Docker image that fetches the
CMake source and builds an Ubuntu package out of it. At the moment I
run this manually, but at some point I might integrate it into our
Buildbot setup. My end goal (some day) is "push button, get CMake
release" with as few manual steps as possible.

If you're using GPG on the published repository, how does your
repository server handle its signing GPG key? Does someone have to type
in a password every time it wants to publish a package, or is it
unattended, with either an unencrypted private key or a passphrase
file? Does the key live on the same server as the repository, or do you
have a dedicated signing server? Keeping unattended GPG keys secure is
tough...
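
To make the question concrete, the kind of unattended setup I have in
mind looks roughly like the sketch below - the key ID, paths and
passphrase file are placeholders, and it assumes a gpg new enough to
allow loopback pinentry. The passphrase-file-on-disk part is exactly
what makes me uneasy:

    #!/usr/bin/env python3
    # Sketch of unattended repository signing: detach-sign Release as
    # Release.gpg and clearsign it as InRelease, feeding the passphrase
    # from a root-only file via gpg's loopback pinentry. Key ID and
    # paths are placeholders.
    import os
    import subprocess

    KEY_ID = "0xDEADBEEFDEADBEEF"                    # hypothetical key
    PASSPHRASE_FILE = "/etc/apt-signing/passphrase"  # chmod 600, root only
    RELEASE = "/srv/repo/dists/bionic/Release"

    def gpg(*args):
        base = ["gpg", "--batch", "--yes",
                "--pinentry-mode", "loopback",
                "--passphrase-file", PASSPHRASE_FILE,
                "--local-user", KEY_ID]
        subprocess.run(base + list(args), check=True)

    # Detached signature (Release.gpg) for older apt clients.
    gpg("--armor", "--detach-sign",
        "--output", RELEASE + ".gpg", RELEASE)

    # Clearsigned InRelease file for current apt clients.
    gpg("--clearsign",
        "--output", os.path.join(os.path.dirname(RELEASE), "InRelease"),
        RELEASE)

The alternative would be a separate signing host that only ever sees
Release files, but that is more moving parts than I'd like for a dozen
packages.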

Kyle

