
Re: RFC: Patches for supporting cross-building

Hi Baurzhan,

On Tue, Jun 19, 2018 at 02:28:17AM +0900, Baurzhan Ismagulov wrote:
> > We're seeking a hardware sponsor for running a cross buildd. That's why
> > we don't have any public CI at the moment.
> Does it mean donating hardware or renting a server on some ISP?

Actually acquiring a piece of metal is not the issue; I could even
offer one. The missing piece is hooking something (physical or
virtual) up to power and network, operating it, and ideally helping
maintain the cross buildd.

So what kind of resources are required?

Cross builds skip test suites, and a significant fraction of packages
fail, so we only need a fraction of the build power of a regular
buildd even if we run it for multiple architectures. For just building
packages, an arm soc could do. In my experience, speed depends more on
available RAM and fast storage (i.e. tmpfs). 3GB per concurrent build
seems like a reasonable minimum to me (and one concurrent build should
be ok to start with).
What takes more resources is running dose-builddebcheck to determine
which packages can be built. On a reasonably fast system, dose takes 3
to 10 minutes to check one architecture. With 9 reasonable cross build
targets, that's typically one CPU hour every 6 hours (i.e. every
mirror push). Also, each dose process eats around 1GB of RAM. Unlike
with native builds, the whole archive can suddenly become
bd-uninstallable, so we really need current dose results. A
significant portion of the resource consumption will be dose.
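To make those figures concrete, here is a back-of-the-envelope sketch
using only the numbers above (9 target architectures, 3 to 10 minutes
per check, one run per 6-hourly mirror push, ~1GB RAM per dose
process); the exact timings will of course vary with hardware:

```python
# Rough resource estimate for periodic dose-builddebcheck runs.
# Figures taken from the text above; treat them as assumptions.
TARGET_ARCHS = 9
MINUTES_PER_CHECK = (3, 10)   # observed range per architecture
PUSHES_PER_DAY = 24 // 6      # mirror pushes every 6 hours

lo, hi = (TARGET_ARCHS * m for m in MINUTES_PER_CHECK)
print(f"per mirror push: {lo}-{hi} CPU minutes")   # roughly one CPU hour
print(f"per day: {lo * PUSHES_PER_DAY}-{hi * PUSHES_PER_DAY} CPU minutes")

# Running the checks sequentially keeps peak RAM at ~1 GB;
# running all 9 in parallel would need ~9 GB.
```

Running the checks one after another is the cheap option; parallelism
only trades RAM for latency here.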

The other piece, of course, is proximity to a Debian mirror. In my
experience, a 100GB apt-cacher-ng cache works fairly well. Given how
rapidly installability changes, knowing when the mirror is pushed
helps avoid transient failures.

I don't think a cross buildd should retain binary packages. What should
be retained is (compressed) build logs. Once .buildinfo files work with
multiarch, retaining them might make sense as well. Still, it'll take a
while until those logs exceed 1GB of storage.

Finally, I don't know of any software that one can just set up to do
all of this. What I'm running locally is glued together and kinda
works, because I read every single failing build log. Thus I can sort
out transient failures and retry builds after rerunning dose. Help
with developing and maintaining the tooling is also needed.

How many builds are we talking about?

Presently, around 7000 out of 13000 source packages have satisfiable
cross Build-Depends. In my (biased) sample, around 70% cross build
successfully, but reality will be worse; we can assume that roughly
every other cross build fails. If we manage to find 10 contributors
each fixing one package a day (a significant overestimate), then
performing 30 cross builds a day will suffice for quite some time. I
think even the m68k buildds manage that kind of load (excluding dose).
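The same arithmetic, spelled out with the estimates above (all of
which are rough assumptions from the text, not measurements):

```python
# Back-of-the-envelope build throughput for the cross buildd.
candidates = 7000              # packages with satisfiable cross Build-Depends
expected_failures = candidates // 2   # assume roughly every other build fails
fixes_per_day = 10             # 10 contributors, one fix each (overestimate)
builds_per_day = 30            # proposed buildd throughput

# Each fix needs at least one verification build; 30 builds a day
# leaves headroom for retries after rerunning dose.
days_to_clear_backlog = expected_failures / fixes_per_day
print(f"~{expected_failures} packages expected to fail")
print(f"~{days_to_clear_backlog:.0f} days of fixing at {fixes_per_day} fixes/day")
```

Even under these optimistic assumptions the fixing effort, not the
build capacity, is the bottleneck for roughly a year.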

Having multiple build nodes of different architectures would be
somewhat attractive, because that would allow cross building for amd64
or i386. But maybe we should defer that part.

Hope this helps
