
Re: Porting gnupg2



Hi,

2018-06-16 12:47 GMT+02:00 Giovanni Mascellani <gio@debian.org>:
> Hi,
>
> Il 16/06/2018 12:02, Manuel A. Fernandez Montecelo ha scritto:
>> As a rough estimate, qemu is about 20x slower than modern
>> hardware.  E.g. if the build takes 5 min in arm64, it takes 1h on one
>> of our buildds; if it takes 2h in arm64 that's two days in riscv64.
>
> Out of curiosity, how does this compare with the SiFive board? Also, why
> do you compare with arm64 in particular?

It depends a lot on the package in question: how much time it spends
getting and installing dependencies, running ./configure, compressing
and creating the package and tarballs, and other single-threaded parts.

I can tell you, for example, the time to build vlc last week, with the
clock set to 1 GHz (the default), building entirely in memory using
the 4 cores, and building all packages (arch:all too, if there are any):

Build Architecture: riscv64
Build Type: full
Build-Space: 1372980
Build-Time: 3521
Distribution: unreleased
Host Architecture: riscv64
Install-Time: 366
Job: /run/vlc/vlc_3.0.3-1+0.riscv64.1.dsc
Machine Architecture: riscv64
Package: vlc
Package-Time: 3935
Source-Version: 3.0.3-1+0.riscv64.1
Space: 1372980
Status: successful
Version: 3.0.3-1+0.riscv64.1
--------------------------------------------------------------------------------
Finished at 2018-06-08T17:15:37Z
Build needed 01:05:35, 1372980k disk space


By comparison, the last builds in arm64 took 14 min (same version,
using parallel=6) and 21 min for 3.0.3-1+b1 (with parallel=3).

In amd64, it took 10 min, with parallel=4.  In ppc64el, 7 min, with parallel=8.
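
As a rough back-of-the-envelope (a sketch in Python; the wall-clock
times are the ones above, rounded, and it deliberately ignores the
different parallel= levels):

    # Slowdown of the SiFive board relative to other arches, using the
    # vlc wall-clock build times quoted above (in minutes).
    times_min = {"riscv64": 65.6, "arm64": 14, "amd64": 10, "ppc64el": 7}
    for arch, t in times_min.items():
        if arch != "riscv64":
            print(f"riscv64 is ~{times_min['riscv64'] / t:.1f}x slower than {arch}")
    # -> ~4.7x vs arm64, ~6.6x vs amd64, ~9.4x vs ppc64el; compare with
    #    the ~20x figure for the qemu-system buildds.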


> Did you mean amd64?

No, I really meant arm64.  Why?  Because I started using those as a
measure and multiplying by 20 to try to guess how long it would take
to build, and it stuck :)
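
In code, the rule of thumb is just this (a hypothetical helper of
mine; the ~20x factor is the estimate from the quoted message, not a
measured constant):

    # Rough estimate of a qemu-based riscv64 buildd's build time,
    # starting from an arm64 build time.
    def estimate_riscv64_qemu_minutes(arm64_minutes, factor=20):
        return arm64_minutes * factor

    print(estimate_riscv64_qemu_minutes(5))    # 100 min, i.e. over an hour
    print(estimate_riscv64_qemu_minutes(120))  # 2400 min (~40 h), close to two days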

arm64 is relatively recent and I know when the buildds were added; and
I think that if they have been upgraded, they were all upgraded at the
same time.  Other arches probably saw gradual upgrades over the years,
and maybe there's a mix of hardware (this happens at least with the
mips ones).

ppc64el could have been another reference, but it's too fast, so the
magnitude of the estimation errors tends to increase too: a small
absolute difference on a fast machine gets multiplied into a large one.

And overall, because I think that the RISC-V chips from SiFive aspire
to be broadly in the same range as arm64 (or mips) for dev boards or
power-constrained systems; ppc64el is too high-end, and x86 is also
not a fit for this segment.


>> Having a single board at this point, I'm not sure if it's
>> possible/feasible to host it as part of the buildd set, I think,
>> because having too much difference between buildds can cause other
>> sets of problems.  (It's not feasible/easy to divert packages to
>> specific buildds).
>
> Not that my opinion matters much, given that I began fiddling with
> riscv64 yesterday and will probably have little time in the future to do
> more, but if real hardware was available, I would consider using it
> (also) as a porterbox, given that human time is much more expensive than
> CPU time.

Each individual current buildd based on qemu-system is slower, but
with 20 of them the situation is quite good, and they're idle most of
the time, except during big transitions or combined KDE uploads, when
they are all building packages for many hours or a day or so.

As Bdale says, these boards are not very good as buildds.  Also, at
the moment it's a bit brittle: for example, sometimes it locks up, and
if one wants drivers or special features one has to compile kernels
from the riscv/sifive repos, etc.

We're using it mostly to test fixes and to get packages' dependencies
unstuck quickly.  If someone needs access, please ask.  I hope to be
able to set it up as a more general porterbox in the future; and
hopefully with time the hardware will be less expensive/scarce and we
can have more of it :-)


Cheers.
-- 
Manuel A. Fernandez Montecelo <manuel.montezelo@gmail.com>

