
Re: Results of the meeting in Helsinki about the Vancouver proposal

On Wed, Aug 24, 2005 at 02:13:50PM +0200, Peter 'p2' De Schrijver wrote:
> > * Many packages don't support cross-compiling, and those that do may
> >   have bugs in their makefiles that make cross-compiling either harder
> >   or impossible.
> > * You can't run the test suites of the software you're compiling, at
> >   least not directly.
> > * There's a serious problem with automatically installing
> >   build-dependencies. Dpkg-cross may help here, but there's no
> >   apt-cross (at least not TTBOMK); and implementing that may or may not
> >   be hard (due to the fact that build-dependencies do not contain
> >   information about whether a package is an arch:all package or not).
> scratchbox solves these problems.

As does distcc; that wasn't the point. These are simply issues that come
up with cross-compilers.
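To make the build-dependency point above concrete: a hypothetical
apt-cross would have to look up the Architecture field of each build
dependency itself, since Build-Depends lines carry no arch:all
information. A minimal sketch of that lookup (the stanza is inlined
sample data standing in for real apt/Packages output, and the package
name is invented):

```shell
# Sample Packages-style stanza; in reality this would come from the
# apt cache, not a shell variable.
stanza='Package: somedoc-tool
Architecture: all
Version: 1.0'

# Extract the Architecture field -- the piece of information that
# Build-Depends itself does not provide.
arch=$(printf '%s\n' "$stanza" | sed -n 's/^Architecture: //p')

if [ "$arch" = all ]; then
    echo "arch:all - the native package can be installed as-is"
else
    echo "arch-specific - would need dpkg-cross conversion"
fi
```

This is only the easy half of the problem; an actual apt-cross would
also have to resolve the dependency graph with a mix of native and
cross packages.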

> > * By using a cross-compiler, by definition you use a compiler that is
> >   not the same as the default compiler for your architecture. As such,
> >   your architecture is no longer self-hosting. This may introduce bugs
> >   when people do try to build software for your architecture natively
> >   and find that there are slight and subtle incompatibilities.
> > 
> I have never seen nor heard about such a case. IME this is extremely
> rare (if it happens at all).

Do you want to take the chance of finding out the hard way after having
built 10G (or more) worth of software?

This is not a case of embedded software where you cross-compile
something that ends up on a flash medium the size of which is counted in
megabytes; this is not a case of software which is being checked and
tested immediately after compilation and before deployment. This is a
whole distribution. Subtle bugs in the compiler may go unnoticed for a
fair while if you don't have machines that run that software 24/7. If
you replace build daemons with cross-compiling machines, you lose
machines that _do_ run the software at its bleeding edge 24/7, and thus
lose quite some testing. As it is, it can take weeks to detect and
track down subtle bugs that creep into the toolchain; are you willing
to make that worse by delaying detection even further?

I'm not saying this problem is going to hit us very often. I do say this
is going to hit us at _some_ point in the future; maybe next year, maybe
in five years, maybe later; in maintaining autobuilder machines over the
past four years, I've seen enough weird and unlikely problems become
reality to believe Murphy's law carries _quite_ some weight here. The
important thing to remember is that this is a risk that is real, and
that should be considered _before_ we blindly switch our build daemons
to cross-compiling machines.

I'm not even saying I oppose using cross-compilers; it's just that
the idea of "slow architectures' build daemons are slow, but luckily
there's an easy solution; we can replace them by fast machines that do
cross-compiling" is blatantly incorrect.

> The only way to know if this is a real problem is to try using cross
> compiling and verify against existing native compiled binaries.

That's not much help. We need to test this continuously, not just once,
briefly, and then never again. If you need to compare against
natively built packages, you'll need build daemons anyway; so what's the
point then?

> Unfortunately the verify bit is quite annoying as a simple cmp will
> likely fail because of things like build date, build number, etc
> included in the binary.

Well, that makes it even less helpful.
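The cmp problem can be sketched in a few lines of shell. The file names
and the "Build-Date:" marker here are invented for illustration; real
binaries embed dates, build numbers, and paths in far less convenient
places, which is exactly why a naive byte-for-byte compare is useless:

```shell
# Two "builds" that are functionally identical but differ only in an
# embedded build date (simulated with plain text files).
printf 'payload\nBuild-Date: 2005-08-24\n' > native.bin
printf 'payload\nBuild-Date: 2005-08-25\n' > cross.bin

# A byte-for-byte compare reports a difference purely because of the
# embedded date:
cmp -s native.bin cross.bin && echo identical || echo differ

# Crude workaround: filter out the known metadata before comparing.
grep -v '^Build-Date:' native.bin > native.cmp
grep -v '^Build-Date:' cross.bin  > cross.cmp
cmp -s native.cmp cross.cmp && echo identical || echo differ
```

The catch is in that "known": for real packages you would first have to
find every piece of embedded metadata per binary format, which is most
of the work.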

> For packages which have a testsuite, this testsuite could be used as
> the verification step. 

Sure -- but the number of packages that have a reliable testsuite is
miserably low.

> > Hence the point of trying out distcc in the post to d-d-a; that will fix
> > the first three points here, but not the last one. But it may not be
> > worth the effort; distcc runs cc1 and as on a remote host, but cpp and
> > ld are still being run on a native machine. Depending on the program
> > being compiled, this may take more time than expected.
> Which is why scratchbox is a more interesting solution, as it only runs
> those parts on target which can't be done on the host.

I'm not so sure I agree with you on that one. Speed is just one part of
the story; quality is another. The more you run natively, the more bugs
you'll find.

But I guess I can have a look at scratchbox before I say no to it.

The amount of time between slipping on the peel and landing on the
pavement is precisely one bananosecond
