
Re: -= PROPOSAL =- Release sarge with amd64



[[Note, I'm a bit punchy right now; I've been at work since Friday (two
power outages and some last-minute schema changes that are acting up).
I believe I'm coherent, but who knows if I'll believe that tomorrow.]]

> > Let me ask that question a different way: what's the sequence of events
> > leading up to the point where dpkg-buildpackage -a works?

On Sat, Jul 17, 2004 at 02:08:09PM +0200, Goswin von Brederlow wrote:
> You can go two ways: build a biarch-capable toolchain, or build two
> separate toolchains that are multiarch capable. Since we have a biarch
> toolchain you might want to start there.

What are the benefits of n-arch (n>1), and how important are they?

The consumers of n-arch are people who have some kind of need for a
mixed system:

   [a] Interim use on a 64 bit system where not everything has been ported
   [b] People upgrading a system from 32 bits after replacing mobo and kernel
   [c] Developers targeting multiple architectures.

[a] and [b] don't need multiple instances of any binaries other than
ld.so, but they do need multiple instances of libs.

[c] needs multiple instances of the binutils/gcc toolchain, in addition
to libraries.  (Also of interest are whatever is being developed, but
the developer is in complete control there.)

Does that sound right?

> The next step is to get the dpkg-dev tools to change their output and
> behaviour according to the -a option used in dpkg-buildpackage.
> Currently the -a option has only a very limited effect. But maybe
> -a<arch> is not the right way to build packages; there are different
> ways.

What about uniarch vs. multiarch?  Multiarch isn't going to let gcc-3.x be
installed where uniarch currently puts it.  So doesn't the gcc toolchain
need some work too?

Frankly, I think I'd leave the dpkg aspects alone till after I had hand
built some multiarch packages.  It's possible to emulate dpkg using basic
tools like ar and vi, and you'll need some test packages to test against.
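To illustrate the point: a minimal .deb really can be assembled by hand
with nothing but ar and tar.  This is only a sketch -- the package name,
version, and contents below are made up for illustration:

```shell
set -e
# staging areas for the control and data members (names are arbitrary)
mkdir -p pkg/control pkg/data/usr/share/doc/multiarch-test
cat > pkg/control/control <<'EOF'
Package: multiarch-test
Version: 0.1
Architecture: amd64
Maintainer: Nobody <nobody@example.org>
Description: hand-rolled test package
EOF
echo hello > pkg/data/usr/share/doc/multiarch-test/README
# the three members of a .deb: format version, control area, file tree
printf '2.0\n' > debian-binary
tar -czf control.tar.gz -C pkg/control .
tar -czf data.tar.gz -C pkg/data .
# a .deb is just an ar archive; member order matters (debian-binary first)
ar rc multiarch-test_0.1_amd64.deb debian-binary control.tar.gz data.tar.gz
ar t multiarch-test_0.1_amd64.deb
```

A package built this way is enough to exercise dpkg's install and
dependency handling without touching dpkg-buildpackage at all.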

But what I'm really concerned about is this:  You've given up on biarch,
because it's too much work for too little gain.  But biarch can coexist
with uniarch (and that's what most of the work is about).

I don't see how multiarch can coexist with uniarch.  Or rather, it seems
like uniarch has to be replaced with multiarch before multiarch can be
of any use.

Am I wrong about that?

If that's where we're going, aren't you going to need quite a bit of
cooperation and work from the project as a whole?  Otherwise, you're just
moving libraries around, which makes the whole bin/ aspect of multiarch
moot (you'll only be able to have one architecture's implementation of
any specific executable at a time).

> An alternative to the -a option would be using "linux32", which turns
> the i686 uname emulation on. Linux64 reverts that. dpkg-buildpackage,
> dpkg-deb, dpkg-gencontrol, ... should probably be made aware of the
> uname and change their output accordingly.

But what will they do for these cases?  Is dpkg just managing dependencies
for the case where more than one architecture's packages are installed?
Is there any need for dual-install beyond important shared libraries?

> That would be easier to achieve since the -a option would have to be
> passed through debian/rules to any dpkg-architecture or dpkg
> --print-architecture calls somehow. The uname emulation is passed on
> to children automatically.

By uname emulation, I'm guessing you mean making sure that a uname
that answers things the right way is at the head of $PATH?
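In other words, something like the following sketch (the directory name
and the hard-coded i686 answer are my assumptions, not anything from the
linux32 tool itself):

```shell
# a fake uname at the head of $PATH that lies about the machine type
mkdir -p /tmp/fakebin
cat > /tmp/fakebin/uname <<'EOF'
#!/bin/sh
# answer i686 for the machine/processor queries, defer the rest
case "$1" in
  -m|-p) echo i686 ;;
  *) exec /bin/uname "$@" ;;
esac
EOF
chmod +x /tmp/fakebin/uname
PATH=/tmp/fakebin:$PATH uname -m   # prints i686 regardless of real arch
```

Child processes inherit $PATH, which is why this kind of emulation is
"passed on to children automatically".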

> So now we have a toolchain able to produce 32 bit and 64 bit binaries
> and a way to tell packages to build for 32bit or 64bit.
> 
> Was that what you asked?

Not really.  You're focusing entirely on the needs of Debian developers,
and I'm trying to understand the system itself.

> The current state of multiarch can probably be described as
> deciding, testing and implementing how this is going to work exactly,
> what tools to modify in what ways to best achieve multiarch
> capabilities.

If the concepts are that up in the air, there's no way that you
can meaningfully assert that it will involve less work than biarch.
That kind of assertion is like claiming that 1+x < 2 when you don't
know what x is.

I'll agree that the part of multiarch which is known involves a relatively
small amount of work, but without a complete design that doesn't really
mean anything.

But until the specifics have been nailed down, it's too easy to seem
sensible saying both A and B, when A contradicts B at a low level.

> We know what we want the debs to look like at the end, but very little
> on how to get them there has been finalized yet.

Maybe you could explain what you want the debs to look like -- in
particular, do you expect that sid is going to have to replace /usr/bin
with /usr/i486-linux/bin and /usr/amd64-linux/bin/?

> > Or do you expect that -a has to work before any other multiarch work can
> > be done?
> 
> You don't need to be able to build packages for multiple architectures
> to work on multiarch. As said before multiarch will change all archs
> to a straightforward, unique lib path (${prefix}/lib/`gcc -dumpmachine`/).

If multiarch only deals with lib paths, how will you be managing
multiple gccs?  Don't you get stuck with the things you claim you're
trying to avoid from biarch?

> Nothing prevents you from porting packages to multiarch on m68k for
> example.

But I need to know why someone would want to before I can understand
what that means.  [If the design were complete, I could look at the
design for those answers, but it's not.]

> The first packages that need to be ported are glibc, binutils and gcc.

Is there any reason these couldn't be ported before dpkg?  I can manually
build .deb files from a directory tree if I have to.  Manually building
raw ELF binaries from sources is quite a bit more work.

Thanks,

-- 
Raul


