
Re: multiarch status update



Eduard Bloch <edi@gmx.de> writes:

> #include <hallo.h>
> * Matt Taggart and others [Wed, May 10 2006, 02:00:47AM]:
>
>>   http://wiki.debian.org/multiarch
>
> Looking at all that, I have a simple question: do we need such an
> invasive multiarch integration? There are only a few things I have to
> use which are not available natively for amd64. For those things, we
> could create "installer" packages which integrate that software and
> cook the whole dependency chain from i386 Debian packages, by
> relocating the files and editing the package attributes. This
> workaround can be created (or already exists) far more painlessly
> than introducing a whole multiarch system. I agree with people
> arguing that using i386 versions sometimes means a real speedup, but
> I don't believe that counts for much.
>
> Eduard.

What do you mean by invasive? Multiarch is designed to be
implementable without disrupting the existing archive or mono-arch
systems at any point. Each package can be converted to multiarch by
itself, and once a substantial number of packages have done so, a
multiarch dpkg/apt/aptitude can be used.

There is some disruption to package sources, but a lot of that is
enforcing policy compliance, e.g. splitting binaries and conffiles out
of library packages.


I've written cross-archive (in the multiarch repository on alioth) to
cook amd64 files to be co-installable with i386, and am using that on
a number of cluster systems at work for a biarch 32/64 environment.
Its predecessor (amd64-archive) is still in use by many people for OOo
on amd64.

But cooking the packages is not 100% successful and involves a lot of
diversions and alternatives. Every include file gets diverted, and
every binary in a library package gets an alternative. All cooked
packages depend on their uncooked other-architecture version for the
pre/postinst/rm scripts, forcing both architectures to be installed
even if only the cooked one is needed.
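To give a feel for the amount of rewiring involved, here is a dry-run
sketch of the commands a cooking script has to emit for a single
header and a single library binary. The package and file names
(libfoo-dev-i386, /usr/include/foo.h, /usr/bin/foo-config) are made
up for illustration, and the commands are only printed, not executed:

```shell
#!/bin/sh
# Dry-run sketch: print the dpkg-divert/update-alternatives calls a
# cooking script would have to issue per file (hypothetical names).

divert_header() {
  # A cooked -dev package diverts the native header aside so its own
  # copy can occupy the canonical path.
  echo dpkg-divert --package "$1" --rename \
       --divert "$2.i386" --add "$2"
}

register_binary() {
  # A binary shipped inside a cooked library package becomes one side
  # of an alternative, competing with the native build.
  echo update-alternatives --install "$2" "$(basename "$2")" "$2.i386" 10
}

divert_header   libfoo-dev-i386 /usr/include/foo.h
register_binary libfoo-i386     /usr/bin/foo-config
```

Multiply that by every header and every binary in every cooked
package, and the fragility described above follows directly.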

And still some things won't work unless the multiarch dirs are adopted
by all software that uses dlopen and the like. That includes X
locales, gdk-pixbuf and pango, to start with.

It also means the cooking has to patch maintainer scripts on the fly,
making it fragile with regard to changes in the debs it cooks.


It works for a stable release, but for unstable the constant stream of
changes needed in the cooking script would be very disruptive for
users.

It is also disruptive to building packages. Build-Depends will only
work for the native arch and not for the cooked packages, and building
for the cooked arch will give precooked Depends (I do cook shlibs
files), so the resulting debs are invalid for uploads.
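The shlibs cooking mentioned above amounts to a mechanical rewrite of
the dependency templates: a shlibs line maps a library soname to the
package that satisfies it, and cooking suffixes that package name with
the foreign architecture. A minimal sketch, with a made-up libfoo
entry (the exact sed rule is my illustration, not the cross-archive
code):

```shell
# shlibs format: <library> <soversion> <dependency>
# Suffix the package name so dpkg-shlibdeps resolves against the
# cooked package instead of the native one.
printf 'libfoo 1 libfoo1 (>= 1.2-1)\n' |
  sed -E 's/^([^ ]+ [^ ]+ )([^ ]+)/\1\2-i386/'
# → libfoo 1 libfoo1 (>= 1.2-1) becomes libfoo 1 libfoo1-i386 (>= 1.2-1)
```

A package built against such a cooked shlibs file carries these
suffixed Depends, which is exactly why it cannot be uploaded as-is.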

The one big reason to prefer multiarch over biarch (manual or cooked)
has always been that it preserves Build-Depends and Depends lines in
nearly all cases.

MfG
        Goswin


