
Re: "Cross-compiling for Win32" script vs. rosBE? Debian Win32 port alive?

On Fri, Jun 22, 2007 at 01:05:17AM -0500, Drew Scott Daniels wrote:
> Several of my questions were meant to try to determine if your script
> would be useful to ReactOS and/or the debian-win32 porting projects.

If you ask this way: Yes, it might be usable. You'd have to add DLL
support (which I intentionally mostly left off). But then you could
use it for bootstrapping.

Also, you could take it as a template or "first idea" for debian/rules.

> Your script could be useful for building for XP embedded. My interest in
> ReactOS is for an embedded system too (and a few other things).

That sounds interesting, although I personally would stay away from
Windows, *especially* on embedded devices. ;-)

Why do you need XP on those devices?

> > > How does your script for Cross-compiling for Win32 compare with rosBE?
> Well most ReactOS (ros) developers build ros on Linux using mingw. I was
> attempting to refer to their build environment.

What exactly is their Linux MinGW build environment? Does everyone
create his/her own cross-compiling environment?

All win32 cross compiling developers I know do it the same way:

1) Get the cross compiler from your distribution, or use a script
   like mine.

2) Cross compile all the libraries you need, making some ugly fixes
   and/or moving some files around until it somehow works, without
   documenting it.

3) Cross compile your own application and publish it. This step might
   be documented, or at least is very easy.

Maybe the ReactOS developers work differently, but apart from some
superfluous cross-compiling howtos, I never found any real documentation
of how to cross compile a bunch of libs with MinGW.

Even the GnuWin32 project, the most promising one, doesn't have a
transparent patch and build process. In other words: The source
packages are already patched, and there are no build scripts. Each
GnuWin32 maintainer compiles his packages on his own. The knowledge
is hidden in the community.

In contrast, my script makes the whole patch and build process
transparent. Not with words, but with exact, reproducible, easy to
understand shell commands. No "unpack the sources", but "tar xfvz ...".
In particular, the exact environment variables, configure parameters
and make parameters can be found in my script. And they are not
obfuscated by a bunch of macros, variables or if/then constructs.
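To illustrate that style, a single build step might look like the
following. This is a hypothetical sketch, not a verbatim excerpt from
my script; the package version, target triplet and install prefix are
all made up for illustration:

```shell
# Hypothetical build step in the same explicit style.
# Version, target triplet and prefix are assumptions.
PREFIX="$HOME/mingw-cross-env"
TARGET=i386-mingw32msvc

tar xfvz libpng-1.2.18.tar.gz
cd libpng-1.2.18
./configure \
    --host="$TARGET" \
    --prefix="$PREFIX" \
    --disable-shared \
    CC="$TARGET-gcc" \
    AR="$TARGET-ar" \
    RANLIB="$TARGET-ranlib"
make
make install
cd ..
```

Every environment variable and configure parameter is spelled out,
with nothing hidden behind macros or if/then constructs.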

In this respect my script is very similar to Debian and RPM source
packages. The common goal is to have reproducible builds from the
upstream source.

> I now understand that you meant the libraries in your environment are
> created for static linking. I thought one using your environment would
> be forced to create static libraries somehow,

No, that's totally wrong.

In my environment you can create DLLs as well. However, those DLLs will
have the other libraries "inside". This, however, is often what you
want, because this way you have to ship just one DLL instead of ten.

It's also perfectly possible to create an .exe file that includes some
libraries statically, and also links to some DLLs. This is useful if
you need to link e.g. against a proprietary library.

However, it's sometimes hard to force one package to use the static
instead of the dynamic variant of a library. Such mistakes are subtle
and sometimes very hard to find.

So if you want to use a library statically, it's a good idea to just
build the static variant, not the dynamic one. This way, other packages
simply *can't* use the wrong variant. ;-)  And that's the reason I've
done it that way.
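A sketch of what this looks like at link time (compiler name, paths
and library names are all hypothetical):

```shell
# Only libfoo.a is installed under $PREFIX/lib, no libfoo.dll.a,
# so -lfoo can only resolve to the static variant:
i386-mingw32msvc-gcc -o app.exe main.o -L"$PREFIX/lib" -lfoo

# Mixing is still possible: libfoo statically, bar.dll dynamically
# via its import library libbar.dll.a:
i386-mingw32msvc-gcc -o app2.exe main.o -L"$PREFIX/lib" -lfoo -lbar
```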

> and I'm not sure that's
> true. I'm not sure what the --disable-shared configure option does for
> binutils.

This has nothing to do with support for DLLs. The binutils are programs
like ar, as, nm, strip, dlltool, etc.

They run natively on your system and consist of some binaries (ELF
binaries) and some libraries (*.so). But as no one will use their
libraries except themselves, I thought it would be a good idea not
to build their shared libraries.

This doesn't affect your cross compiling environment in any way. It has
nothing to do with the ability to create DLLs or to link against them,
which will always be possible!

> > 3) no need to compile shared libraries (DLL files)
> > 
> >     When you port a big application to win32, you usually want to
> >     present one package for the windows users, that includes everything
> >     they need to run it. The simplest way to achieve that goal is to
> >     statically link all needed libraries. So you just need e.g.
> >     "libcurl.a". You don't need to build "libcurl.dll" and "libcurl.dll.a".
> I maintain several applications where the use of dlls has been
> considered to be a large advantage. Sharing code and reducing image size
> are goals which suggest building dll's...

For a software distribution this is correct. However, if you just want
to ship some unrelated stand-alone applications, things are
different. In that case you don't want the user to install several
packages before he can use yours.

I'm curious: What applications do you ship?

If you are shipping e.g. many binaries that use a common code base,
it might be a good idea to put everything they have in common into
one library. Then build this library as a DLL, and link your binaries
against it. This way, all binaries will be very small, and you have
only one (big) DLL to ship, instead of 10 or 20.
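As a sketch, with hypothetical file names:

```shell
# Build the shared code base as one DLL (plus import library) ...
i386-mingw32msvc-gcc -shared -o common.dll common1.o common2.o \
    -Wl,--out-implib,libcommon.dll.a

# ... and link each small .exe against it:
i386-mingw32msvc-gcc -o tool1.exe tool1.o -L. -lcommon
i386-mingw32msvc-gcc -o tool2.exe tool2.o -L. -lcommon
```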

However, these are just suggestions. It totally depends on the kind
of applications or software distribution you are working on.

> [...]
> >     If you don't, i.e. you install them system-wide to "Windows/System",
> >     you'd have to take care of different DLL versions, which is hardly
> >     possible. Windows doesn't have a suitable package system, so you're
> >     going to have a lot of trouble, the so-called "DLL hell".
> > 
> "DLL hell" isn't the same as it used to be. MS has some papers about it.
> As to dependency installation, well there's a few good ways to deal with
> that.

Is there? Even when you want backwards compatibility with Win98,
WinME, etc.?

Anyway: I'm very interested in the way you manage your packages. It
sounds interesting.

> [...]
> >     http://wiki.njh.eu/Cross_Compiling_for_Win32
> > 
> > > Does this mean the Debian win32 port is alive?
> > 
> > No, but it may help them. I don't belong to Debian-win32, and in my
> > opinion, it's nearly dead. For the above reasons, I also think that's
> > the wrong way if you just want to port an application to win32.
> > 
> Your wiki page states it's the right long term way to do it though.

Well, that's not entirely correct. It's the right long-term way to
use the Debian sources for creating cross-compiled win32 binaries.

However, I'm not sure whether it's a good way to build binary packages
for a "win32" architecture and then convert them via dpkg-cross.

There *could* be a Debian-win32 distribution, but there isn't. Those
compatibility patches *could* be included in the Debian source
packages, but their maintainers refuse them for a good reason: many
kludges and different filenames. They don't want to bloat their
debian/rules files, and they don't want to have different variants of
their packages.
So one has to maintain one's own debian/rules, or at least some patches
for the debian/ directory, that won't ever be included in Debian. One
has to build every package as "native win32", although those packages
aren't usable by anyone (because there's no real Debian-win32
distribution). In the end, one converts all of them via dpkg-cross,
which leaves packages that are only useful for Debian developers.

Such a system is not only hard to create but also hard to maintain!

It would be different if one just needed some simple compatibility
patches to the debian/rules. It would be different if the created
win32-arch binaries were usable for some Windows distribution. It would
be different if Windows had a good package system which handles
dependency conflicts and the like. It would be different if most of the
win32 cross developers were using Debian.

But all that isn't the case. So it's just a lot of extra-work without
any benefits.

Maybe the "port" way (as found in FreeBSD and Gentoo) is more suitable
to the project than the "Debian way".

I think the biggest difference between my "mingw cross env" and
MSYS/Cygwin/Debian is that mine is a "library distribution", not a
"software distribution". It reuses many of the already present tools
(tar, sed, bash, perl, python, ...) and creates only a minimal set
of extra programs itself, namely GCC, binutils and pkg-config.

Well, one could reuse a system-wide installed pkg-config, but that
would mean more environment variables. Currently, I just need to
adjust the PATH. If you install the cross-compiling environment in
/usr, you don't even need that. As I have to specify the correct
path to sdl-config, gdlib-config, ..., I found it better to handle
pkg-config the same way, thus making the script more consistent,
easier to understand, and One Less Potential Subtle Bug[tm].
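In practice that means something like this (the prefix path is a
hypothetical example):

```shell
# The only required setup: put the cross environment first in PATH.
export PATH="$HOME/mingw-cross-env/bin:$PATH"

# Now the cross pkg-config, sdl-config, etc. are found automatically:
pkg-config --cflags --libs libpng
sdl-config --libs
```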

> > The main problem of projects like Debian-win32, MSYS and Cygwin is
> > that they try to solve a big problem with a small community.
> How big is kfreebsd? 3 very active developers are likely what's got 
> it in the shape it's in now. It's being considered by some for Lenny.

3 active developers, all FreeBSD users? Well, I'd be happy to have
just a *second* helping person, no matter what unix system he/she
uses. If there were 3 Debian developers working on a mingw cross
compiling environment, the situation would be totally different. But
obviously, there isn't.

> Cygwin's community is fairly large and was/is commercially backed.

Sorry, Cygwin was a bad example.

> Your script is nice. I like its simplicity.

Me too. ;-)

That's the whole reason I wrote it. It's even simpler than most
of the scripts that just build a cross compiler. And the script is
a big contrast to the Debian way, and therefore has some drawbacks.

> It also looks like it'll work on other distributions.

It does. Mostly. ;-)

On Gentoo and SuSE, it seems to work flawlessly. For FreeBSD I
need some minor changes that I plan to incorporate in the next
release.

> It would be nice if it could
> have a minimal build environment option that'd allow more dll usage (in
> the dynamic link sense), and building dlls.

As already said, DLL usage is always possible.

Building DLLs is somewhat more subtle. Sometimes you want to statically
link some libraries into another DLL.

> > > I never considered using slind for a new architecture like win32...
> > > Is the libc msvcrt? Can the libraries be dynamically linked?
> > 
> > Please repost this question to the debian-win32 list.
> >
> The msvcrt question is specific to your script. debian-win32 may go 
> either way. Your wiki seems to indicate it's msvc based.

Yes, it is.

> I guess that
> makes sense as porting glibc to Windows is probably still a challenge
> (especially since upstream probably won't easily take it).

As far as I know, the advantage isn't big enough. If you want to create
a win32 software distribution, it might be. But if you just want to
cross compile some bigger applications, an msvcrt-based cross compiler
is definitely the way to go.

> Libraries being dynamically linked is possible with mingw. I haven't
> gone through your script in detail so I wasn't sure if this option
> wasn't available. It seems like you disable this in the binutils build?
> [...]

No, I didn't. See above. In fact, I have already created some DLLs
and linked against some DLLs using it.

> You also commented on list about file naming convention problems, and
> other problems. Can you be more specific?

The simplest example: libSDL. On unix, you have:



On win32, you have:



It's not a big difference. But it *is* a difference, in contrast to
all POSIX systems. And that's the annoying fact.

> SFU/Internix got around some
> of these. Also NTFS may have some features that might help... 

That sounds interesting.

> How close do your compilation sections in your script match
> debian/rules for packages already in debian?

They don't. They aren't intended to. They install directly
into the destination directory, without any packaging, DESTDIR=...
or similar. They also aren't explicitly split into configure,
build, and install phases.

In short, they are less flexible than debian/rules, but incredibly
simple and more explicit.

> Presumably it would be easy
> to convert your script to pull source packages from debian instead of
> the upstream sources.

With some minor complications: Yes.

Don't forget that many Debian source packages have a .tar.gz which
itself contains the upstream .tar.gz with some patches. You still
have to treat every Debian source package separately: either by
patching every debian/rules and debian/*.files, or by ignoring the
debian/ directory and building them in the script.
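For example (package name and version are hypothetical):

```shell
# A Debian source package typically consists of three files:
#   foo_1.0.orig.tar.gz   upstream tarball
#   foo_1.0-1.diff.gz     Debian changes, incl. the debian/ directory
#   foo_1.0-1.dsc         description and checksums
# dpkg-source unpacks the upstream tarball and applies the Debian patch:
dpkg-source -x foo_1.0-1.dsc
```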

> Ideally much of this script could be replaced by
> apt-get/aptitude and Debian build scripts.

Yes, that's also right. However, if you want to split the parts
of the script into debian/rules files for each package, you'd
go back to where I started from.

However, if you see a good way to do this, don't hesitate to try
it out. Start with some simple packages like libpng, cURL or SDL.

I'd help you and give some advice. But don't expect me to do all
the work. I tried, I got no help [1], I choked on the amount of
work and useless complexity, and I failed. That's the reason I
wrote that simple script instead. Partly because it's practical,
but also to demonstrate how easy all this could have been...

With my script, I didn't even need to ask for help, I just got it
offered from some people. I think, this has several reasons:

1) The script can compile a lot of libraries (21). In the past,
   I struggled with only 5 or 6.

    (i.e. it is of more use, and thus motivates more people to try it)

2) the script is easier to understand than all the debian/rules
   files one would have to adapt.

    (i.e. the initial burden is reduced, because one doesn't have
          to learn the Debian build process, which motivates more
          people to help me)

3) the script is also usable on non-debian platforms, i.e. other
   Linux distributions and partly BSD.

    (i.e. it addresses a bigger community)

So as a community experiment, the script was very successful
compared to the "Debian way". I initially wanted to do it the
Debian way exactly *because* of the community. I wanted to
benefit from a community, and that the community benefits from
my efforts. With Debian/win32 that wasn't possible. With that
script it is.



[1] That's not entirely true. I got some very good advice from
this list and especially from Ron Lee, the Debian maintainer of
mingw32. Many thanks! However, I didn't get any help in the
practical sense, such as adapting the debian/rules for more
packages, etc. I didn't even find enough people to justify an
addition to dpkg-architecture's ostable.

Volker Grabsch
NotJustHosting GbR
