the advantages of supporting many architectures.
* Uoti Urpala <email@example.com> [120227 22:02]:
> Bernhard R. Link wrote:
> > While there might be some problems originating from some architecture,
> > most problems you will see and that people claim to be "problems
> > specific to fringe architectures" are actual bugs in the program that
> > you just do not *yet* see on your usual pet architectures. And some
> > more exist because the program is making some very narrow assumptions.
> Yes, such bugs do exist. However, I think the benefit of testing on
> other architectures to uncover such bugs has been exaggerated.
I think the benefit was that we had an amd64 port almost instantly.
I guess without it, it would have been years before most people would
have dared to use a 64 bit userspace. (Considering how long people kept
a 32 bit userspace just to have flash and similar things running easily,
imagine what would have happened if major parts of the free software
stack had still had those problems.)
> Many of
> the problems that end up taking the most time are toolchain issues
> specific to the architecture.
While those are the most annoying ones, in my experience they are the
exception rather than the bulk of the porting work.
> Typical free software projects already
> have multiple known issues that actually affect people even on popular
> architectures; ability to find one more thing that's in principle broken
> is not particularly valuable.
Depends on what free software projects you use. Usually the hard part
about getting them bug-free is finding the bugs, not fixing them.
A bug that hits the maintainer directly and immediately usually does not
even make it into a release.
> > Imagine how long amd64 would have taken, if people had not had years
> > to fix all those 64 bit bugs on alpha first (which never really became
> > a mainstream architecture, and where it was used it was mostly
> > server-only. Who would have guessed that fixing games to run there
> > would have benefits so soon?)
> I'm not sure exactly how long more AMD64 support would have taken
> without Alpha, but I think it would have become supported reasonably
> fast in any case, and likely with substantially less overall effort than
> by fixing issues as they come up through Debian Alpha builds. "First
> upstream developer of a game gets an AMD64 machine and makes the game
> run on it" is just inherently a lot more efficient than "Debian
> maintainer forwards reports about game not working on Alpha".
Without the whole software stack mostly ready to be used with 64 bit,
how many upstreams would run their 64 bit machines with a 64 bit
userspace? And how often do upstreams buy new computers?
And most of the time it is "Debian maintainer forwards patch to
upstream", whether that patch is from a user, a maintainer or a porter.
> If you want to help the development of upstream projects in general,
> there's an obvious thing that could use more resources: make sure that
> the latest upstream code is always available in Debian unstable (or if
> it's likely to cause breakage, at least in experimental), and don't let
> the introduction of new upstream versions in unstable stop around
Having satisfied users also helps upstream. Thus preparing a tested
release that users can actually use helps upstream more than someone
merely looking at their latest versions.
> Getting more feedback about changes quickly is a lot more
> important than testing on unusual architectures.
Those are different eyes. The big advantage of different architectures
is the variety. Bugs in the code that cause the compiler to generate
working programs only by chance, for example, are better caught on some
"unusual" architecture first, where only a few users are affected. If
such a bug is only caught once there is a newer compiler, it affects far
more people. (And if there were no distribution out there fixing those
bugs in upstream software, more upstream projects would break with newer
compilers, making it impossible in the long run to switch to newer
compiler versions.)
Bernhard R. Link