
Re: [buildd] Etch?

On Wed, Aug 09, 2006 at 11:35:21AM +0200, Roman Zippel wrote:
> Hi,
> On Fri, 4 Aug 2006, Wouter Verhelst wrote:
> > > Last time I checked the cas instruction is also not available, which
> > > makes multithreaded code interesting.
> > 
> > This is an issue I didn't know about yet. It's indeed missing, but I
> > didn't know that these instructions were that important.
> > 
> > Can you give me an example of how it's actually used in multithreading?
> I know it's used in libstdc++, possibly other places too. It's the only 
> way to implement signal-safe primitives without kernel traps.

I see.

> In this case I'm considering adding a special page to the user process 
> (similar to the x86 vsyscall stuff). This would make it possible to 
> support WildFire (locked and dma accesses don't like each other) and 
> similar broken hardware.


I'm not even remotely near hacking libstdc++, but when I am, I'll
probably give this a closer look. Thanks for pointing this out.

> > > A hybrid doesn't automatically fix the toolchain problems and
> > > doesn't really make the toolchain maintenance any easier; if it
> > > worked for CF it would also work for m68k.
> > 
> > Of course not; I don't think I ever implied that.
> > 
> > However, it would solve several other outstanding problems which we have
> > and that have to do with the age of the hardware. This is certainly
> > important.
> What problems would those be? Toolchain problems don't solve themselves, 
> and the build speed doesn't seem to be the biggest problem.
> The problem is that these users are not really visible; could Debian/CF 
> meet the release requirements on its own?
> I'd really prefer to keep this at least initially separate and worry about 
> a possible merge later.

You're not honestly suggesting that we try getting a separate port for
Debian/CF, are you?

Sure, the build speed is not the biggest problem. Not for now, at least.
But that's only because we have bigger problems, not because the build
speed isn't a problem at all.

The original Vancouver proposal included a clause that no port should
require more than two machines to be able to keep up. This was not a
joke; they meant it. That requirement is currently not in force only
because we could not reach an agreement in Helsinki; I expect it to
resurface, given time, if we don't find ourselves a way of increasing
the average speed per buildd.

Supporting hardware that is no longer being developed requires a lot of
resources; not just from Debian/m68k porters, but also from other
people. The security team has said that it doesn't matter to them
whether they need to support 2 ports or 20, as long as the buildd
infrastructure works; and while we can do that, the fact remains that
requiring 26 hours to build mozilla or firefox will keep all of Debian
waiting for us on a firefox security update. That is just one example;
there is other software that takes very long to build on our port but
still needs to be built for security updates.

Supporting a Debian port requires processing time, disk space, and
bandwidth on ftp-master.debian.org. Adding the AMD64 port took a few
years simply because ftp-master could not cope with the added load of
another port on those three levels. In that light, asking for a second
m68k-ish port because "not doing this will make 10-year-old hardware run
even slower!" isn't going to be much of an argument, especially not if
that second port would block some other port that /would/ require a
separate compile target. Examples of such ports do exist, and there are
a number of people waiting at the gates to be allowed in.
Debian/FreeBSD, Debian/armeb, there's even talk of a port to Minix.

Supporting a Debian port requires a lot of work from a lot of people.
Most of that work was put on our shoulders in Vancouver, but not all of
it could be; that would be impossible. The Release Managers all have a
lot of work keeping an eye on all the ports; this includes the d-i
release managers, the stable release managers, and the "regular" release
managers who take care of etch and sid.

Getting a second port just to make Debian work on ColdFire is totally
out of the question. And beside the point, too.

Getting Debian/68k to run on ColdFire _will_ solve many problems. It
will not magically fix the toolchain, that much is clear; however, it
will get us hardware that is much beefier than what we have now, and
this is much needed:

* We had a buildd park of 12 machines, last I checked; if more than a
  few of those go down, we start lagging behind again. Due to the age of
  many of our machines, this happens more often than is the case for,
  say, amd64. Getting newer and more powerful hardware means we will not
  have broken hardware as often, and that we may have more surplus
  capacity than we do now. While we can also just add new buildd
  machines now, that isn't ideal, since adding a new buildd host
  increases the load on buildd maintainers fairly significantly; the
  cost/benefit ratio is much better on ColdFire.
* If major updates are in order for large sets of packages, wanna-build
  will queue them in semi-random order, which isn't very efficient.
  Since the core libraries take some time to build, the mess remains for
  a while, and it takes quite a bit of work to fix. If those core
  libraries are built faster, the mess is smaller, and the number of
  failed packages and packages in dep-wait will be much, _much_ smaller.
* More importantly, currently half of our buildd park are Macintoshes
  that will not work with 2.6 kernels. 2.2 and 2.4 are scheduled to be
  removed from unstable, a move which will likely occur this month,
  maybe even this week. It will not take very long before glibc drops
  support for 2.2 and 2.4 kernels, mainly because the glibc maintainers
  were amongst those asking for this change.
  While we can theoretically keep running our Macs on 2.2 kernels and
  have them build packages, they will fail an ever-increasing number of
  things, much like the xargs issue that we currently see on a fair
  number of builds on 2.2 kernels. If our Macs cannot run 2.6, we will
  need to find replacements for them somehow. We do not have the surplus
  buildd power to simply forget about the 2.2-running machines and
  continue with only those that do run 2.6.
* Having an architecture that is actively being used and sold means that
  corporate sponsorship, when required, is a possibility. The same is
  hardly true for the current m68k port.
  I'm not saying that we absolutely do need corporate sponsorship, but
  there are times when such sponsorship can help a port tremendously.

In addition, this work that I'm doing on the toolchain in support of
ColdFire is teaching me a lot about the toolchain itself. Just that,
ignoring all of the above, can only be a good thing, right? ;-)

That being said, the ColdFire stuff is not currently being merged into
Debian/68k proper; we are (well, I am) doing this in a subversion
repository on alioth.debian.org, until the hybrid stuff is mature
enough to try to compile the archive with it. Only then will we think
of modifying whatever is left of Debian/68k.

So, yeah, it'll be at least a few years before Debian/68k will run on
ColdFire hardware. Certainly etch (which is planned for December) isn't
going to.

> > Besides, there are actually amiga upgrades being sold based on ColdFire
> > processors. See http://elbox.com/faq_dragon.html and
> > http://elbox.com/news_04_12_17.html. That being said, I don't know how
> > popular those are, or indeed even if they actually sold any of those
> > already.
> Hmm, the oldest news is about one year old and I couldn't find anything in 
> the online shop. Even for their PCI boards we currently have no kernel 
> support.

Okay, so that's a dead end then, probably.

Fun will now commence
  -- Seven Of Nine, "Ashes to Ashes", stardate 53679.4
