Re: [buildd] Etch?
On Fri, Aug 04, 2006 at 12:24:21PM +0200, Roman Zippel wrote:
> While it's possible to avoid these instructions, it would mean possibly
much larger code and thus even slower code.
Indeed. However, I do not feel that the impact will be unbearably large.
So far, I have found only two cases where the documentation describes
different behaviour for a given opcode on ColdFire vs. classic 68k:
* Moving data from FPU registers to memory with FMOVEM will overwrite 10
bytes per register on classic 68k, but only 8 per register on
ColdFire, due to the differences in FPU register length. This is a
problem if you try to access the data in memory after pulling it out
with FMOVEM, but it is not if you use it to store the state of your
registers at the beginning of a function, so that you can restore the
state at the end of the same. I presume that that's what FMOVEM was
intended for anyway, so I do not consider this to be much of a problem.
* Using address register indirect with predecrement or postincrement mode
on the stack pointer (A7) in byte context will decrement resp.
increment the stack pointer by 2 bytes on classic 68k, but by 1
byte on the ColdFire. Both still need to be aligned on two bytes,
however. As a result, this addressing mode should be avoided; but I do
not think that it is used very often.
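To illustrate the two cases above, here is a rough sketch in GNU as
syntax; register choices and mnemonics are illustrative only, not
taken from any actual code:

        | FMOVEM: the memory image per register differs in size
        | (10 bytes on classic 68k vs. 8 on ColdFire), so only
        | matched save/restore pairs are portable; do not poke at
        | the saved buffer directly:
        fmovem.x %fp2-%fp3,-(%sp)   | save at function entry
        | ... function body ...
        fmovem.x (%sp)+,%fp2-%fp3   | restore at function exit

        | Byte-sized push through A7: the predecrement differs:
        move.b  %d0,-(%sp)   | SP -= 2 on classic 68k, SP -= 1 on ColdFire
        | A portable alternative is to adjust SP explicitly first:
        subq.l  #2,%sp
        move.b  %d0,(%sp)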
Other than that, there are a number of opcodes that have been removed
(those relating to the BCD data format, for instance, and some others),
and most of those that remain have lost a number of addressing modes as
well, which I guess you already knew. In some cases, this may indeed
mean that you have to do things with two opcodes instead of one. The
most problematic point where this is true is in PC-relative jumping;
this must be done with two instructions on ColdFire, since you can not
do postindex or preindex addressing modes for the JMP opcode there
(which is what the classic 68k PLT implementation does).
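To make the PLT point concrete, a sketch of the difference (again
illustrative syntax and symbol names, not the actual PLT code):

        | Classic 68k: one memory-indirect, PC-relative jump
        | through the GOT entry:
        jmp     ([got_entry,%pc])
        | ColdFire has no memory-indirect addressing modes, so the
        | same jump needs two instructions and a scratch register:
        move.l  got_entry(%pc),%a0
        jmp     (%a0)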
I agree that the loss of addressing modes and of opcodes may make the
object code larger; I am not convinced, however, that this will result
in problematic differences, especially not if we include optimized C
libraries that can be used by 2.6 kernels on different machines (and the
intent is to do this, very much in the same way that there is now a
libc6-686 package on the i386 architecture).
Also, kernel-mode instructions can obviously use the full potential of
the processor on which they are running.
> Currently we at least assume an 68020 cpu, but for CF support we had
> to go back to the 68000 and then there are still a few instructions
> missing (mostly byte/word operations). To be honest I'm not looking
> forward to this prospect.
First, it's not really true that you must assume a 68000 CPU. It
may be true that the number of available addressing modes per
instruction is closer to the 68000 than it is to the 68020 or the 68040;
but if you have a look at the instructions that exist, you will see that
the ColdFire is far more advanced than the original 68000. That being
said, it is certainly true that hybrid code will not be optimal.
As I see it, however, we have two alternatives to a hybrid architecture:
the first is that we do nothing, keep things as they are, and have the
port face more and more moments like this one as time progresses;
eventually, there will be a point where we will simply have to give up.
Having the port run on ColdFire hardware as well will slow this
evolution down; if not indefinitely, then at the very least by a few
years.
The second alternative would be to drop classic 68k hardware completely
and to focus on ColdFire hardware _only_. I'm sure that's not what you
want.
I agree that having to slow down the average execution speed on the
hardware we currently support is not the best option if that is indeed
what will happen, but I think it's far better than the alternatives.
In other words, I don't like it much more than you do, but I think it's
the best option we have.
Fun will now commence
-- Seven Of Nine, "Ashes to Ashes", stardate 53679.4