Re: Re: f-cpu and Debian
On Wed, 9 Oct 2002 22:48, email@example.com wrote:
> my point is: the F-CPU ISA (Instruction Set Architecture)
> can behave more or less like a MIPS or 680x0 (to some
> extent) but, because it would then be underused, that
> would be overkill. I know that one part of the issue is
> with GCC (and it will probably be a long time before
> F-CPU is correctly supported) but the other part is
> in the algorithms.
The idea is that when porting to a new CPU you first want to get it to the
stage of booting and compiling its own code. After that you can worry about
optimising for best performance.
> The network example is a good one:
> F-CPU can do dual-endian data reads and writes, and
> it's probably going to confuse both the compiler
> and the code because the machine endianness can be changed
> for each instruction. OK, F-CPU is relatively fast
> but it's a big waste anyway...
What is the point of dual-endian operation? Sure, it can save a few ntohl()
type operations, but I've never heard of those being a bottleneck.
> > I think that for the majority of software (and all the really
> >important stuff) the endianness issues and architecture dependencies have
> >been sorted out.
> It will probably depend on GCC, too, and how well it can
> understand the ISA. Now the problem is that neither C nor
> GCC is made for handling dual-endian data. Maybe a new data
> qualifier must be created, such as in
> unsigned long big_endian IP_address;
Sure, something like that could be done. But I doubt that anyone would want
to do that apart from somewhere in libc and somewhere in the routing/firewall
code in the kernel.
> >Why not __fcpu__?
> well, it is not determined for good yet, so
> we can still discuss this. If a proposal
> is logical and realistic, it's a good candidate :-)
I believe that the standard convention is to have system-defined macros start
with "__"; if you follow that convention then you should not have any
problems with conflicts.
> > Most programs don't need such things, there's a lot
> > of work to get it basically booting, we can go back and
> >add F-CPU specific things later.
> yup. However, one of the challenges is that F-CPU must
> be fast from the start. People would be disappointed if
> it doesn't seem competitive enough. One way to be fast
> is to superpipeline the core (and it's done). However,
> if the software doesn't exploit the features, it will be
> rather disappointing, and that could lead some people to
> think that F-CPU is not efficient...
So hand-code some assembler routines for ssh and gpg that take advantage of
the SIMD operations. Make a gpg that is faster than on a P4 and it'll knock
their socks off!
> > None of the programs that I am currently working on do it.
> >64bit and SIMD should be good for SSL, GPG, bzip2, RAID-5, ray tracing,
> > and DVD/AVI playing.
> I'm not sure about ray-tracing and bzip2 (there is no
> scatter/gather instruction yet) but the other applications
> will LOVE it. Just think of it as a "super-SSE2" :-)
> We can add: playing chess or network routing.
> A cool "router" could even be built with a network of
> F-CPUs (around a crossbar) and with dedicated Ethernet
> hardware on each chip. To add more ports and power,
> add more F-CPUs. Same goes for other applications.
Sounds like you've taken some ideas from the Transputer.
> >What do you mean here by "a simple line in a configuration file"? Are you
> >referring to configuration for a CPU emulator?
> (ooops, sorry)
> I meant: in the VHDL source code.
> in fact, this code is pre-processed by m4/make,
> and register widths are computed and adjusted across the
> whole F-CPU source tree (currently comprising C, bash
> and VHDL code). Change one line in the m4 definition file,
> type "make", and it builds a whole new complete CPU
> with registers as wide as you want.
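The mechanism described above could look roughly like this (a hypothetical sketch, not the actual F-CPU definition file):

```m4
dnl Hypothetical m4 definition file: one width constant that the
dnl whole tree (VHDL, C, bash) is generated from.
define(`FCPU_WIDTH', `64')dnl
dnl A VHDL template then refers to it everywhere, e.g.:
dnl   signal r0 : std_ulogic_vector(eval(FCPU_WIDTH - 1) downto 0);
dnl Change the 64 to 128, type "make", and m4 re-expands every
dnl template so the generated sources describe a 128-bit CPU.
```

The point is that the register width is a build-time parameter of the source tree, not something baked into thousands of declarations by hand.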
> I guess so. But sticking to "compatibility mode"
> is a bit sad because it doesn't force people to think
> more about the platform, the coding practices...
> Think about it for a while: x86 never really succeeded in
> establishing MMX and most of the other extensions. One
> reason is that they are hard to use, and the newer
> extensions used different opcodes. But in the end, C and
> ia32 remain.
The thing is that when you write an x86 program you will have someone wanting
to run it on a 386 and you have to support that. Optimising code for 6
generations of CPU from Intel and 4 generations from AMD is a serious amount
of work. Initially you'll only have one main variant of F-CPU, which will be
the only target of optimisation, and that will reduce these problems.
Also, it has to be noted that just as every capability of the 386 is being
fully used, every capability of the first F-CPU will also be fully used...
> OTOH if F-CPU
> tries to force people to rewrite all the code, it will
> never succeed. It is friendly with most data types for
> this reason (among others). But if people don't know that
> there are orthogonal features that can help their code
> behave better, they will not use them and some guys
> will want to remove these features "because it's not used"...
> And the performance will get back to what you'd expect from
> a classical MIPS.
All that's needed is for a few important things such as multimedia and
encryption to be optimised for F-CPU, and that'll keep people happy.
> - and to make it clearer : the F-CPU sources depend a lot
> on the availability of a VHDL compiler/simulator.
> Two are currently working and available "for free" on the Net.
Currently we have "savant", "srecord", and "tyvis" in Debian. Do you know of
anything else that is GPLed and worth including?
> now the whole dirty work remains.
> - finishing ALL the execution units
> - defining the memory architecture
> - defining the protection and VM supports
> - implementing them
> - testing them
> you get the picture. However, when it works,
> it will be close to flawless.
OK. I look forward to it! I've been following the project for a few years,
but unfortunately I don't have the skills needed to seriously contribute.
> >Getting some FCPU related programs in Debian will raise the visibility of
> > your project a lot!
> sure. But F-CPU is only at 30% after 4 years of
> trol^H^H^H^Hdiscussions. The "core" will roughly work
> for simple programs when it is 60 or 70% implemented.
I understand. After it gets to that 60% stage more people who have the skills
(or the desire to learn them) will probably be interested in joining.
> >I think that the results for all this can already be found in the build
> > daemon results:
> >The stats page http://buildd.debian.org/stats/ shows the statistics of how
> >many packages build on each platform (over 90% for every platform), and
> >allows querying which packages don't build for each platform.
> >So I think that almost 90% of the packages will have a chance of building
> > for FCPU without any modifications being required.
> \o/ yeah ! \o/
Of course software development being what it is, there's a good chance that
some new bugs will be discovered in some programs (that always happens for a
new port). But it's likely that most of them won't be too difficult to fix.
Many Debian developers are very interested in your project, so finding people
to fix the bugs shouldn't be too difficult.
http://www.coker.com.au/selinux/ My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/ Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/ My home page