On Thu, Mar 28, 2002 at 09:22:20PM -0500, Michael Stone wrote:
> On Thu, Mar 28, 2002 at 09:49:43PM +0100, Marcus Brinkmann wrote:
> > where standardization is little and diversity high. And in general, I have
> > never seen a case where "write once, throw away, write again" is a good
> > programming paradigm.
>
> It's iterative development in the flesh. First, I think you overstate by
> saying "throw away." Even when code is rewritten there is a learning
> process involved and a set of basic techniques and constraints that can
> be carried from prior revisions.

Rewriting code can be good, but I don't think you really learn much by
rewriting thousands of drivers again and again. Besides, most of the time
the rewriting is done by different people, who can't carry over what the
original authors learned.

> The sad truth is that new requirements
> come up and this often requires rewriting. It's not an easy thing to
> come up with a framework that's general enough to deal with the unknown
> and still has good performance.

I think it's possible, if you design it carefully enough. You have to
think about extensibility and flexibility from the start. You also need
to make sure there is a way to change things while staying backwards
compatible. In the Hurd, for example, a server can implement several
interfaces. Those can be totally different interfaces, or just a new and
an old version of the same one. Once nothing uses the old interface
anymore, it can simply be dropped.

> E.g., one set of driver updates in the
> linux kernel was for pci hot plugging. When the kernel was first written
> that hardware just didn't exist, so it would be hard to make a good
> design for it. That sort of thing is going to come up, and that sort of
> driver change is inevitable.

Linus said Linux was never designed. And that's true: in the beginning,
Linux was just a toy project to learn the i386. AFAIK Linux was never
written to allow such updates.

> Another example of change is the zero-copy
> patches.
> Those are sometimes intrusive (not at all portable outside a
> particular kernel), but absolutely critical for performance. The hurd
> guys aren't worrying about that just yet, are they?

Not yet, there are more important things to do. When those are finished,
we are going to care about speed. We will switch to L4, profile all the
servers, etc. I think when that's done, the performance difference
between the Hurd and Linux won't be as big as it is now.

> And sometimes
> there's just more than one way to do something and the only way to see
> how it's going to work and fit together with other pieces and be
> maintainable, stable, and fast over the long term is to try it. The
> willingness of the linux kernel developers to try new ideas and dump
> them if they don't work is a source of strength in their development
> model.

"Willingness to try new ideas"? I think Linus is pretty conservative
about including new things. He has to be, because you can't do otherwise
in a monolithic kernel: you can't just include everything.

> I think it's pure arrogance if not stupendous self delusion for
> the hurd developers to poo-poo that model (calling it a "fairly bad
> kernel" and "a bad joke") when their model frankly has so little to show
> for itself--we've heard a lot of "when our grand design is finally
> finished..." from the hurd folks over the years. I'll be damned if I
> want to see someone criticizing a working project in favor of one that
> can't meet the requirements the first project already fulfills.

It depends on what your requirements are. The Hurd and Linux were
written with different goals. Linux is a pretty good kernel for the
thing it's written for. For example, Linux was never designed to be
portable to other architectures. It was just designed to be a UNIX-like
kernel which takes advantage of the 386. It was designed to be fast and
stable, and yes, they succeeded in that goal. In that way Linux did a
pretty good job. But on portability it doesn't score well.
If I'm right, all the other architectures Linux runs on are forks of
Linux that get merged back from time to time. Code reuse was never a
design goal of Linux; that's why it's so difficult to achieve. If you
look at what Linux is today and what its design goals were, sure, they
really did a very good job. Better than anyone else, I think, and much
better than the Hurd. But they made some tradeoffs. IMHO those tradeoffs
were good short-term, but long-term they are bad. When you have hundreds
of developers, like Linux has now, you want code reuse, you want a
modular design so people can develop independently of each other. You
don't want everybody to have to do the same thing over and over again.
And IMHO Linux doesn't score well in this area, simply because it was
never made to allow this.

Jeroen Dekkers
-- 
Jabber supporter - http://www.jabber.org Jabber ID: jdekkers@jabber.org
Debian GNU supporter - http://www.debian.org http://www.gnu.org
IRC: jeroen@openprojects