
Re: [OT] Interview with Con Kolivas on Linux failures

On Tue, Jul 24, 2007 at 01:46:29PM -0500, Kent West wrote:
> David Brodbeck wrote:


>> To me it always smacked a little of "me-too-ism", too ... the GNU folks 
>> felt Linux wasn't GNU-ish enough, so they had to go write their own 
>> kernel.
> It's my understanding that the Hurd pre-dates Linux; it's just that once 
> Linux came along, the development on it moved at a much faster pace than on 
> the Hurd, and Debian was ported to run on it while the Hurd project 
> languished.
> For those not up on the project, as I understand things...
> Debian is an entire OS that can (at least theoretically) run on top of a 
> number of different kernels. It originally was to run on the GNU Mach 
> kernel as part of the Hurd project, but then Linux came along and outpaced 
> Hurd development, so Linux became the new underlying kernel for mainstream 
> Debian.
> The big difference between Linux and the GNU Mach kernel is that with 
> Linux, many things (hardware drivers, file system drivers, etc) are 
> integrated into the kernel, whereas with a micro-kernel architecture like 
> GNU Mach, the kernel is just a very small core piece of code, and then the 
> drivers, etc are attached as "servers" (sort of like inserting a module 
> into the Linux kernel, but different). These servers are more modular than 
> Linux kernel modules, and can be attached by normal users rather than 
> requiring admin access, because the modularity prevents them from tromping 
> on each other.
> Of course, I probably don't really understand things ....

I think your summary is a pretty accurate, general way of describing it.

The modularity has some positives: a failure in one module will
not bring down the whole system. Of course, this is pretty rare in
Linux these days too, but it is certainly possible. It also provides some
serious security bonuses, because a security failure in one
user-inserted module does not mean that the rest of the system is
compromised the way it would be in the monolithic kernel model. I guess
some of these ideas are working their way into Linux with the
inclusion of user-space drivers.

There are also negatives: there is overhead in the communication
between the modules that might not be there in the monolithic
model. And, I suppose, having the system stay up when all the
modules for the input methods go down is only a minor convenience,
but I really don't know what I'm talking about here.

What would be interesting, from a ck perspective, is what the state
of the scheduler is in Hurd. Is the scheduler a separate module (there
must be some other name for them...) like everything else? If so, can
you then plug in different schedulers for different purposes? ck wants
a responsive, snappy desktop and is obviously willing to sacrifice
other things to achieve that. So he could develop a scheduler for that
purpose. Meanwhile, others may want a scheduler with other ideas of
what is a priority. Maybe they need a system that will prioritise
already-underway tasks over new ones (don't know what this is called)
and could fire up the appropriate scheduler.

A parallel conversation on /. (I know, I know, it's an addiction) was
discussing implementation of different lines for MS again, splitting
between a desktop-user oriented release and a more stable business
release. Who knows what that all means, but it's an intriguing parallel
to the ck situation. He wanted a better desktop while Linux is
pushing for more server-oriented priorities. Maybe Hurd can actually
work out for both parties by simple implementation of different
low-level modules: one set of scheduler/IO/"interactivity" modules for
the desktop versus another set for various server functions, heavy
computing uses, whatever. Even better would be a kernel that could
switch modes on the fly based on what sorts of tasks were running at
the time...

just rambling aloud.

