Re: Confusion about where to go in Hurd
-----BEGIN PGP SIGNED MESSAGE-----
Very long message :-). Anyway, I'll reply to the best of my ability.
First, I think it's very important to clarify the terms Hurd and Mach:
Mach - the underlying microkernel used by all versions of the Hurd.
Hurd - the userland translators which plug into the kernel to provide
userland services, and which are (in theory) microkernel-independent.
HurdNG - the project of porting the Hurd translators to a microkernel
other than Mach, such as L4.
In addition, I believe this email addresses quite a few things about the
Hurd, so I've CC'ed it to gnu-system-discuss and bugs-hurd.
On Thu, 26 Jul 2007, Anders Breindahl wrote:
I'll jump into this, as this is a major source of frustration and
confusion for me. And seemingly also for others. Perhaps this could
Firstly, I'm replying to mbanck's post. Then I'm abusing the thread,
because it bears the right subject for my post anyhow.
I hope you'll bear with me, brittle reader, since I at least felt that I
was making a point when I wrote this. I made some paragraphs for easing
the ingestion of this mail.
On 200707252235, Michael Banck wrote:
Debian will continue the Hurd/Mach port until a viable alternative
exists, which will likely take a few years, if there will ever be one.
At least it exists, although it is barely moving and horribly behind in
support and features. But its existence keeps hope alive.
Mach itself is actually pretty much feature-complete and fairly stable. An
older version (Mach 3 vs. Mach 4) provides the kernel of Mac OS X. The main
reason parts of the Hurd are slow is that code in the translators
(such as the pager code in ext2fs) hasn't been optimized. I've only had
one true kernel panic in several months of running the Hurd in a VM (on an
experimental kernel). When the system seems to freeze completely, it
usually is a deadlock or translator issue (although I haven't looked into
it THAT much).
As nobody knows what Hurd/NG will look like, a transition plan cannot be
made at this point, and whether or not a Hurd/Mach installation can be
migrated over is unknown.
How unknown or unsure is it whether current development of the Hurd will
be usable on a potential Hurdng?
- Will current interfaces suffice in e.g. translators or device
drivers, or will rewrites become necessary (How well did hardware work
in Marcus' L4-attempts? And would that be forewarning for potential
problems in other next generation microkernels?)
- What about Hurd-specific glibc-specials? Are they of
Mach-workarounding nature, or more of a non-Linux-dependency'ish
Obviously, the work on Mach itself won't be reusable. That includes the
device drivers (and glue code), I suppose.
As for the translators, provided the new kernel implements the message
interface (and the MIG compiler is ported to target it), they should be
pretty much a drop-in replacement, although I believe most translators do
have some Mach-specific code (or use Mach IPC directly, but that by itself
wouldn't be difficult to fix).
Device drivers, on the other hand, are a different kettle of fish. Mach's
drivers at the moment are a port of the Linux 2.0 driver code (with ~1000
lines of glue code). That being said, device drivers are handled by the
microkernel under the Hurd. The best way to explain this is that if the
Hurd ran on both L4 and Mach, and L4 supported device A while Mach
supported both A and B, the translators would be able to use whatever the
underlying kernel supported.
Do you plan to help the Hurd development? Unless you are a microkernel
researcher, you can ignore Hurd/NG for the time being.
I have come to understand that key individuals within the Hurd community
are less than satisfied with patching up Mach. And with good reason. But
also that there is a despair of heirs to Mach. Nothing adequate with a
usable license seems available, right?
The problem with Mach is that it is a first-generation microkernel. Mach 3
had problems with translators taking a very long time due to performance
and IPC issues, but Mach 4 (which was the basis of GNU Mach) worked around
the issue by providing co-location (a mechanism where translators run
directly in kernel space) and shuffling, which, while not completely
solving the problem, reduce it considerably, to the point where it's more
or less a non-issue in current releases.
The true solution to the problem would be making the kernel even smaller;
L4 defines 7 system calls, while Mach 3 defines around 130 (I think). By
putting everything in userland (which is what L4 does), the overhead can
be managed easily. Wikipedia has a very good article on this.
While the L4 effort has been dropped, Coyotos seemingly is undergoing heavy
development. It has been mentioned as a viable choice for
next-generation Hurd microkernel before. How are current opinions on
I can't tell, but if I am not mistaken, it has been (is?) a design goal
of the Hurd to be microkernel-independent. In these
microkernel-dissatisfaction times, would it be the right time to
consider refactoring towards microkernel-independence, and let Mach live
on as a testbed? (In OO-lingo, a Standard implementation?)
The Hurd itself SHOULD be microkernel-independent. I know some progress was
made getting the Hurd working on L4, but I never looked to see just how far
that got. It would be quite nice if we could demonstrate this by getting
the Hurd ported to another microkernel (possibly Apple's XNU if we wanted
to test on another Mach-based kernel, or L4, or Coyotos).
I doubt Mach itself will die anytime soon. It's stable, (more or less)
feature-complete, and it exists.
By the way, from the name, a microkernel seems to be something easy to
write. Could somebody please outline to me (and others with my knowledge
of the field) why this isn't the case? I suspect that it isn't so much
about actually writing the thing, but more about knowing what it is that
one wants to write?
The problem stems from the fact that very few programmers have the
low-level knowledge to write a program that can run from boot and provide a
real userland. I have experience writing assembly for embedded processors,
and writing a kernel is not an easy task (I've never tried it; the most I
did was write single-use programs for use on such boards).
> -- end of reply, start of thread-abuse -- >
The Hurd's place in the future
In recent news, Linus has made a move toward a userspace driver
environment, which someone at coyotos-dev commented was to be ``hitting
the monolithic complexity wall''. In other words, they've come to (or
are at the verge of) preferring maintainability over speed.
(Disregarding Linus' own possible motives, here).
This will -- if it is allowed to evolve -- cause Linux to slow down, as
more and more speed will get lost in Linux' reportedly slow IPC.
Which is where the Hurd comes in, being slow already, but at least
having chosen the prudent (in the ``time has shown'' kind of way) method
of implementing and maintaining drivers and other kernel features. (I'm
guessing that) from this point on, multiprocessing and decoupled
hardware will punish monolithic kernels and lessen the punishment on
decoupled kernels and software. And wishes for maintainability,
scalability and stability will gain momentum, when speed is plentiful.
Which will favor the Hurd.
Having a userspace driver environment and being a microkernel with
translators are two VERY different things (Windows itself has provided
userland drivers for years, although it's a hybrid kernel at best). In
addition, Linus would eat crow before turning Linux into a microkernel.
Coyotos is gunning for being a capability-based system. That is an
interesting take on systems design, and -- seemingly as always --
fine-grainedness of capabilities causes hair loss. I won't venture
deeply into this, since I haven't followed it closely, but one
discussion on coyotos-dev that I did get something out of was such
technicalities as how to toss a capability object around. I say
technicality, since it (from my novice eyesight) seems irrelevant to the
system design. My first thought was to implement the quickest way of
handling it -- typedef'ing a pointer type -- and simply write the
improvement of that on the TODO.
(But those guys and their discussions are a bit too hardcore for me to
fully grasp, so I apologize for any misunderstandings on my part).
I don't know anything about Coyotos, so I won't comment on this.
Zooming further out
Picking up on my ``when speed is plentiful'' remark, I would assume that
the world will (very) soon see a lot of parallel computing.
Locking of resources, moving of tasks between processors and more exotic
things like transparently networked IPC will become more widespread, not
only in the Hurd. And consequently computers ought to get some hardware
support for optimizing these tasks. Just like wireless technologies
prompt for hardware acceleration of cryptographic routines, since it
would otherwise be somewhat of a drawback to take the step to wireless.
This should also lessen fear of performance issues with the Hurd. Linux
already is big enough to have a certain umph in hardware design, and if
stability-, scalability- and maintainability-oriented users start to use
Hurd, support from the chip vendors will probably catch up. (Like it
does with hardware virtualization technologies, these days).
It took many, many years for this to happen to Linux. I doubt we'll be
seeing this for the Hurd anytime soon. That being said, we use Linux
drivers in Mach, so our hardware support is equal to that of the version of
Linux our drivers came from (2.0 is what's in the current Mach, but I'm
working on porting
Continuing existence of the Hurd
Two points that I'm noting:
- What made Linux successful was its mere existence. When GNU was trying
to grasp microkernels, Linux did ``monolithic'' and got through with
reasonable success. Its proprietary competitors were (and are still)
using somewhat the same schemes of kernel design, which doesn't put
Linux in a totally unacceptable competitive position -- allowing Linux
to flourish because of its other advantages.
- The Hurd regularly gains the interest of newcomers. I've been lurking
along for long enough to know that the lists regularly get bumped with
some novice (arnuild being the latest, still fresh in memory) with
lots of enthusiasm over the prospects of the Hurd's goals. Alas, it
soon dawns on us newcomers that it's mostly tough-lived vaporware, and
even getting an everyday system running isn't feasible, perhaps not
even doable on modern hardware.
By definition, we are NOT vaporware. We exist, and our work (as well as
our source code) is publicly available.
These two points tell me one thing, which I also commented above, about
the Debian Hurd port: The Hurd needs to continuously exist to survive.
It doesn't matter whether it sucks performance wise, or that there is no
sound or accelerated graphics. It doesn't matter either, that it's based
on an old-fashioned microkernel, from a user standpoint (albeit it does
weigh on the pace and joy of development).
That's a reason for forcing _some_ decision through, regarding the
future direction of work on the Hurd. It'll be too hard to satisfy my
thought-up existence requirement (assuming you concur it exists) if
development is stuck at Mach and closed development groups. And this
will scare off users and developers alike.
Mach isn't a closed development group. A closed development group would be
like the old FreeBSD and NetBSD groups from many years ago, as well as
NetHack, where the development effort is not public and the only source
given out comes with each point release. Mach accepts patches from anyone,
of any size (my first patch for Mach ever was a one-liner which fixed my
network card in Parallels Desktop).
That being said (and I've commented on this before), we're not the fastest
group ever at getting patches into our projects, which is something that
should be corrected as best as possible, but we're not like other groups
which flat out reject patches. If you have ever submitted a patch to a
project before, you'll find that we're not quite as bad as some others.
That was in regards to glibc; I have no experience with their development
practices (I work on the Hurd and Mach alone, as well as porting packages
to
> DVCS and technical solutions to management problems
It was recently suggested that upstream CVS should be semi-transparently
mapped onto some distributed version control system, so that
experimental development could find a home (or homes, for that matter).
While generally upgrading the VCS is sensible, I can't say that I agree
that this is a needed step. Alfred also touched on this, in saying that
the ams branch -- in his opinion -- never had any effect upstream.
What is, however, needed in my opinion is that development should be
more playful, implying that it should be less structured.
While I do agree with this, moving to a DVCS would go a considerable way
toward solving the problem. If you've ever done development on Linux, you'd
know that Linus's git branch is (for almost all intents and purposes) the
official branch of Linux. When someone wants a patch added, Linus will ask
for the address of their git repo, pull from it, and if he likes the work,
merge it into his own branch. With a DVCS, it's like giving everyone
commit access, and it also makes the development environment more
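That pull-based model can be sketched with a couple of throwaway local
repositories. Everything below -- paths, names, file names, commit
messages -- is hypothetical, just to show the shape of the workflow:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# The "official" tree, playing the role of Linus's branch.
git init -q upstream
git -C upstream config user.email maint@example.org
git -C upstream config user.name Maintainer
echo "hello" > upstream/README
git -C upstream add README
git -C upstream commit -q -m "initial import"

# A contributor clones it and commits work on a topic branch.
git clone -q upstream contrib
git -C contrib config user.email dev@example.org
git -C contrib config user.name Contributor
git -C contrib checkout -q -b my-feature
echo "fix" > contrib/translator.c
git -C contrib add translator.c
git -C contrib commit -q -m "add translator fix"

# The maintainer reviews and pulls the contributor's published branch;
# since the branch builds directly on the official history, this is a
# fast-forward merge into the official tree.
git -C upstream pull -q ../contrib my-feature
```

No one needed commit access to anything but their own clone, which is the
point: review happens at pull time, not at commit time.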
An example of how this may be done can be seen in Linux's 2.4 vs. 2.5
series. 2.5 was never meant to be used in a production environment, but
was meant to mature into 2.6, which would become usable in a production
environment. While 2.5 was evolving, 2.4 was kept somewhat more stable.
(This scheme has since been abolished, as a sidenote).
This is not a technical issue, and DVCS'ing alone won't solve it. What's
needed is a gust of boldness, or at least, loosening of resistances to
do something not-necessarily-thought-through. Obviously, technical tools
will help out, but we all seem to be computer scientists here, so using
tools is the first thing that comes to mind.
I won't comment on the others, but I failed out of quite a few of my
tech classes (besides programming) because I couldn't do the math.
Everything I've done on mach is self-learned and self-taught.
Development model, human resources
While of course appreciating the work done, the needed development still
mostly hangs on a few shoulders. Recently, there was a dispute about
reluctance to patching on gnu-system-discuss that shed further light
onto the lack of enthusiasm for development from the senior members of
the community (not necessarily those of Hurd, I should add).
I didn't subscribe to gnu-system-discuss at the time (which is more or
less a dead list, it seems), but I did look up the conversation in the
archives. Michael Banck (who is one of the Debian developers in charge of
Debian GNU/Hurd) and ams were the two major people to respond to that
thread. I find that ams's views of the Hurd seem to be somewhat out of
touch with how the Hurd is at the moment. I know Thomas S., Michael B.,
and Marcus are all active with Hurd development. Roland works mostly with
libc and helps maintain the Hurd-specific parts (I don't know him that
well, so I can't comment).
In this setting, I'd like to point at Ubuntu. While Ubuntu and its
derivatives still seem amateurish to me when I compare them with Debian,
I have to acknowledge how Ubuntu has been able to facilitate much of its
user base, no matter what level of expertise. When web-forums seem
too... How do I put it... ``Fashionable'' for Debian, Ubuntu doesn't
scare off its novice users with mailing lists. And when strict
packaging- and licensing requirements improve the quality of Debian's
packaged software, Ubuntu seemingly has (and allows for) a package for
everything in its ``multiverse''.
The only recent novice message I've seen is one that no one has replied to
yet. When I came to the Hurd, I had considerable help from people on the
IRC channel in debugging my installation.
The Hurd wiki has been a source of documentation from user to user.
While often its articles have been written by senior members of the
community (since they understood the stuff first) success-stories and
FAQ's may be added by junior members. Openbox has recently moved its
website to MediaWiki for seemingly those reasons. Especially the Hurd
wiki page for using Hurd on qemu has seen some use.
To my knowledge, gnu.org webspace doesn't offer the PHP or MySQL that
would be needed to host a wiki. Can someone correct me if I'm wrong?
Also, sources should be more up-front. It's not because I've been
looking for it, but I still haven't got the cvs checkout command
memorized. If I had, I might be more encouraged to check out the source
tree and have a look around -- who knows what I might do?
Most of the time, checking out is done exactly once, and then cvs up is
used to stay up to date. When I need a clean checkout, I copy and paste
the command from the website.
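A low-tech trick for never having to memorize it: the first time you paste
the checkout command, save it into a tiny wrapper script. The CVS root
below is a made-up placeholder, not the Hurd's actual repository path --
substitute the real command from the website:

```shell
# Save the checkout command into a one-line wrapper the first (and only)
# time it is copied from the website. cvs.example.org is a placeholder.
cat > hurd-checkout <<'EOF'
#!/bin/sh
exec cvs -z3 -d:pserver:anonymous@cvs.example.org:/sources/hurd co hurd
EOF
chmod +x hurd-checkout

# From then on, a clean checkout is just ./hurd-checkout, and staying
# current inside the tree is just "cvs up".
```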
(Where ``I'' is someone mildly intertwined with Hurd development).
All words but no action
It's not the first time I'm posting my frustrations about the Hurd's
state. And the previous times I'd get more-or-less assaulted for having
a big mouth and no coding credibility. ``If you think it's wrong, you
fix it. But don't blame us, who made actual contributions''.
My up-front answer to that is, that I've made no claims of being able to
solve anything. I -- as all of us -- have no spare time for this project
(which is a weird way of saying that it doesn't really interest me
enough to sacrifice whatever time I have to the project -- because if
enough interest was present, finding time wouldn't be a problem,
I, however, am at the bottom of the hierarchy. I'm the pondering lurker,
whose sole ability is to stir up stuff using arguments that hopefully
will convince others to take action.
What action is to be taken, in my opinion?
It's easy. It's about realizing that problems exist (or, why I'm
I completely agree. In one of my last emails, I posted about that, and
about the mentality that says you need to be able to code in order to
point out problems. I'm not sure how to fix this, but on these lists,
everyone is an equal to me (which hopefully I prove by spending over two
hours drafting this response :-)).
> Good night, and thanks for reading, >
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU)
-----END PGP SIGNATURE-----