
Hurd PPP

After reviewing the debian-hurd, bug-hurd, and help-hurd mailing lists, I
see that there has been very little talk about PPP and what we need to do to
implement it.

As you all know, most Debian users' networking is limited to PPP over a
dial-up modem.  I know my system is like this, as is Marcus's.

I think we need to make PPP a real priority, especially if we mean for this
Hurd port to be a viable distribution.  Until we can send/receive e-mail and
run apt-get under the Hurd, we have little more than a toy operating system.

I am attaching some material from the various Hurd lists about PPP, and
invite comments about how to proceed in implementing it.



From Gordon Matzigkeit (in response to a prior request of mine):
> You're thinking of Linux, where adding PPP support would mean hacking
> the Almighty Kernel.  On the Hurd, it involves writing a network server,
> which is a separate program with no special privileges.  There is no
> reason to have a split between /dev/ppp0 and /usr/sbin/pppd... you can do
> it all in one program, /hurd/ppp.
> /hurd/ppp should probably take a command-line argument that specifies a
> separate server which can return a live connection (i.e. a modem dialer or
> terminal console). So, /hurd/ppp would call the second server, then
> establish the PPP protocol over the returned connection. Other programs
> would contact the ppp server just as if it was a network server (pfinet),
> /hurd/ppp would do all the muxing/demuxing. Anyway, that's just my
> blathering: Thomas Bushnell (formerly Michael) is the Hurd's architect,
> he really is the final word as far as judging whether a given design is a
> good one or not. You have the added advantage of the fact that Thomas
> was thinking about PPP when he designed the Hurd network interfaces,
> so the interfaces are in place, and you don't need to muck around in Hurd
> guts. This is somebody's opportunity to learn more about the Hurd, plus
> write something which is generally useful, and more elegant than its Unix
> workalike.  Note that there are *plenty* of these kinds of projects to go
> around, so *please* nobody steal this project just because they want fame
> and glory.  We need to do it right the first time, and look before we leap.
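Gordon's proposal -- a single PPP translator that is handed a second server
(a modem dialer) able to return a live connection -- might be attached
roughly like this.  This is purely a hypothetical sketch: /hurd/ppp was
never written, and every path and option below is invented to illustrate
the shape of the idea, not a real interface.

```sh
# Hypothetical sketch only.  /hurd/ppp and its --dialer/--device
# options are invented here to illustrate Gordon's proposed design:
# the PPP translator is told about a second server that can hand
# back a live serial connection, then speaks PPP over it.
settrans -c /servers/socket/ppp0 /hurd/ppp \
        --dialer=/hurd/dialup --device=/dev/com0
```

Programs would then contact /servers/socket/ppp0 exactly as they would
contact pfinet, with /hurd/ppp doing the muxing/demuxing itself.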
Thomas Bushnell replied:
> What Gord says (especially his rough ideas about what ppp support would
> look like) is quite accurate.  I would say, however, that one point quoted
> above is not quite so--one area of infrastructure work still to be done is
> to think about how to handle multiple network services.  Right now, the
> model is inferior; it would involve turning on PPP support in the network
> server, and then adding glue code to plug in to it.  This would be very
> much like the existing way things work on Linux, with the networking code
> in the network server (instead of in the kernel) and then a separate
> program like pppd that did other stuff.  But I'd like a better scheme;
> one hasn't occurred to me yet.  Still, getting ppp to work in the current
> scheme is doable; it's just that the current scheme for doing networking
> in the Hurd is not entirely as nice as I'd like.
Gordon Matzigkeit:
> How about a mux/demux network server that would only manage the
> routing tables and ports to other network servers?  I think the only
> nasty thing about pfinet is that it has Ethernet-specific stuff in it
> (isn't that so?).  If we abstract out the routing tables, which are
> necessary for all IPv4 stuff, pfinet can look a lot more like a central
> coordinator which delegates to /hurd/ethernet and /hurd/ppp.
Thomas Bushnell:
> Well, the problem with that (which is roughly the right idea, I think)
> is that it imposes an extra RPC in a *VERY* critical path.  Moreover:
> while the IP stuff would be in a thoroughly different server from the
> underlying stuff, making that split is not so easy.  The key is that
> there is not enough information in a raw packet to deliver it; there is
> some interaction between the protocol engines and the net-layer things
> too.  That interaction is type-of-network dependent, so adding a new
> type of network (Ethernet, PPP, etc.) would still require changing the
> pfinet server.
Me:
> Two questions: 1. Using Gordon's concept, could we set up a group of
> servers that would speak straight Unix-y TCP/IP networking lingo, then
> convert it to whatever type of specific connection you are using (for
> example, Ethernet, PPP, SLIP, etc.)?  I.e., keep the implementation
> transparent to the user -- I just exec "activate-network" or whatever (or
> maybe it kicks in at boot), and then I can run netscape, ftp, mail, etc.,
> without worrying about whether I am using PPP or an Ethernet connection.
Gordon Matzigkeit:
> I was thinking of imposing this layer only in `connect' and `listen',
> which is not a critical path.  So, a program asks pfinet to bind to TCP
> port 80.  pfinet passes the program's callback port send right and config
> information to all the appropriate subservers (such as loopback, ethernet,
> ppp, etc).  When ppp receives a packet, it knows how to contact the
> process directly, rather than going through pfinet.  Likewise, ethernet
> can contact the process directly.  For outgoing connections, I say that I
> want to connect to a host, and pfinet figures out that it's on the
> ethernet, so it passes its receive rights and config data to the ethernet
> server, which actually establishes and maintains the connection.
> TB> That interaction is type-of-network dependent, so adding a new
> TB> type of network (Ethernet, PPP, etc.) would still require
> TB> changing the pfinet server.
> I think in the above model that isn't a problem.  This is using pfinet in
> a way that is consistent with filesystem translators.  I think it would
> be worthwhile to figure out a directory tree for /servers/socket/inet
> (whose contents would be translated by pfinet), so that people could
> manipulate pfinet more directly.  The following sequences of commands
> would be exactly equivalent:
> $ ifconfig eth0 broadcast netmask
> $ ifconfig lo0 broadcast netmask
> $ route add default gw
> $
> $ settrans /servers/socket/inet/eth0 /hurd/ipv4/ethernet eth0 \
>        --address= --netmask= \
>        --broadcast= --gateway=
> $ settrans /servers/socket/inet/lo0 /hurd/ipv4/loopback \
>        --address= --netmask= \
>        --broadcast=
> $ ln -s eth0 /servers/socket/inet/
> $ ln -s lo0 /servers/socket/inet/
> $ ln -s eth0 /servers/socket/inet/
>
> Both would result in:
>
> $ ls -l /servers/socket/inet
> lrwxrwxrwx [...] -> eth0
> lrwxrwxrwx [...] -> lo0
> lrwxrwxrwx [...] -> eth0
> lrwxrwxrwx [...] ->
> drwxr-xr-x [...] eth0
> drwxr-xr-x [...] lo0
> $ ls /servers/socket/inet/eth0
> latency usage [...]
> $ showtrans /servers/socket/inet/eth0
> /hurd/ipv4/ethernet eth0 --address= --netmask= \
>        --broadcast= --gateway=
> $ showtrans /servers/socket/inet/lo0
> /hurd/ipv4/loopback --address= --netmask= \
>        --broadcast=
> $
> Note how the above display uses contiguous address ranges instead of
> silly netmasks.  As shown, addresses that are not routable just need to
> have an invalid symlink (the empty string, in this case).  The
> /servers/socket/inet/eth0 directory (and lo0 directory) would contain
> files which hold read-only statistics and tunable parameters for that
> interface.
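The remark about ranges versus netmasks can be made concrete: an
(address, netmask) pair always describes one contiguous block of
addresses, so the two notations carry the same information.  A minimal
portable-shell sketch of the conversion (the function names are mine,
not from any Hurd or Unix tool):

```sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Convert a 32-bit integer back to dotted-quad form.
int2ip() {
    echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

# Print the contiguous address range covered by (address, netmask).
mask2range() {
    addr=$(ip2int "$1")
    mask=$(ip2int "$2")
    first=$(( addr & mask ))
    last=$(( first | (mask ^ 0xFFFFFFFF) ))
    echo "$(int2ip "$first")-$(int2ip "$last")"
}

mask2range 192.168.1.17 255.255.255.0   # prints 192.168.1.0-192.168.1.255
```

A range like 192.168.1.0-192.168.1.255 is arguably easier to read in an
`ls`-style display than the equivalent 255.255.255.0 netmask.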
Thomas Bushnell:
> I remember now a previous idea I had for TCP/IP in the Hurd someday, and
> I'd like to post it...  I don't have time for lots of discussion, but
> this idea might be interesting to some:
>   - Have the code that does all the basic TCP/IP stuff in a library.
>   - Have each network interface have a server associated with it, like
>     we do now for filesystems (each partition has a different server).
>   - Have a shared memory area that holds the routing information; the
>     network servers map this read-only.
>   - Have gated (or whatever) manage the routing information, mapping it
>     read-write.
>   - Have a master server which handles initial socket calls and knows
>     the routing tables; once connect is called, the socket can be handed
>     off to the particular server.  The socket would be handed back if
>     it's a datagram socket and connect is called again.  Datagram
>     sockets with connect never called would have their requests undergo
>     an extra layer of RPCs, as each send has to be separately forwarded
>     from the master server to the particular server.
> I have done no work at all towards designing the special protocol
> between the master server and the other servers, let alone anything like
> a generic library.  Thomas
And finally:
>Subject: planning to port perl, plip, ppp
>During the next two months, I expect to spend several weeks installing and
>developing for the Hurd more-or-less full-time. I've been preparing by...
>1. getting used to Debian with Linux
>2. reading _Programming under Mach_
>3. following the mailing lists and reading the FAQ.
>Projects which particularly interest me are:
>1. Make Perl use Hurd's threads if it doesn't already, and write Perl
>modules for doing IPC, VM, etc.
>2. See if the Linux 2.0.36 PLIP and PPP drivers work, and make them
>work if they don't.
>3. If there's time, look into Linux Glibc bincompat issues.
>Although I probably won't start installing the Hurd for at least another
>week or two, I welcome suggestions about what to read and think about in
>the meantime.  Thanks, -John

I don't know if he had a chance to get anywhere...
