> > The better thing to do is not to use some arbitrary limit, but
> > rather to use MAXHOSTNAMELEN.
>
> The requirement is for a buffer for URLs, not merely for hostnames, so
> this is poor advice.

Ok, I did not actually look at the code; in that case, you are correct.

> > However, this constant, although defined by many operating systems,
> > is not as portable as you may wish: it is not specified by either
> > POSIX or the upcoming release of the third Single Unix
> > Specification.  Thus, when this constant is not defined, you must
> > query the OS via `sysconf (_SC_HOST_NAME_MAX)' for the maximum
> > hostname length.  The operating system is, however, allowed to
> > return -1, indicating that there is no limit.  In this case, the
> > portable thing to do is not to impose a random upper limit but to do
> > something like the following:
>
> This is better advice, though I already suggested completely avoiding
> a fixed limit, as you will notice above.

I saw; I was only adding to the conversation.

> Incidentally, do the GNU/Hurd developers imagine there will be
> hostnames longer than 255 characters?  It will never see any from the
> DNS (RFC 1035, section 2.3.4), and I doubt many people would have the
> patience to type in hostnames even half that long.

They could be machine generated (e.g. by an encoding scheme for CGI).
And we just want to avoid all unnecessary limits; who can say what will
happen next year, never mind in ten?  And remember, 640k of RAM ought
to be enough for anyone.