> This should be "if(strlen(str) >= 1024)". You've forgotten to account
> for the 0 byte. There are similar problems elsewhere.
>
> It would be better to use "sizeof stuff" instead of 1024 (since stuff
> is an array in this case) and even better to avoid a fixed limit on
> string size altogether (there are useful URLs out there longer than
> 1023 characters).

The better thing to do is not to use some arbitrary limit, but rather
to use MAXHOSTNAMELEN.  However, this constant, although defined by
many operating systems, is not as portable as you might wish: it is
specified by neither POSIX nor the upcoming release of the third
Single Unix Specification.  Thus, when this constant is not defined,
you must query the OS via `sysconf (_SC_HOST_NAME_MAX)' for the
maximum hostname length.  The operating system is, however, allowed to
return -1, indicating that there is no limit.  In this case, the
portable thing to do is not to impose a random upper limit, but to do
something like the following:

  do
    {
      size *= 2;
      tmp = realloc (buf, size);
      if (! tmp)
        {
          free (buf);
          break;
        }
      buf = tmp;
      /* Your function, e.g. gethostname */
    }
  while (errno == ENAMETOOLONG);

Of course, this is long and no one wants to do it and, in fact, it
seems they often do not.  However, as the Hurd imposes no limit, we
see this kind of problem all the time when compiling programs on
GNU/Hurd.

For the case of gethostname (an extremely common one), I wrote an
xgethostname wrapper (and put it in the public domain,
ftp://walfield.org/pub/people/neal/xgethostname) which does this and
is a near drop-in replacement for the mess caused by gethostname.

Oh wonderful maintainers and code writers, be nice and avoid
MAXHOSTNAMELEN, PATH_MAX, and other silly limitations which the GNU
system does not impose and, at the same time, make your packages more
portable.
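To make the retry loop above concrete, here is a self-contained sketch
of such a wrapper.  The name xgethostname and the exact error handling
are my illustration, not necessarily what the version at the URL above
does; the technique is the one described in the text: double the buffer
and retry until the name fits or allocation fails.

```c
/* Sketch of a gethostname wrapper with no fixed name-length limit.
   Hypothetical illustration of the doubling-buffer technique; the
   real xgethostname at ftp://walfield.org/pub/people/neal/xgethostname
   may differ in detail.  Returns a malloc'd string the caller must
   free, or NULL on failure.  */

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

char *
xgethostname (void)
{
  size_t size = 64;   /* Initial guess; grown as needed.  */
  char *buf = NULL;

  for (;;)
    {
      char *tmp = realloc (buf, size);
      if (! tmp)
        {
          free (buf);
          return NULL;   /* Out of memory.  */
        }
      buf = tmp;

      errno = 0;
      if (gethostname (buf, size) == 0
          /* On truncation some systems return 0 without a
             terminating NUL; treat that as "too small" too.  */
          && memchr (buf, '\0', size) != NULL)
        return buf;

      if (errno != 0 && errno != ENAMETOOLONG && errno != EINVAL)
        {
          free (buf);
          return NULL;   /* A real error, not a short buffer.  */
        }

      size *= 2;   /* Buffer too small: double it and retry.  */
    }
}
```

Note the memchr check: POSIX leaves it unspecified whether a truncated
name is NUL-terminated, so a zero return alone does not prove the
buffer was big enough.
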