
Re: Bug#651529: rsyslog: FTBFS on hurd-i386



Hi!

On Tue, 2011-12-13 at 18:44:11 +0100, Michael Biebl wrote:
> but WTH does GNU/hurd not simply #define PATH_MAX and be done with it.

While that could be the easy way out, it would definitely be wrong.
Such limits, if they exist at all, are OS- or filesystem-specific.
They do not even represent reality on GNU/Linux! Try this:

$ # Print the compile-time value of PATH_MAX (4096 on GNU/Linux).
$ printf '#include <limits.h>\nPATH_MAX\n' | cpp -P
$ # Now build and enter a directory tree deeper than that.
$ d=0123456789; for i in `seq 1 1000`; do mkdir $d; cd $d 2>/dev/null; done
$ # The resulting working directory is far longer than PATH_MAX.
$ pwd | wc -c

pwd(1) is even specified to be able to return paths longer than
PATH_MAX by POSIX:

  <http://pubs.opengroup.org/onlinepubs/9699919799/utilities/pwd.html>

Also, once set, the MAX macros become part of the ABI, and cannot be
increased w/o recompiling any code using them.

As such, using dynamically allocated buffers, while it might seem a
bit more work, also appears superior on all other accounts: it will
use less memory, it should not suffer from accidental truncation, it
will accommodate any arbitrary path length w/o problems, etc.

In any case, I'd rather see the MAX macros removed from GNU/*.

> This would solve 90% of the build failures we have in Debian.

In 2007 Michael Banck did some analysis [0], and from that around 20%
of the failures could be said to be definitely affected by MAX macro
issues. While that data is now pretty outdated, I have serious doubts
the percentage has changed enough to get close to the number you
wrote above.

 [0] <http://lists.gnu.org/archive/html/bug-hurd/2007-07/msg00001.html>

> This all looks like a major waste of time to me.

Well, writing correct software takes time, yes.

regards,
guillem

