Olaf Weber wrote:

> Going back to the original code, the assumption that
>
>     int i;
>     for (i = 0; i >= 0; ++i)
>         ;
>
> will terminate, and will terminate within a reasonable amount of time,
> is not warranted.  The program _may_ crash when i overflows, and the
> int type may be large enough that the program effectively hangs.

I agree that it will take an unreasonable amount of time on a system
with 64-bit ints.

As to whether it may crash, I think that depends on whether we're
talking about theory or practice.  The standard does not specify
overflow behavior, so it would not be "wrong" for it to crash in that
situation.  However, I know of no commonly-used C compiler on any
Debian-supported platform (or, for that matter, on MS Windows or MacOS)
that would generate code that does this, so in practice I think the
loop can be relied on to exit normally on overflow.

This is not to excuse the code in any way; it's lame.  I just think
this discussion is focusing too much on what the standards documents
say, and not enough on what actually happens in the real world.  I
don't think it's at all likely that future versions of gcc (or other
well-known compilers) will crash on integer overflow; reliance on
wrapping from INT_MAX to INT_MIN is sufficiently common in C code that
any compiler that did not handle it well would be considered unusable.

Craig