
Re: sqrt C function



dman wrote:

> On Tue, Aug 14, 2001 at 07:34:26AM -0700, David Roundy wrote:
> | On Mon, Aug 13, 2001 at 12:37:45PM -0500, Dimitri Maziuk wrote:
> | > * Craig Dickson (crdic@yahoo.com) spake thusly:
> | > > I don't see how. I see it as a legitimate compiler optimization. If you
> | > > have "double f = 4;", and you compile 4 as a double-precision value
> | > > rather than as an int (which would then require an immediate
> | > > conversion), how could that possibly break a program?
> | > 
> | > Very simple: double f = 4 may be converted to eg. 4.000000000000000001234,
> | > and any test for (sqrt(f) == 2.0) will fail. Of course if your (generic 
> | > "you", not personal) code is like that, you probably shouldn't be playing 
> | > with floats.
> | 
> | Actually, any 32 bit int will be exactly converted into a double with no
> | loss of precision...
> | 
> | As far as the language definition goes, if you say double f = 4, the
> | language assures you that the '4' will be converted to a double format.
> | Whether it is done at compile time or at runtime makes no difference.
> 
> The point is that binary FP can only represent a subset of floating
> point numbers.  For example .1 can NOT be represented exactly by
> binary FP.  If you ever get some floating point operations to yield
> the exact value you are looking for (2.0 in the above example) you are
> lucky.

That's not true of integer values, though. An IEEE double can precisely
represent any 32-bit integer value. Once you start performing arithmetic
with it, it may well cease to be an exact integer value, but a
declaration like "double f = 4;" is not at any risk of being anything
other than 4.0. There's no arithmetic occurring; just a conversion
between two numerical formats, both of which are capable of representing
the desired value exactly. And the conversion algorithm from integer to
floating-point is trivial, with no risk of loss of precision as long as
the integer's representation doesn't require more bits than are
available in the fp mantissa. An IEEE double has a 53-bit mantissa (52
bits stored explicitly, plus an implied leading 1), so it can represent
any 32-bit integer exactly.
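
If you want to convince yourself of that, here is a quick check (a
minimal sketch; it assumes a 32-bit int and an IEEE 754 double, which is
what essentially every current platform gives you):

    #include <assert.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Every 32-bit int fits in a double's 53-bit mantissa, so
           converting to double and back again must be lossless. */
        assert((int)(double)INT_MAX == INT_MAX);
        assert((int)(double)INT_MIN == INT_MIN);

        double f = 4;      /* a conversion, not arithmetic */
        assert(f == 4.0);  /* safe here: both sides are exact */

        printf("every conversion was exact\n");
        return 0;
    }

Both asserts must hold on such a platform, because the round trip
through double never discards bits.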

But this is all quite irrelevant to the original question, the point of
which was, I think, missed by Dimitri Maziuk. Let's back off here and
restate the original problem, which was:

(1) Does the declaration "double f = 4;" require a conforming C compiler
to compile an integer constant of 4 and convert it to a double at
runtime, or is it an acceptable optimization in this case for the
compiler to instead compile 4 as a double-precision constant?

I believe that this question is precisely equivalent to this alternate
formulation:

(2) Is there any essential difference between the following two
declarations:

    double f = 4;
    double f = 4.0;

such that a conforming C compiler is prohibited from compiling them
to identical object code?

I don't see how any valid C program could conceivably be broken by the
compiler doing it one way or the other. The standard's "as-if" rule lets
an implementation perform any transformation that leaves the observable
behavior of a conforming program unchanged, and since the converted
value is identical either way, doing the int-to-double conversion at
compile time is a valid optimization.
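
If anyone wants to see this concretely, here is a throwaway test (a
sketch that assumes IEEE doubles, in which 4.0 has exactly one bit
pattern, so no observable behavior can distinguish the two
initializers):

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        double f = 4;    /* int constant, converted to double */
        double g = 4.0;  /* double constant */

        /* Equality is guaranteed; with IEEE doubles the two
           objects also hold the same bits. */
        assert(f == g);
        assert(memcmp(&f, &g, sizeof f) == 0);

        printf("indistinguishable\n");
        return 0;
    }

On an IEEE implementation both asserts hold, which is exactly why a
compiler is free to fold the conversion away.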

David Roundy's response tried to get back to the original question, and
he seems to agree with my view that the int-to-double conversion may
take place at either compile time or runtime without affecting anything
other than performance.

You (dman) seem to be following Dimitri in not addressing the real
question, drifting instead into what seems to me a completely
irrelevant discussion of floating-point precision. Nobody has argued
that floating-point numbers can represent all rational numbers exactly,
and no one has advocated using equality tests to compare the results of
floating-point calculations. So I think you're attacking a straw man by
raising these issues.

Craig


