kartikmohta

Reputation: 866

GCC compile time division error

Can someone explain this behaviour?

test.c:

#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%f, %f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}

$ gcc test.c

$ ./a.out
6012, 6012
6012.000000, 6013.000000

I checked the assembly code, and it puts both arguments of the first printf in as the constant 6012, so it seems to be a compile-time bug.

Upvotes: 2

Views: 393

Answers (3)

lc.

Reputation: 116528

Sounds like a rounding error. 300.65000/0.05000 is being computed in floating point as something like 6012.99999999. When cast to an int, it gets truncated to 6012. All of this is precalculated by the compiler's constant folding, so the final binary just contains the value 6012, which is what you're seeing.

The reason you don't see the same thing in your second statement is that printf rounds the value for display, rather than truncating it the way a cast to int does. (See @John Kugelman's answer.)

Upvotes: 1

John Kugelman

Reputation: 362037

printf() rounds floating-point numbers when it prints them. If you ask for more precision, you can see what's happening:

$ cat gccfloat.c
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%.15f, %.15f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}

$ ./gccfloat
6012, 6012
6012.000000000000000, 6012.999999999999091

Upvotes: 7

Matthew Flaschen

Reputation: 285017

Run

#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%.20f %.20f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}

and it should be clearer. The value of the second quotient (after floating-point division, which is not exact) is about 6012.9999999999991, so when you truncate it with the (int) cast, gcc is smart enough to put in 6012 at compile time.

When you print the doubles, printf by default formats them with only six digits of precision after the decimal point, which means the second one prints as 6013.000000.

Upvotes: 10
