Reputation: 57
I am trying to add an int to a float. My code is:
#include <stdio.h>

int main() {
    char paus[2];
    int millit = 5085840;
    float dmillit = .000005;
    float dbuffer;
    printf("(float)millit + dmillit: %f\n", (float)millit + dmillit);
    dbuffer = (float)millit + dmillit;
    printf("dbuffer: %f\n", dbuffer);
    fgets(paus, 2, stdin);
    return 0;
}
The output looks like:
(float)millit + dmillit: 5085840.000005
dbuffer: 5085840.000000
Why is there a difference? I've also noticed that if I change dmillit = .5, then both outputs are the same (5085840.5), which is what I would expect. Why is this? Thanks!
Upvotes: 0
Views: 498
Reputation: 1353
The precision you are trying to use is too big for a float: a float carries only about 7 significant decimal digits, and 5085840.000005 needs 13. In the printf call the result is converted to a double to be printed, which is why the extra digits show up there.
See an IEEE 754 float calculator page to better understand this.
Upvotes: 4
Reputation: 103485
printf("(float)milit + dmillit: %f\n",(float)millit + dmillit);
I believe that here, the addition is done as a double, and passed as a double to printf.
dbuffer = (float)millit + dmillit;
printf("dbuffer: %f\n",dbuffer);
Here, the addition is done as a double, then narrowed to a float to be stored in dbuffer, then widened back to a double to be passed to printf.
Upvotes: 1
Reputation: 41232
I believe the floating-point addition in the printf statement may be silently promoting the result to a double, so it has more precision.
Upvotes: 1
Reputation: 171734
(float)millit + dmillit evaluates to a double value (although both operands are float, many compilers carry out the arithmetic at higher precision; see FLT_EVAL_METHOD in <float.h>). When you print the value directly, it displays correctly, but when you store it in a float variable, the precision is lost.
Upvotes: 7