Reputation: 425
I would like to better understand floating-point values and the imprecision associated with them.
Following are two snippets that differ only slightly.
Snippet 1
#include <stdio.h>
int main(void)
{
    float a = 12.59;
    printf("%.100f\n", a);
}
Output:
12.5900001525878906250000000000000000000000000000000000000000000000000000000000000000000000000000000000
Snippet 2:
#include <stdio.h>
int main(void)
{
    printf("%.100f\n", 12.59);
}
Output:
12.589999999999999857891452847979962825775146484375000000000000000000000000000000000000000000000000000
Why is there a difference between the two outputs? I'm unable to understand the catch.
Upvotes: 0
Views: 122
Reputation: 146
In Snippet 1, the literal 12.59 has type double, and it is rounded to the nearest representable float when it is stored in a; that rounding is where the value changes. When a is then passed to printf, it is converted back to double, which is exact, so you see the float's rounded value printed at double precision.
In Snippet 2, no float is involved: the literal 12.59 is a double and is printed directly, so you see the nearest double to 12.59 instead.
To understand, try running the snippets below:
#include <stdio.h>
int main(void) {
    double a = 12.59;
    printf("%.100f\n", a);
    return 0;
}
and
#include <stdio.h>
int main(void) {
    float a = 12.59;
    printf("%.100f\n", (double)a);
    return 0;
}
Refer to this question for more information: How does printf and co differentiate between float and double
Upvotes: 1
Reputation: 171
To get consistent behaviour, you can explicitly use a float literal:
printf("%.100f\n", 12.59f);
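For example, a minimal sketch: both lines print the same digits, because the f-suffixed literal and the float variable hold the same rounded value, and both are promoted to double when passed through printf's variable arguments.
#include <stdio.h>

int main(void) {
    float a = 12.59f;           /* stored as the nearest float        */
    printf("%.100f\n", a);      /* the float is promoted to double    */
    printf("%.100f\n", 12.59f); /* same float value, so same output   */
    return 0;
}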
Upvotes: 2
Reputation: 786
In the first case you defined the variable as a float, and in the second case you passed the number directly. The compiler treats an unsuffixed floating constant as a double, not a float, so the two snippets print values with different precision.
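A quick way to check that an unsuffixed constant is a double is to compare sizes. This is just a sketch; the sizes 8 and 4 are typical on mainstream platforms rather than guaranteed by the standard:
#include <stdio.h>

int main(void) {
    /* An unsuffixed floating constant has type double;
       the f suffix makes it a float. */
    printf("sizeof(12.59)  = %zu\n", sizeof(12.59));  /* typically 8 */
    printf("sizeof(12.59f) = %zu\n", sizeof(12.59f)); /* typically 4 */
    return 0;
}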
Upvotes: 2