Reputation: 127
I'm working on a problem that has been troubling me for a while, and I still can't understand it. It makes sense that changing the type of m fixes it, but I still can't understand what m goes through in this process. How does its value change?
Below is the code.
I've tried it in Code::Blocks.
#include <stdio.h>
int main()
{
    int m = 5;
    m = (float)m / (2.0); // Here, shouldn't m be 2.500000?
    printf("%f\n", m);
    printf("%0.2f\n", (float)m / 2.0);
    getchar();
    return 0;
}
Expected result:
2.500000
1.25
Actual result:
nan
1.00
Upvotes: 1
Views: 90
Reputation: 16876
m is an int. So if you do m = (float)m / (2.0);, then even if (float)m / (2.0) is 2.5, the value is truncated to 2 when it is stored back in m. So when you do

printf("%f\n", m);

you have undefined behavior, because the argument is an int but %f expects a double.
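Here is a minimal sketch of that assignment with the intermediate value made explicit (the variable result is only for illustration):

#include <stdio.h>

int main(void)
{
    int m = 5;
    float result = (float)m / 2.0; // result holds 2.5
    m = result;                    // fractional part is discarded: m is now 2
    printf("%f\n", result);        // prints 2.500000
    printf("%d\n", m);             // prints 2 (%d matches an int)
    return 0;
}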
Later you do

printf("%0.2f\n", (float)m / 2.0);

and you get 1.00 because m was 2 and you divide it by 2.0. You can get the expected output by changing the type of m to float instead.
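For illustration, one corrected version of the original program could look like this (a sketch, assuming the expected output above is what you want):

#include <stdio.h>

int main(void)
{
    float m = 5;                // m is now a float, so it can hold 2.5
    m = m / 2.0f;               // 5 / 2.0f is 2.5, stored exactly
    printf("%f\n", m);          // %f now matches (float is promoted to double)
    printf("%0.2f\n", m / 2.0); // 2.5 / 2.0 is 1.25
    getchar();
    return 0;
}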
Upvotes: 5
Reputation: 108986
printf("%f\n", m); // m is int
Nope. "%f" does not go with an int. Either use "%d" or convert the value:

printf("%d\n", m);
printf("%f\n", (double)m);
Note that converting to float forces a second, automatic conversion to double, because float arguments to a variadic function such as printf are promoted to double:

printf("%f\n", (float)m); // converted twice
Upvotes: 1