Reputation: 197
I wanted to cast a float to an int with the following code. change is a float and changeI is an int. The tricky input value is 0.41, since it can't be represented exactly in binary, but I don't understand why this doesn't work.
changeI = (int)(change*100);
If I print just (change*100.0) it prints 41.0000003 or something equivalent. To my understanding, casting with (int) should just truncate the fractional part and leave the number as 41.
However, if I print the value of changeI I get 413, and I don't really know why.
Also, if I manually print (int)41.0000003, it is also printed as 413.
Edit:
OK, so I found the main problem, which is really stupid by the way: I was printing an extra 3 afterwards and had forgotten the newline character in the previous print. But now it prints 40 instead of 41. This is the full code.
#include <stdio.h>

int main(void)
{
    int flag = 0;
    float change;
    int changeI;

    do {
        printf("How much change is owed?: ");
        scanf("%f", &change);
        if (change >= 0) {
            flag = 1;
        }
    } while (!flag);

    changeI = (int)(change*100.0);
    printf("%f\n", change*100.0);
    printf("%i\n", changeI);
    printf("%i\n", (int)(41.0000003));
    return 0;
}
The output for 0.41 is:
>41.0000003
>40
>41
Edit2: As JackV stated, if you copy the float multiplied by 100 to another float and then cast that new float, it does print 41, but I would still like to know why this happens.
Upvotes: 0
Views: 81
Reputation: 218
I don't know exactly what you are doing, but this:
#include <stdio.h>

int main(void)
{
    int changeI, newI;
    float change = 0.41, var = change * 100.0;

    changeI = (int)(change * 100.0);
    newI = (int)(var);

    printf("%f\n", var);            // prints 41.000000
    printf("int = %d\n", changeI);  // prints 40
    printf("float = %f\n", change); // prints 0.410000
    printf("%d\n", newI);           // prints 41
    return 0;
}
works exactly as it should. It prints:
41.000000
int = 40
float = 0.410000
41
Upvotes: 1