Reputation: 77
The program below prints different values for a/100, where a is an int.
#include <stdio.h>

void main()
{
    int a = -150;
    float f;
    f = a/100;
    printf(" %f,%f ", f, a/100);
}
Output: -1.000000,0.000000
Here the second %f value comes out as 0.000000. How is that possible? It should be -1.000000 or -1.500000.
Upvotes: 0
Views: 61
Reputation: 140168
There is no inconsistency:
f = a/100;
performs integer division (-150 / 100 truncates to -1) and converts the result to float, so f is -1.0;
then
printf(" %f ",a/100);
passes an int (-1) but tells printf to interpret it with a floating-point format: undefined behaviour.
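(In C99 and later, integer division truncates toward zero, so -150/100 is -1; the fractional .5 is simply discarded.)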
This works without surprises:
#include <stdio.h>

int main(void)
{
    int a = -150;
    float f;
    f = a/100.0;   /* floating-point division */
    printf(" %f,%f ", f, a/100.0);
    return 0;
}
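With floating-point division both printed values are -1.5, so (assuming a typical implementation) the output should be -1.500000,-1.500000. If you instead want the truncated integer quotient printed as a floating-point value, cast the int result explicitly, e.g. printf(" %f ", (double)(a/100)); which prints -1.000000.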
printf is a variadic function (prototype: int printf(const char *format, ...)), which has no choice but to "believe" the format string that you pass. If you tell printf that the second argument is a double (that is what %f expects), it will read the raw argument bytes as a double, and if an int was actually passed, the data doesn't match and it won't work properly.
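As a minimal sketch (not part of the original answer), each of the following calls is well defined because the format specifier matches the type that is actually passed:
#include <stdio.h>

int main(void)
{
    int a = -150;
    printf("%d\n", a/100);           /* int printed with %d: -1 */
    printf("%f\n", (double)a/100);   /* double printed with %f: -1.500000 */
    printf("%f\n", (double)(a/100)); /* truncated quotient as double: -1.000000 */
    return 0;
}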
EDIT: compiler writers know that this is a common mistake, so they have added a special check for printf-like functions: if you compile the file with gcc using the -Wall or -Wformat option, you'll get the following explicit message:
foo.c:8:1: warning: format '%f' expects argument of type 'double', but argument 3 has type 'int' [-Wformat=]
printf(" %f,%f ",f,150/100);
(The same kind of warning appears if the number of % conversion specifiers does not match the number of arguments.)
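For instance, a typical invocation (the file name foo.c is assumed here) is gcc -Wall foo.c; -Wformat is already enabled by -Wall, so either option is enough to trigger the diagnostic above.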
Upvotes: 4