Reputation: 37
#include <stdio.h>

int main(int argc, const char * argv[])
{
    int x = 10, y = 20, b = 500;
    int z = x*y;
    int f = z/b;
    // insert code here...
    printf("x is:%d, y is:%d, b is %d\n", x, y, b);
    printf("x times y is: %d\n", z);
    printf("z divided by b is: %d\n", f);
    return 0;
}
On the printout, f is 0. Why?
Upvotes: 2
Views: 166
Reputation: 5489
I am not "fluent" in C but I think you should use float instead of int. A division of an integer by an integer will return an integer.
Note also that you should use %f instead of %d to display float in prinf
Your code should be :
//
// main.c
// cmd4
//
// Created by Kevin Rudd on 27/06/13.
// Copyright (c) 2013 Charlie Brown. All rights reserved.
//
#include <stdio.h>

int main(int argc, const char * argv[])
{
    float x = 10.0, y = 20.0, b = 500.0;
    float z = x*y;
    float f = z/b;
    // insert code here...
    printf("x is:%f, y is:%f, b is %f\n", x, y, b);
    printf("x times y is: %f\n", z);
    printf("z divided by b is: %f\n", f);
    return 0;
}
Upvotes: 2
Reputation: 1705
The int type is an integer, which holds values that you can use to count (1, 2, 3, ...). It does not handle anything past a decimal point.
If I were to assign a value with anything past the decimal point to an int, all the digits on the right-hand side of the decimal would be truncated: int v = 3.14159; would leave me with a value of 3 for v, because the integer can't store the .14159.
Your value of 200/500 is 0.4, which is truncated to 0 when it is assigned to the int f.
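A minimal sketch of that truncation, using the same numbers as the question (my own illustration, not part of your code):

#include <stdio.h>

int main(void)
{
    int v = 3.14159;   /* the .14159 is discarded; v holds 3 */
    int f = 200 / 500; /* integer division: 0.4 is truncated to 0 */

    printf("v = %d\n", v);  /* prints 3 */
    printf("f = %d\n", f);  /* prints 0 */
    return 0;
}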
In order to store decimal values, you have to use a float or double type. Do note that these types are not as precise as you might think, so if you assign a value of 4.57 you might end up with an actual value of something like 4.569999999....
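As a quick illustration of that imprecision (a minimal sketch, not part of your original code), printing a float with more digits than it can really hold shows the stored value drifting from what you wrote:

#include <stdio.h>

int main(void)
{
    float f = 4.57f;   /* 4.57 has no exact binary representation */
    double d = 4.57;   /* double is closer, but still not exact */

    /* Printing with extra digits exposes the rounding; the exact output
       depends on your platform, but the float will likely show
       something like 4.5699997. */
    printf("float : %.7f\n", f);
    printf("double: %.16f\n", d);
    return 0;
}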
In your code, you'd want to change the type of f to a float, and you'd probably want to do a cast from integer to float on the items you're dividing, to make sure they keep any floating-point information.
So, your line of
int f = z/b;
would become
float f = (float)z/(float)b;
and then you'd use %.1f in your printf, as @BalogPal suggested.
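Putting that together with the question's variables, a sketch of the fix (keeping x, y, and b as ints and only making the division floating-point) might look like this:

#include <stdio.h>

int main(void)
{
    int x = 10, y = 20, b = 500;
    int z = x * y;                  /* 200, still an int */
    float f = (float)z / (float)b;  /* cast before dividing: 0.4 */

    printf("z divided by b is: %.1f\n", f);  /* prints 0.4 */
    return 0;
}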
Upvotes: 5
Reputation: 17163
Integer division is defined that way. It just drops the remainder for 200/500 and leaves you with 0.
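A small sketch of that behaviour (my own illustration): / gives the integer quotient, and % gives the remainder that / throws away.

#include <stdio.h>

int main(void)
{
    printf("200 / 500 = %d\n", 200 / 500);  /* quotient:  0   */
    printf("200 %% 500 = %d\n", 200 % 500); /* remainder: 200 */
    return 0;
}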
Upvotes: 0
Reputation: 118
Because the integer division of a by b is 0 whenever a is less than b (for non-negative values). Note that C's / on integers behaves differently from Python's here.
Upvotes: -1