Mitchell Welch

Reputation: 64

Why do I need to typecast all of the values here?

I'm writing a very basic program in C (I'm new to C, so just getting the hang of things) and I am struggling with typecasting.

It's a simple calculator program which lets you enter two numbers and decide what you want to do with them (add, subtract, divide or multiply). Obviously, not all numbers divide evenly, so in the division part of my switch statement I am trying to typecast the int to a float.

As requested, my variable declarations are here:

    int inputOne, inputTwo, calculatedValue = 0;
    char calcType;
    char restart = 'n';

I've got it to work by typecasting all of the values, but here is what I was trying before:

    case 'd': case 'D':
        calculatedValue = inputOne / inputTwo;
        printf("%s%d%s%d%s%1.2f", "The value of ", inputOne, " divided by ", inputTwo, " is equal to ", (float)calculatedValue);
        printf("\nPlease note: This is only accurate to two decimal places.");
        break;

This only returns 4 without any decimal places. I then thought it might be because the calculation is still done in int, so I changed it to this:

    case 'd': case 'D':
        (float)calculatedValue = inputOne / inputTwo;
        printf("%s%d%s%d%s%1.2f", "The value of ", inputOne, " divided by ", inputTwo, " is equal to ", (float)calculatedValue);
        printf("\nPlease note: This is only accurate to two decimal places.");
        break;

But this only returns 1082130432.00. No idea where that number comes from.

I got around it by using this:

    case 'd': case 'D':
        printf("%s%d%s%d%s%1.2f", "The value of ", inputOne, " divided by ", inputTwo, " is equal to ", (float)calculatedValue = (float)inputOne / (float)inputTwo);
        printf("\nPlease note: This is only accurate to two decimal places.");
        break;

Because it is in a switch statement, I could have got around it just by using a float instead of the original integer, but I wrote this code to try to understand typecasting a bit better, and I was under the impression it would work without having to typecast everything. Could anyone shed some light? My lecturer used the typecast in the printf and it worked fine. In his example he is trying to find the average age of students, and he did so like this:

printf("\nAverage age: %1.2f", (cAge1 + cAge2 + cAge3) / (float)cNumStudents);

Hopefully that makes sense, and sorry for such a long post for a simple question; I'm just trying to understand why I had to typecast everything (my lecturer said you only need to do it to one value).

Cheers

Edit:

I just tried the following, but it returns 0.00 rather than the answer.

    calculatedValue = (float)inputOne / inputTwo;
    printf("%s%d%s%d%s%1.2f", "The value of ", inputOne, " divided by ", inputTwo, " is equal to ", calculatedValue);
    printf("\nPlease note: This is only accurate to two decimal places.");
    break;

Upvotes: 1

Views: 429

Answers (4)

Jabberwocky

Reputation: 50774

The problematic part of your program boils down to this:

    int inputOne = 10, inputTwo = 3, calculatedValue;
    calculatedValue = inputOne / inputTwo;

calculatedValue is an int, and therefore it cannot have a fractional part. So here calculatedValue contains 3 after the division.

Now if you print it with printf("%f", (float)calculatedValue) you will get an output of 3.000000, because the (float) cast will simply convert the integer value in calculatedValue (=3) to a float (=3.0).

In order to get the expected result you might want to do this:

    int inputOne = 10, inputTwo = 3;
    float calculatedValue;
    calculatedValue = inputOne / inputTwo;

But again calculatedValue will contain 3.0 and not the expected 3.33333. Here the problem is that the expression inputOne / inputTwo is an int whose value is 3 (remember, both inputOne and inputTwo are ints), so basically you still assign the int value 3 to the float variable calculatedValue.

In order to get the expected result you can do this:

    float inputOne = 10, inputTwo = 3;
    float calculatedValue;
    calculatedValue = inputOne / inputTwo;

All variables being of type float, the expression inputOne / inputTwo will be a float and calculatedValue will eventually contain 3.33333.

Alternatively, you can keep the inputs as ints and cast just one operand:

    int inputOne = 10, inputTwo = 3;
    float calculatedValue;
    calculatedValue = (float)inputOne / inputTwo;

Here the expression (float)inputOne / inputTwo is of type float (we divide a float by an int), and again you'll get the expected result.
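To put all of the cases side by side, here is a small self-contained sketch (the literal values stand in for the question's user input):

    #include <stdio.h>

    int main(void) {
        int a = 10, b = 3;

        int   r1 = a / b;          /* integer division: r1 == 3          */
        float r2 = a / b;          /* still integer division: r2 == 3.0f */
        float r3 = (float)a / b;   /* float division: r3 == 3.3333f      */
        float r4 = a / (float)b;   /* casting either operand works       */

        printf("%d %.4f %.4f %.4f\n", r1, r2, r3, r4);   /* 3 3.0000 3.3333 3.3333 */
        return 0;
    }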

Upvotes: 0

Steve Summit

Reputation: 47952

You have to decide if you're writing a calculator for integer values, or floating-point values.

If you're writing a calculator for integer values, you don't need any casts. When you write

    calculatedValue = inputOne / inputTwo;

you will get an integer result with the remainder discarded, but that's the appropriate result for an integer-only calculator.
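For instance, with two int operands (names here are just for illustration):

    int q = 7 / 2;   /* q == 3: the quotient, with the fraction discarded   */
    int r = 7 % 2;   /* r == 1: the discarded remainder, via the % operator */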

If you want a calculator for floating-point values, your best bet will be to declare inputOne, inputTwo, and calculatedValue all as double. In that case, you still won't need any casts, because now, when you write

    calculatedValue = inputOne / inputTwo;

you will get floating-point division and a floating-point result.
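For instance, a cut-down sketch of what the division then looks like (using the question's variable names, inside a program that includes <stdio.h>):

    double inputOne = 10, inputTwo = 4, calculatedValue;

    calculatedValue = inputOne / inputTwo;   /* floating-point division throughout */
    printf("%.2f\n", calculatedValue);       /* prints 2.50 - no casts anywhere    */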

As others have explained, if for whatever reason you have inputOne and inputTwo as integers, but you want a floating-point result, you will have to declare calculatedValue as double, and write

    calculatedValue = (double)inputOne / inputTwo;

You've also asked about the line

printf("%s%d%s%d%s%1.2f", "The value of ", inputOne, " divided by ", inputTwo, " is equal to ", (float)calculatedValue = (float)inputOne / (float)inputTwo);

and claimed that it works somehow. But this is not C, so we can't explain it. (What compiler are you using?) In C, the expression

    (float)calculatedValue = (float)inputOne / (float)inputTwo

is illegal, because the left-hand side

    (float)calculatedValue

is an rvalue, and is not assignable.
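You can verify this with any conforming compiler; gcc, for example, rejects it with a diagnostic along these lines:

    int calculatedValue = 0, inputOne = 10, inputTwo = 3;
    (float)calculatedValue = (float)inputOne / (float)inputTwo;
    /* gcc: error: lvalue required as left operand of assignment */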

Upvotes: 1

chux

Reputation: 153498

    int inputOne, inputTwo;
    calculatedValue = (float)inputOne / inputTwo;

The cast is not needed (there are alternatives), yet it is used to ensure the division is computed with floating-point math rather than integer math.

Yet this is only one of the steps that needs review: what is the type of calculatedValue? If that is also an int, the float quotient is converted right back to an int, and the FP division was of no use.

    int calculatedValue = (float)inputOne / inputTwo;  // poor code

Instead, use a floating-point (FP) type:

    float calculatedValue = (float)inputOne / inputTwo;  // better

Since calculatedValue, as a float, is converted to a double for printing, consider using double math for all typical FP operations, and reserve float for select cases where space or speed matters.

    double calculatedValue = (double)inputOne / inputTwo;  // better yet
    printf("%.2f", calculatedValue);

In general, avoid casting. It tends to hide problems. Alternatives:

    double calculatedValue = 1.0 * inputOne / inputTwo;
    // or
    double calculatedValue = inputOne;
    calculatedValue /= inputTwo;

Note that the 1 in "%1.2f" serves no purpose. 1 is the specified minimum width of the printed number, and a number always prints with at least 1 character. "%1.2f" will always print at least 4 characters due to the ".2" part. Consider "%.2f" instead.
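A quick demonstration (the first two lines produce identical output; only a width wider than the number actually pads):

    printf("[%1.2f]\n", 3.333333);   /* prints [3.33]     */
    printf("[%.2f]\n",  3.333333);   /* prints [3.33]     */
    printf("[%8.2f]\n", 3.333333);   /* prints [    3.33] */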

Upvotes: 1

SLaks

Reputation: 887453

Your problem is the division itself.

Dividing two ints produces another int. Since it's an int, it cannot possibly have a decimal part.

Casting that int to float cannot magically reintroduce the decimal that it never had.

You need to cast either of the operands to float so that the division itself doesn't produce an int. You don't need any other casts.
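A minimal sketch of the difference (a and b are hypothetical stand-ins for the question's inputs):

    int a = 10, b = 3;
    float tooLate = (float)(a / b);   /* 3.00 - the cast runs after the int division */
    float correct = (float)a / b;     /* 3.33 - casting one operand makes it float   */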

Upvotes: 2
