Reputation: 26355
In C#, I am doing something like this:
float a = 4.0f;
float b = 84.5f;
int ans = a * b;
However, the compiler states that a cast is required to go from float -> int in assignment. Of course I could probably do this:
int ans = (int)a * (int)b;
But this is ugly and redundant. Is there a better way? I know in C++ I could do this:
int ans = int(a * b);
At least that is a little easier on the eyes. But it seems I can't do this in C#.
Upvotes: 2
Views: 420
Reputation: 73301
You should consider the needs of your application before the look of the code. Converting the result of float math to an int is not something to be taken lightly. The real question is what you want out of your final answer.
If you cast first, a becomes 4 and b becomes 84, and the product is 336. However, if you cast to an int after you do the math, the result is 338.
Is being off by 2 good enough for you? Then you have to do
int ans = (int)a * (int)b;
// ans = 336
If you want 338 then you have to do
int ans = (int)(a * b);
// ans = 338
I would really consider the side effects of what you are doing. Ideally you should have a policy for rounding the two floats before doing the math. Remember, casting to an int just cuts the decimal off (it truncates toward zero), so 84.9 becomes 84. That could greatly change your final result. You need to consider what is required in your application.
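As a minimal sketch of that truncation-versus-rounding point (Math.Round is just one possible rounding policy; by default it rounds midpoints to the nearest even number):
float b = 84.9f;
// Casting truncates toward zero: 84.9 becomes 84.
int truncated = (int)b;           // 84
// Rounding to the nearest integer first preserves more of the value.
int rounded = (int)Math.Round(b); // 85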
Upvotes: 9
Reputation: 37668
int ans = (int)a * (int)b;
int ans = (int)(a * b);
These two statements are not equivalent and will produce different results. In one case you give up precision before the multiplication; in the other, after it.
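A minimal sketch with the values from the question makes the difference concrete:
float a = 4.0f;
float b = 84.5f;
// Truncate each operand first, then multiply: 4 * 84.
int before = (int)a * (int)b; // 336
// Multiply at full float precision, then truncate: (int)338.0f.
int after = (int)(a * b);     // 338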
Upvotes: 2