Masked Man

Reputation: 11075

Does it matter which operand of integer division is static_casted to obtain a float result?

To obtain a float result from division of two ints, we static_cast one of the operands to float, like so:

int a = 2;
int b = 3;
float c = static_cast<float>(a) / b;  // c ≈ 0.666667
float d = a / static_cast<float>(b);  // d ≈ 0.666667

In the above case, it shouldn't matter which operand is static_casted. However, suppose one of the operands is a compile-time constant, while the other is not, like so:

int a = foo();  // value not available at compile-time.
const int b = SOME_CONSTANT;  // compile-time constant.

Does compiler optimization make a difference between the two static_casts described below?

float c = static_cast<float>(a) / b;

In this case, the compiler can replace b with its value, but since a isn't known until runtime, the cast can happen only at runtime.

float d = a / static_cast<float>(b);

In this case, however, the compiler knows b, so it could do the cast at compile time and replace b directly with the float value.

In both cases, after the cast, an int/float (or float/int) division happens at runtime.

Is this intuition correct, or can compilers be smart enough to optimize equally well in both cases? Are there any other factors I have overlooked?
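
To illustrate the second case, here is a minimal sketch (with 42 standing in as a placeholder for SOME_CONSTANT): since b is a compile-time constant, the cast itself is a constant expression, which a compiler could fold.

constexpr int b = 42;                        // stand-in for SOME_CONSTANT
constexpr float fb = static_cast<float>(b);  // cast is evaluated at compile time
static_assert(fb == 42.0f, "cast folded at compile time");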

Upvotes: 2

Views: 65

Answers (1)

molbdnilo

Reputation: 66459

No int/float or float/int divisions happen at runtime. Ever.

Since one operand is being cast - explicitly converted - to float, the other will be implicitly converted to float for the division (the usual arithmetic conversions).

Both your cases are equivalent to

static_cast<float>(a) / static_cast<float>(b);
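
As a quick sanity check, here is a minimal complete program (with arbitrary values, assuming nothing beyond the conversions described above) showing that all three forms yield the same float:

#include <cassert>

int main() {
    int a = 2;
    const int b = 3;
    float c = static_cast<float>(a) / b;                      // b implicitly converted to float
    float d = a / static_cast<float>(b);                      // a implicitly converted to float
    float e = static_cast<float>(a) / static_cast<float>(b);  // both converted explicitly
    assert(c == d && d == e);  // same float division in all three cases
    return 0;
}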

Upvotes: 5
