Thorsten

Reputation: 213

Why does int/float multiplication lead to different results?

If I multiply a float and an integer as below, why do the multiplications lead to different results? My expectation was a consistent result. I thought that in both cases the int value gets implicitly converted to a float before the multiplication, but there seems to be a difference. What is the reason for this different handling?

int multiply(float val, int multiplier)
{
    return val * multiplier;
}

int multiply2(float val, int multiplier)
{
    return float(val * multiplier);
}

float val = 1.3f;

int result0 = val * int(10); // 12
int result1 = 1.3f * int(10); // 13
int result3 = multiply(1.3f, 10); // 12
int result4 = multiply2(1.3f, 10); // 13

Thank you, Thorsten

Upvotes: 9

Views: 858

Answers (1)

PlasmaHH

Reputation: 16046

What likely happens for you is:

Assuming IEEE 754 or similar floats, 1.3 cannot be represented exactly; the stored value is something like 1.2999999, which multiplied by 10 is 12.999999, which then truncated to int is 12.
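For illustration, a minimal sketch of what that looks like (assuming IEEE 754 single precision; the double(val) cast stands in for the wider intermediate precision the hardware may use):

#include <cstdio>

int main()
{
    float val = 1.3f;
    // The stored value is the closest float to 1.3, which is slightly below it:
    std::printf("stored    = %.10f\n", val);                // 1.2999999523
    // Carried out in wider precision, the product falls just short of 13:
    std::printf("product   = %.10f\n", double(val) * 10);   // 12.9999995232
    // Truncation toward zero then yields 12:
    std::printf("truncated = %d\n", int(double(val) * 10)); // 12
}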

However, 1.3f * 10 can be evaluated at compile time, most likely yielding an exact result of 13.

Depending on how your code is actually structured, which compiler is used, and with which settings, either expression could evaluate to 12 or 13, according to whether the computation happens at run time or at compile time.
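To make the contrast visible in one place, here is a sketch (C++11 or later; the volatile qualifier is only there to keep the compiler from folding the run-time case):

#include <cstdio>

int main()
{
    // Forced compile-time evaluation: the compiler folds the product itself,
    // and with correct IEEE single-precision rounding 1.3f * 10 comes out as
    // exactly 13.0f, so the conversion gives 13.
    constexpr int at_compile_time = int(1.3f * 10);

    // The same expression at run time may pass through wider intermediate
    // precision (e.g. x87 registers) and truncate to 12 instead.
    volatile float val = 1.3f;
    int at_run_time = int(val * 10);

    std::printf("compile time: %d\n", at_compile_time); // 13
    std::printf("run time:     %d\n", at_run_time);     // 12 or 13, platform-dependent
}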

For completeness, I could reproduce it with the following code:

extern int result0; // defined in another translation unit,
extern int result1; // so the stores below cannot be optimized away

float val = 1.3f;   // may be modified elsewhere, so val * 10 happens at run time

void foo()
{
    result0 = val * int(10);  // run-time multiply, truncated to 12
    result1 = 1.3f * int(10); // constant-folded at compile time, yields 13
}
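As a side note, if a consistent 13 is what the surrounding code actually needs, rounding explicitly instead of relying on truncation sidesteps the whole issue. A minimal sketch using std::lround from <cmath> (the name multiply_rounded is just for illustration):

#include <cmath>

int multiply_rounded(float val, int multiplier)
{
    // Round to the nearest integer instead of truncating toward zero,
    // so 12.9999995 and 13.0 both come out as 13.
    return static_cast<int>(std::lround(val * multiplier));
}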

Upvotes: 10
