galinette

Reputation: 9292

-funsafe-math-optimizations: same formula on two different lines, different result

I have the following code in a loop:

while(true)
{
    float i1, i2;

    if(y==0)
    {
        i1 = 0;
    }
    else
    {
        //if y==108, this gives 74.821136 (note the two last digits)
        i1 = ((values[y]+values[y+1])-values[1])*0.5f;
    }

    if(y+2==values.size())
    {
        i2 = values[y+1];
    }
    else
    {
        //if y==107, this gives 74.821129 (note the two last digits)
        i2 = ((values[y+1]+values[y+2])-values[1])*0.5f;
    }

    if(i1<=t && t<i2) {
        break;
    }
    else if(t<i1) {
        y--;
    }
    else {
        y++;
    }
}

This loop gets evaluated for y=107 (with t=74.821133), where i2 comes out as 74.821129, and then for y=108, where i1 comes out as 74.821136.

As you can see, i2 when y=107 is slightly different from i1 when y=108, even though the lines that calculate these two values are identical.

I understand that -funsafe-math-optimizations reorganizes math formulas using algebra rules, which may lead to numerical errors due to finite precision. But here, two identical formulas seem to be optimized differently, which in this example leads to an infinite loop (the loop searches, for a given float t, for the y value for which i1 <= t < i2).

Is this a faulty gcc 4.8.0 behavior?

If I create a function:

float getDifValue(int y) const { return ((values[y]+values[y+1])-values[1])*0.5f; }

And then use it in the loop:

    if(y==0)
    {
        i1 = 0;
    }
    else
    {
        i1 = getDifValue(y);
    }

    if(y+2==values.size())
    {
        i2 = values[y+1];
    }
    else
    {
        i2 = getDifValue(y+1);
    }

Am I guaranteed that i2 for y=107 and i1 for y=108 will produce the same result? Or can the compiler inline getDifValue and optimize it differently in the two places?
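
One thing I'm considering, to pin the computation down, is marking the helper as non-inlinable with GCC's noinline function attribute. This is only a sketch (the Table struct below stands in for my real class), and I don't know whether it is actually guaranteed to be enough under -funsafe-math-optimizations:

#include <vector>

struct Table
{
    std::vector<float> values;

    // GCC-specific: keep this out-of-line so both call sites execute
    // the exact same machine code for the formula.
    __attribute__((noinline))
    float getDifValue(int y) const
    {
        return ((values[y] + values[y+1]) - values[1]) * 0.5f;
    }
};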

Thanks

Upvotes: 3

Views: 1257

Answers (3)

galinette

Reputation: 9292

After looking at the disassembly, it seems that -funsafe-math-optimizations does change

float i1 = ((values[y]+values[y+1])-values[1])*0.5f;
float i2 = ((values[y+1]+values[y+2])-values[1])*0.5f;

into:

float i1 = ((values[y+1]-values[1])+values[y])*0.5f;
float i2 = ((values[y+1]-values[1])+values[y+2])*0.5f;

presumably so that it can compute (values[y+1]-values[1]) only once.

As a result, i2 for y==107 and i1 for y==108 are now computed slightly differently, and FPU rounding makes the results differ.
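
Here is a small standalone sketch of the effect. The inputs are illustrative, not my actual data, and it assumes strict single-precision IEEE arithmetic (SSE rather than x87), compiled without fast-math so each line is evaluated exactly as written:

#include <cstdio>

int main()
{
    float a = 1.0f;           // plays the role of values[y] / values[y+2]
    float b = 16777216.0f;    // 2^24, plays the role of values[y+1]
    float c = 1.0f;           // plays the role of values[1]

    float original     = ((a + b) - c) * 0.5f;  // as written in the source
    float reassociated = ((b - c) + a) * 0.5f;  // as rewritten by the optimizer

    // Under strict single precision this prints 8388607.5 and 8388608,
    // one ULP apart, even though the two expressions are algebraically equal.
    std::printf("%.9g\n%.9g\n", original, reassociated);
    return 0;
}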

Writing the calculation as a separate function of y solves the issue. But the question about the problem potentially reappearing if the compiler decides to inline and optimize is still open.

Upvotes: 3

David Schwartz

Reputation: 182789

Even x=y; if (x==y) ... is not guaranteed to work with these optimizations. It may, for example, wind up comparing a value in a register to a value in memory, and the value in memory may have less precision.

This is possibly what's causing the issue here. In one case, a value could be used from a floating point register and in the other case, there aren't enough registers and a value must be written to memory and then read back. Perhaps i1 stays in the very last available register, but i2 has to go in memory.

Or it could be something else entirely. But it's not unexpected.
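
Here's a hedged sketch of the kind of pattern I mean; whether the comparison ever actually fails depends on the target (x87 vs. SSE), the optimization level, and flags like -ffast-math, so treat it as an illustration rather than a guaranteed reproduction:

#include <cstdio>

int main(int argc, char **)
{
    // Use argc so the division cannot be folded away at compile time.
    double y = 1.0 / (argc + 2);    // may live in an 80-bit x87 register
    double x = y;                   // may be spilled to 64-bit memory

    if (x == y)
        std::printf("equal\n");
    else
        std::printf("not equal\n"); // possible when the two precisions differ
    return 0;
}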

Upvotes: 2

maze-cooperation

Reputation: 61

It seems to me that you are comparing digits that are beyond the machine precision of floating point numbers (roughly 1e-7 for float and 1e-16 for double). That means you are printing more digits than are meaningful. If you were to output the variables in their binary representation instead of as decimal values, I'd guess they would turn out to be the same. If you're worried that 1e-7 is not enough, I suggest using doubles.
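
For example, printing the values with %a shows their exact (hexadecimal) floating point representation, independent of how many decimal digits the formatter happens to show. The two literals below are only stand-ins for i1 and i2, not the real bit patterns from the question:

#include <cstdio>

int main()
{
    float i1 = 74.821136f;   // stand-in for the question's i1
    float i2 = 74.821129f;   // stand-in for the question's i2

    // %a prints the exact value, so you can tell whether the two
    // results really are the same float or two adjacent floats.
    std::printf("i1 = %a\n", i1);
    std::printf("i2 = %a\n", i2);
    return 0;
}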

Upvotes: -1
