aafulei

Reputation: 2205

In practice, is there any "lazy" evaluation in multiplication by zero at run time?

Will the machine or compiler make use of the property that zero multiplied by an integer is always zero? Consider

int f(int c, int d, int e, int g)
{
    // some complex calculations here...
}

int foo(int a, int b, int c, int d, int e, int g)
{
    return (a - b) * f(c, d, e, g);
}

At run time, if we pass arguments where a == b, then mathematically there is no need to calculate the result of f() (assuming no undefined or strange behavior): the result must always be 0.

I am not quite sure whether the machine / compiler might use some optimization technique to skip the calculation of f(). Or, asked the other way round: is it guaranteed that f() will always be called, no matter what the values of a and b are?

I am tagging this question with both C and C++ to avoid the slight chance that rules differ in C and C++ in this case. If so, please elaborate respectively.

Update Thanks for the discussion. From what I gather so far, the possible existence of a function's side effects is certainly a factor to consider. However, I would like to clarify that the helper function f() is not essential to my intent. Code like

int foo(int a, int b, int c, int d, int e, int f)
{
    return (a - b) * /* some complex expression with c, d, e, f */;
}

would also qualify. Apologies for not making it clear at the beginning.

Upvotes: 0

Views: 402

Answers (3)

mcleod_ideafix

Reputation: 11428

Unless the compiler can determine, at compile time, that (a - b) will always evaluate to 0, it won't add code to test for that case at run time.

The main reason is what has been discussed in other answers: the function that provides one of the operands can have side effects, and you don't normally want to prevent those from happening (if you did, you would have to add the check yourself).

The other reason is that, on some processors, the hardware already helps: a multiplication in which one of the operands is 0 can take fewer cycles than a regular one.

Note that this is different from what happens with short-circuit evaluation in conditional expressions: if ( (a - b) == 0 || f(c, d, e, f) == 0 ). In this case, the first condition can prevent f() from being executed at run time.

Upvotes: 1

Volglizolic

Reputation: 26

Compiler

The compiler usually (meaning most compilers) optimizes arithmetic calculations if they consist of constants. For example, i = 1 + 3; will be optimized to i = 4;, but the more complex the calculation, the fewer compilers will be able to optimize it. Compilers usually work recursively on tree structures, searching them for possible optimizations. So it makes no difference whether you add 2 or 20 constants, but it does make a difference whether the additions are inside a loop. For this reason, calling foo(a, a, c, d, e, f); is somewhat less likely to be optimized than calling foo(1, 1, c, d, e, f);.

If the compiler first inlines small functions and searches for arithmetic optimizations afterwards, then it is quite likely that, if the arguments are known at compile time, it will be able to optimize out all the extra instructions. After all, this is what it boils down to: can the compiler be sure that the result of foo is 0 without running the program?

Two things to note:

  • Compilers can selectively optimize different things (for gcc, via -O0, -O1, -O2, -O3 and other more specific flags)

  • Compilers themselves are written programs and not a magic black box. For the compiler to optimize foo, a developer must have written somewhere in there: check whether a subtraction has the same variable on both sides and, if so, substitute 0 for the result. And somewhere near that: check whether one factor of a multiplication is zero and substitute 0 for the product. All of this at compile time.

For the compiler to optimize at run time, instead of simply multiplying, the produced assembly would have to check each operand for zero before every multiplication. I don't think any compiler does that.

Processor

The processor is, for the most part, a dumb machine that does exactly what it's told. The multiplication is done by hardware that carries out some bitwise logic. For the processor to optimize this multiplication, the circuits that do the calculation must also have a part that says: if one of the multiplicands is 0, then the result is 0 and we need not do the multiplication. If someone has designed the processor to do that, then it is optimized. It depends on the implementation, but I think it's quite unlikely.

Upvotes: 1

Jake Schmidt

Reputation: 1699

This would require the compiler to generate branching, which it generally doesn't like to do. If you want the branching, make it explicit:

int f(int c, int d, int e, int g)
{
    // some complex calculations here...
}

int foo(int a, int b, int c, int d, int e, int g)
{
    if (a == b) return 0;
    return (a - b) * f(c, d, e, g);
}

As noted in the comments, f may also have side effects, in which case it is guaranteed not to be "optimized" away.

Upvotes: 1
