Unable to understand compilers' main optimizations

Martinus gives a good example of code where the compiler optimizes the code at run-time by calculating out the multiplication:

Martinus's code:

int x = 0;
for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
    x += x + x + x + x + x;
}
System.out.println(x);

His code after constant folding, a compiler optimization performed at compile time (thanks to abelenky for pointing that out):

int x = 0;
for (int i = 0; i < 100000000000; ++i) {
    x += x + x + x + x + x;
}
System.out.println(x);

This optimization technique seems trivial to me. I guess it may be one of the techniques Sun has recently started to treat as trivial.
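
A smaller constant expression makes the folding easy to see. Compiling the little class below (the class name Fold and the constant are just for illustration) and disassembling it with javap -c shows a single constant load of 86400 instead of the multiplications:

class Fold {
    public static void main(String[] args) {
        // javac folds the constant expression 60 * 60 * 24 at compile time,
        // so the bytecode contains only the literal 86400.
        int secondsPerDay = 60 * 60 * 24;
        System.out.println(secondsPerDay);
    }
}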

I am interested in two types of optimizations made by compilers:

  1. optimizations that are omitted in today's compilers as trivial, such as in Java's compiler at run time
  2. optimizations that are used by the majority of today's compilers

Please put each optimization technique in a separate answer.

Which techniques did compilers use in the 90s (1), and which do they use today (2)?

Upvotes: 1

Views: 386

Answers (4)

Mike Dunlavey

Reputation: 40699

Compiler books should provide a pretty good resource.

If this is obvious, please ignore it, but you're asking about low-level optimizations, the only ones that compilers can do. In non-toy programs, high-level optimizations are far more productive, but only the programmer can do them.
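
To make the distinction concrete, here is a rough, made-up sketch (the names are mine, not from the question) of a high-level optimization no compiler will perform for you: replacing a repeated linear search with a hash lookup.

import java.util.*;

class HighLevel {
    // A low-level optimizer can shave cycles off this loop,
    // but it will not change the O(n*m) algorithm itself.
    static int countHitsSlow(List<String> wanted, List<String> data) {
        int hits = 0;
        for (String w : wanted)
            if (data.contains(w))      // linear scan on every iteration
                hits++;
        return hits;
    }

    // The high-level, programmer-only rewrite: build a HashSet once.
    static int countHitsFast(List<String> wanted, List<String> data) {
        Set<String> lookup = new HashSet<String>(data);
        int hits = 0;
        for (String w : wanted)
            if (lookup.contains(w))    // constant-time expected lookup
                hits++;
        return hits;
    }

    public static void main(String[] args) {
        List<String> wanted = Arrays.asList("a", "b", "z");
        List<String> data = Arrays.asList("a", "b", "c", "d");
        System.out.println(countHitsSlow(wanted, data));  // prints 2
        System.out.println(countHitsFast(wanted, data));  // prints 2
    }
}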

Upvotes: 0

abelenky

Reputation: 64730

The optimization shown in your example, collapsing 100*1000*1000*1000 => 100000000000, is NOT a run-time optimization. It happens at compile time (and I wouldn't even call it an optimization).

I don't know of any optimizations that happen at run-time, unless you count VM engines that have JIT (just-in-time) compiling.

Optimizations that happen at compile time are wide-ranging and frequently not simple to explain. But they can include inlining small functions, re-arranging instructions for cache locality, re-arranging instructions for better pipelining or hyperthreading, and many, many other techniques.
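
As a rough sketch of the first of those, inlining (the method names below are hypothetical, and in Java the inlining is actually done by the JIT rather than javac), the optimizer effectively substitutes the callee's body into the caller so the call disappears:

class InlineSketch {
    // Before inlining: every call to square pays call/return overhead.
    static int square(int v) { return v * v; }
    static int sumOfSquares(int a, int b) { return square(a) + square(b); }

    // After inlining, the caller is effectively rewritten as:
    static int sumOfSquaresInlined(int a, int b) { return a * a + b * b; }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(3, 4));        // prints 25
        System.out.println(sumOfSquaresInlined(3, 4)); // prints 25
    }
}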

EDIT: Some F*ER edited my post... and then down-voted it. My original post clearly indicated that collapsing multiplication happens at COMPILE TIME, not RUN TIME, as the poster suggested. Then I mentioned I don't really consider collapsing constants to be much of an optimization. The pre-processor even does it.

Masi: if you want to answer the question, then answer the question. Do NOT edit other people's answers to put in words they never wrote.

Upvotes: 1

Corbin March

Reputation: 25734

How about loop unrolling?

for (i = 0; i < 100; i++)
    g ();

To:

for (i = 0; i < 100; i += 2)
{
    g ();
    g ();
}

From http://www.compileroptimizations.com/. They have many more - too many for an answer per technique.

Check out Trace Trees for a cool interpreter/just-in-time optimization.

Upvotes: 1

Pod

Reputation: 4129

Just buy the latest edition of the Dragon Book.

Upvotes: 4
