Reputation: 1001
So, I know that many C-style languages have decrement (--) and increment (++) operators, which apply the mutation either before or after the variable's value is used in the enclosing expression.
What happens when a post-increment occurs in a return statement? I'm not asking about behaviour, but about implementation.
Given a virtual machine (e.g. JavaScript/JVM) or physical machine (e.g. compiled C++), are the generated opcodes something like the following? (Assuming stack-based arguments/returns.)
int x = 4, y = 8;
return f(++x) + y++;
Turns into something like this, maybe:
LOAD 4 A
LOAD 8 B
INC A
PUSH A
CALL F
POP INTO C
ADD C BY B
INC B
RET C
If so, how do compilers and VMs for these languages decide where to embed the increment instructions when the expression becomes complex, perhaps even a little Lispish?
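For instance, I might write something like this (a made-up example; f and g here are arbitrary int-returning functions):

return g(f(++x) + y++, f(x++) - --y);

Where exactly would the INC instructions land among the PUSHes and CALLs for an expression like that?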
Upvotes: 0
Views: 316
Reputation: 279195
Once the code has been optimized (either by a C++ compiler or by a JIT), I would expect something similar to the following:
Source:
int x = 4, y = 8;
return f(++x) + y++;
Instructions:
PUSH 5
CALL f
POP INTO A
ADD 8 to A
RET A
The instructions I've listed are correct (unless I've made some silly error), and they only require code transformations that I know optimizers are capable of making. Basically, results that are never used are not computed (dead-code elimination), and operations whose results are known are evaluated at optimization time (constant folding).
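To make those two transformations concrete, here's a sketch (not literal compiler output) of what the optimizer effectively does, assuming x and y are never read again after the return:

int x = 4, y = 8;
return f(++x) + y++;
// ++x folds to the constant 5, since x is known to be 4.
// y++ yields the known value 8; the store of 9 back into y is dead and removed.

// Effectively:
return f(5) + 8;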
It's quite possible that on a particular architecture there's a more efficient way to do it, though, in which case there's a good chance that the optimizer will know about it and use it. Optimizers often beat my expectations, since I'm not an expert assembly programmer. In this case, it's quite likely that the calling convention dictates the same location for the return value of f and of this code. So there might be no need for any POP -- the return register/stack location can just have 8 added to it. Furthermore, if the function f is inlined, then optimization will be applied after inlining.
So, for example, if f returns input * 2, then the whole function might optimize to:
RET 18
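Spelling out the arithmetic behind that RET 18, with that hypothetical f written out in Java-like source:

int f(int input) { return input * 2; }

// After inlining f and folding constants:
//   f(++x) + y++  ->  (5 * 2) + 8  ->  10 + 8  ->  18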
Upvotes: 2
Reputation: 31057
You can see exactly what the compiler generates using javap. For example:
int x = 4, y = 8;
return f(++x) + y++;
was compiled to this sequence of bytecode:
0: iconst_4
1: istore_1
2: bipush 8
4: istore_2
5: aload_0
6: iinc 1, 1
9: iload_1
10: invokevirtual #2; //Method f:(I)I
13: iload_2
14: iinc 2, 1
17: iadd
18: ireturn
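For reference, here's a minimal class (the names are my own) that compiles to essentially that sequence for test(); compile it with javac and run javap -c on the result:

class Test {
    int f(int i) { return i; }  // any instance method taking and returning int will do

    int test() {
        int x = 4, y = 8;
        return f(++x) + y++;
    }
}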
Of course, it's up to the JVM how to turn this bytecode into assembler - see Disassemble Java JIT compiled native bytecode for how to inspect the results in OpenJDK 7.
Upvotes: 1
Reputation: 200138
The Java runtime is far removed from your source code, and even from the bytecode. Once a method is JIT-compiled, the resulting machine code is aggressively optimized. But more importantly, the details of this optimization are well beyond any specification. If it interests you, you can dive deep into an implementation like HotSpot, but everything you learn there will be specific to the platform, version, build number, JVM startup arguments, and even the individual run of the JVM.
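If you do want to peek, a starting point (assuming a HotSpot JVM; PrintAssembly additionally requires the hsdis disassembler plugin, and YourMainClass is a placeholder):

java -XX:+PrintCompilation YourMainClass
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly YourMainClass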
Upvotes: 3