wilmaed

Reputation: 41

Why was arg = args[n++] more efficient than two separate statements with earlier compilers?

From the book "Core Java for the Impatient", chapter on increment and decrement operators:

String arg = args[n++];

sets arg to args[n], and then increments n. This made sense thirty years ago when compilers didn’t do a good job optimizing code. Nowadays, there is no performance drawback in using two separate statements, and many programmers find the explicit form easier to read.

I thought such use of the increment and decrement operators was only about writing less code, but according to this quote that wasn't the case in the past.

What was the performance benefit of writing statements such as String arg = args[n++]?
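For reference, the "two separate statements" form the book is comparing against would presumably be something like:

String arg = args[n];
n++;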

Upvotes: 2

Views: 184

Answers (3)

Witold Kaczurba

Reputation: 10515

CPU architectures and compilers have become better over the years; given those improvements, I would say there is no single answer to this.

From the architecture standpoint: many processors support STORE with POINTER AUTO-INCREMENT in a single CPU cycle, so in the past the way you wrote the code could affect the result (one operation versus more). DSP architectures were notably good at parallelizing such things, e.g. TI DSPs like the C54xx, with post-increment and post-decrement instructions and instructions that operate on circular buffers: "ADD *AR2+, AR2–, A ; after accessing the operands, AR2 is incremented by one" (from the TMS320C54x DSP reference set). ARM cores also feature instructions that allow similar parallelism (the VLDR and VSTR instructions; see the ARM documentation).

From the compiler standpoint: the compiler looks at how a variable is used within its scope (which was not always the case before). It can see whether the variable is reused later or not; it may be that a variable is incremented and then simply discarded, and what is the point of doing that? Nowadays the compiler has to track variable usage and can make smart decisions based on it (for example, since Java 8 the compiler must be able to spot "effectively final" variables, i.e. local variables that are never reassigned), as sketched below.
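A minimal sketch of that "effectively final" tracking (the class and variable names here are illustrative, not from the answer): a local variable that is never reassigned can be captured by a lambda, while reassigning it anywhere in its scope makes the same capture a compile-time error.

import java.util.function.Supplier;

public class EffectivelyFinalDemo {
    public static void main(String[] args) {
        int n = 0;                        // never reassigned: effectively final
        Supplier<Integer> s = () -> n;    // fine: the compiler has tracked n's usage
        System.out.println(s.get());

        int m = 0;
        m++;                              // reassigned, so m is not effectively final
        // Supplier<Integer> t = () -> m; // would not compile: a lambda may only
        //                                // capture final or effectively final locals
        System.out.println(m);
    }
}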

Upvotes: 1

user149341

Reputation:

Some processors, like the Motorola 68000, support addressing modes that specifically dereference a pointer, then increment it. For instance:

(image: excerpt from the MC68000 Programmer's Reference Manual)

Older compilers might conceivably be able to use this addressing mode on an expression like *p++ or arr[i++], but might not be able to recognize it split across two statements.
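As an illustrative sketch in Java rather than C (the method names and the copy loop are made up for this answer), the fused form below is the kind of pattern a simple compiler could map directly onto such a post-increment addressing mode, whereas in the split form it has to notice that the later increments belong to the preceding array accesses:

static void copyFused(String[] src, String[] dst) {
    int i = 0, j = 0;
    while (i < src.length) {
        dst[j++] = src[i++];   // use each index, then bump it, in one expression
    }
}

static void copySplit(String[] src, String[] dst) {
    int i = 0, j = 0;
    while (i < src.length) {
        dst[j] = src[i];       // same effect, but the increments are now
        j = j + 1;             // separate statements that an old compiler
        i = i + 1;             // may no longer fuse with the array access
    }
}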

Upvotes: 2

sanrnsam7

Reputation: 171

These operators were, and are, generally used for programmer convenience rather than to achieve performance, because the statement effectively gets split into two statements during compilation anyway. Apparently, the overhead of performing a post/pre-increment/decrement operator would be more than that of an already split two-line statement.
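To make the "gets split anyway" point concrete, here is a hypothetical comparison (the method names are made up): with a current javac both bodies compile to essentially the same handful of bytecode instructions, just in a slightly different order, and the JIT optimizes them the same way.

static String combined(String[] args, int n) {
    return args[n++];          // aload, iload, iinc, aaload, areturn
}

static String separate(String[] args, int n) {
    String arg = args[n];      // aload, iload, aaload, astore
    n++;                       // iinc
    return arg;                // aload, areturn
}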

Upvotes: 0
