user577304

Java Optimization Questions

A lot of questions popped up while I was looking at my code. I would like to know what people think about some code snippets. For each one, I am wondering which version is more optimized, and which is better practice or cleaner. Most likely, any possible optimization is on the scale of nanoseconds; I am just wondering theoretically. :)

1) The first one is a frequently called routine that needs to report an error. Right now it looks like this:

if(success)
 //Stuff
else
 reportInternalError();

Is there any benefit to changing it to

if(success) {
 //Stuff
 return;
}
reportInternalError();

2) reportInternalError() is supposed to only report the error the first time it happens, and it can potentially be called very frequently. Currently, it looks like this:

if(!reported) {
 //Report error
 reported = true;
}

I am thinking about changing it to

if(reported)
 return;
//Report error
reported = true;

3) The last one is about duplicated code. I am using the exact same code twice, and I am trying to figure out a better way of writing it.

try {
    //Stuff
} catch (Exception e) {
}

while(loop) {
    try {
        //Same Stuff
    } catch (Exception e) {
        loop = false;
    }
}

Versus

first = true;
do {
    try {
        //Stuff
    } catch (Exception e) {
        if(!first)
            loop = false;
    }
    first = false;
} while(loop);

Thanks!

Upvotes: 1

Views: 429

Answers (6)

Bert F

Reputation: 87533

I would put a premium on readability and maintainability over micro-optimizations. The gains from micro-optimization are really negligible (and often nil, given the sophistication of today's compilers, JITs, and processors).

My choices:

1) I try to put the normal case along the main code path and the exception cases in the branches:

if (! success) {
    reportInternalError();
}

// Stuff

2) I favor the original approach. The other way assumes the method will always just want to return, but what if it needs to do more stuff in the future? I don't want that extra stuff to go in the branch:

if (! reported) {
    // Report error
    reported = true;
}
// extra stuff to do

3) The refactored version seems more maintainable to me. I'm on the fence about replacing targetExceptionWasThrown with a simple break:

boolean isFirstIteration = true;
boolean targetExceptionWasThrown = false;
do {
    try {
        //Stuff
    } catch (Exception e) {
        if (! isFirstIteration) {
            targetExceptionWasThrown = true;
        }
    }
    isFirstIteration = false;
} while (! targetExceptionWasThrown);
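
That break variant might look roughly like this (the exception type is just a placeholder):

boolean isFirstIteration = true;
while (true) {
    try {
        //Stuff
    } catch (Exception e) {            // placeholder exception type
        if (! isFirstIteration) {
            break;                     // a failure after the first pass ends the loop
        }
    }
    isFirstIteration = false;
}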

Upvotes: 2

Mihai Toader

Reputation: 12243

I agree with the other answers that you shouldn't try to outsmart the compiler for performance and should instead focus on readability. What I use when I have cases 1 and 2 is this:

case 1:

if( ! success ) {
  reportInternalError();
  return;
}

// do Stuff

and for case 2:

if(reported)
 return;

//Report error
reported = true;

Basically this is the guard clause pattern: http://www.c2.com/cgi/wiki?GuardClause. It works well when you want to make sure some invariant is met before running your code. I think it is ok to break the no-multiple-returns rule in this case, and it makes my code easier to read because I can check at a glance what conditions should be met before running the code.
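
Put together, a rough sketch of both guards in one place might look like this (the enclosing method name and the reported field are just assumed for illustration):

void handleResult(boolean success) {
    if( ! success ) {
        reportInternalError();
        return;
    }

    // do Stuff
}

void reportInternalError() {
    if(reported)
        return;

    //Report error
    reported = true;
}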

Regarding #3, I would write it like this:

try {
    boolean finished = false;
    do {
        //Same Stuff

        if ( /* end of loop test */ ) {
            finished = true;
        }
    } while ( ! finished );
} catch (Exception e) {
    // handle exception
}

Upvotes: 1

Fred Foo

Reputation: 363547

Both theoretically and practically, your compiler should handle this kind of micro-optimization, not you. Focus on clarity instead and stick to the single-return idiom: every function/method should have one return statement, at the end.

In the third example, your intention seems to be to execute the try/catch block at least twice. Try

for (int i=0; i<2 || loop; i++) {
    // put the try/catch block here
}
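
A rough sketch with the try/catch filled in might be (the exception type is a placeholder):

boolean loop = true;
for (int i=0; i<2 || loop; i++) {
    try {
        // Stuff
    } catch (Exception e) {   // placeholder exception type
        loop = false;         // the i<2 test still forces a second pass
    }
}

One subtle difference from the two-block original: an exception on the very first pass also clears loop here, so the loop stops after the forced second pass.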

Upvotes: 1

polerto

Reputation: 1790

Unless you are writing code at a very low level (e.g. for a special microprocessor in assembly language), such improvements would have only a tiny effect on performance, if any. I agree that readability is more crucial at a high level.

Upvotes: 1

Nikita Rybak

Reputation: 68006

Regarding #3, the exception will 'exit' the loop for you; just put the try-catch outside the loop.

try {
    //Stuff
} catch (Exception e) {
}

try {
    while(true) {
        //Same Stuff
    }
} catch (Exception e) {
}

But since you still have some amount of duplication, employing break in the second example might be better (instead of the loop variable).

first = true;
while (true) {
    try {
        //Stuff
    } catch (Exception e) {
        if(!first)
            break;
    }
    first = false;
}

Otherwise, you could create a separate method for //Stuff. It would keep the clarity of the first example, but remove the duplication.
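
A rough sketch of that extraction (doStuff is a made-up name here):

void doStuff() {
    //Stuff
}

try {
    doStuff();
} catch (Exception e) {
}

try {
    while(true) {
        doStuff();
    }
} catch (Exception e) {
}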

Upvotes: 2

jqno

Reputation: 15520

Don't worry about it. In this day and age, computer power is a lot cheaper than brain power, so optimize your code for readability, not for maybe gaining a few microseconds here and there.

Besides, the JVM's JIT compiler will most likely do these optimizations for you. And even if it doesn't, it has become so complex that modifications that look like performance gains may actually slow your program down. Josh Bloch gave an excellent presentation on this subject at Devoxx last year.

Upvotes: 4
