Reputation: 55
I'm trying to time the performance of my program using System.currentTimeMillis() (or alternatively System.nanoTime()), and I've noticed that every time I run it, it gives a different result for the time it took to finish the task.
Even the straightforward test:
long totalTime;
long startTime;
long endTime;

startTime = System.currentTimeMillis();
for (int i = 0; i < 1000000000; i++)
{
    for (int j = 0; j < 1000000000; j++)
    {
    }
}
endTime = System.currentTimeMillis();

totalTime = endTime - startTime;
System.out.println("Time: " + totalTime);
produces all sorts of different outputs, from 0 to 200. Can anyone say what I'm doing wrong or suggest an alternative solution?
Upvotes: 3
Views: 305
Reputation: 198103
This is exactly the expected behavior -- it's supposed to get faster as you rerun the timing. As you rerun a method many times, the JIT devotes more effort to compiling it to native code and optimizing it; I would expect that after running this code for long enough, the JIT would eliminate the loop entirely, since it doesn't actually do anything.
The best and simplest way to get precise benchmarks on Java code is to use a tool like Caliper that "warms up" the JIT to encourage it to optimize your code fully.
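To illustrate the warm-up idea, here is a minimal hand-rolled sketch (not Caliper's actual API; the class name, the doWork method, and the iteration counts are just placeholders for illustration). The point is to run the measured code many times before timing it, and to consume its result so the JIT cannot discard the work:

public class WarmupTimer {
    // The work being measured must produce a result that is actually used,
    // otherwise the JIT may eliminate it entirely.
    static long doWork() {
        long sum = 0;
        for (int i = 0; i < 10000000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up runs: give the JIT a chance to compile and optimize doWork().
        long sink = 0;
        for (int i = 0; i < 20; i++) {
            sink += doWork();
        }

        // Timed run; nanoTime() is intended for measuring elapsed time.
        long start = System.nanoTime();
        sink += doWork();
        long elapsed = System.nanoTime() - start;

        // Printing sink keeps the computed results "live".
        System.out.println("Time (ns): " + elapsed + " (sink=" + sink + ")");
    }
}

A proper benchmarking tool does this more carefully (multiple measured iterations, statistics, avoiding dead-code elimination), but the structure is the same.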
Upvotes: 2
Reputation: 533530
The loop doesn't do anything, so you are really timing how long it takes the JIT to detect that the loop is pointless.
Timing the loops more accurately won't help; you need to do something at least slightly useful inside them to get repeatable results.
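For example, a rough sketch along those lines, assuming that accumulating a sum and printing it afterwards counts as "slightly useful" (the printed result stops the JIT from removing the loop):

long start = System.currentTimeMillis();
long sum = 0;
for (int i = 0; i < 1000000000; i++) {
    sum += i;   // the result is used below, so the loop cannot be eliminated
}
long totalTime = System.currentTimeMillis() - start;
System.out.println("Time: " + totalTime + " ms (sum=" + sum + ")");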
I suggest you try the -server option if you are running on 32-bit Windows (where the client VM is the default).
A billion billion clock cycles would take about ten years, so it's clearly not really iterating that many times.
Upvotes: 5