Kheng Boon Pek

Reputation: 41

Accuracy of System.nanoTime() to measure time elapsed decreases after a call to Thread.sleep()

I'm encountering a really unusual issue here. It seems that calling Thread.sleep(n), where n > 0, causes subsequent System.nanoTime() calls to be less predictable.

The code below demonstrates the issue.

Running it on my computer (rMBP 15" 2015, OS X 10.11, jre 1.8.0_40-b26) outputs the following result:

Control: 48497
Random: 36719
Thread.sleep(0): 48044
Thread.sleep(1): 832271

On a Virtual Machine running Windows 8 (VMware Horizon, Windows 8.1, jre 1.8.0_60-b27):

Control: 98974
Random: 61019
Thread.sleep(0): 115623
Thread.sleep(1): 282451

However, running it on an enterprise server (VMware, RHEL 6.7, jre 1.6.0_45-b06):

Control: 1385670
Random: 1202695
Thread.sleep(0): 1393994
Thread.sleep(1): 1413220

This, surprisingly, is the result I expect.

Clearly, the Thread.sleep(1) call affects the timing measurements in the code below. I have no idea why this happens. Does anyone have a clue?

Thanks!

public class Main {
    public static void main(String[] args) {
        int N = 1000;
        long timeElapsed = 0;
        long startTime, endTime = 0;

        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();

            timeElapsed += endTime - startTime;
        }

        System.out.println("Control: " + timeElapsed);

        timeElapsed = 0;

        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();

            timeElapsed += endTime - startTime;

            // busy work between timed regions (result intentionally unused)
            for (int j = 0; j < N; j++) {
                int k = (int) Math.pow(i, j);
            }
        }

        System.out.println("Random: " + timeElapsed);

        timeElapsed = 0;

        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();

            timeElapsed += endTime - startTime;

            try {
                Thread.sleep(0);
            } catch (InterruptedException e) {
                break;
            }
        }

        System.out.println("Thread.sleep(0): " + timeElapsed);

        timeElapsed = 0;

        for (int i = 0; i < N; i++) {
            startTime = System.nanoTime();
            //search runs here
            endTime = System.nanoTime();

            timeElapsed += endTime - startTime;

            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                break;
            }
        }

        System.out.println("Thread.sleep(1): " + timeElapsed);
    }
}

Basically I'm running a search within a while-loop which takes a break every iteration by calling Thread.sleep(). I want to exclude the sleep time from the overall time taken to run the search, so I'm using System.nanoTime() to record the start and finish times. However, as you can see above, this doesn't work well.

Is there a way to remedy this?

Thanks for any input!

Upvotes: 4

Views: 1103

Answers (4)

Krzysztof Tomaszewski

Reputation: 1174

This behaviour may not be related to Java. A very similar issue is reported here: Why does Sleep() slow down subsequent code for 40ms? - and there are some answers there.

Upvotes: 0

the8472

Reputation: 43150

This is a complex topic because the timers used by the JVM are highly CPU- and OS-dependent and also change with JVM versions (e.g. by using newer OS APIs). Virtual machines may also limit the CPU capabilities they pass through to guests, which may alter the choices in comparison to a bare metal setup.

On x86 the RDTSC instruction provides the lowest latency and best granularity of all clocks, but under some configurations it's not available or reliable enough as a time source.

On Linux you should check the kernel startup messages (dmesg), the tsc-related /proc/cpuinfo flags, and the selected /sys/devices/system/clocksource/*/current_clocksource. The kernel will try to use the TSC by default; if it doesn't, there may be a reason for that.
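As a quick check, the selected clocksource can be read straight from sysfs - a minimal sketch, assuming a Linux system where clocksource0 is the active device (the class and method names here are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ClockSourceCheck {
    // Returns the kernel's current clocksource (e.g. "tsc", "hpet", "kvm-clock"),
    // or null on systems without this sysfs file (non-Linux, restricted containers).
    static String currentClockSource() {
        Path p = Paths.get("/sys/devices/system/clocksource/clocksource0/current_clocksource");
        try {
            return new String(Files.readAllBytes(p)).trim();
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        String src = currentClockSource();
        System.out.println(src != null ? "clocksource: " + src
                                       : "clocksource: unavailable on this system");
    }
}
```

On a VMware guest you will often see something other than "tsc" here, which is one hint that System.nanoTime() is going through a slower, coarser clock.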

For some history you may want to read the following, but note that some of those articles may be a bit dated; TSC reliability has improved a lot over the years:

Upvotes: 3

apangin

Reputation: 98640

I can suggest at least two possible reasons for this behavior:

  1. Power saving. When executing a busy loop, the CPU runs at its maximum performance state. After Thread.sleep, however, it is likely to drop into one of the power-saving states, with frequency and voltage reduced. The CPU won't return to its maximum performance immediately afterwards; this may take anywhere from several nanoseconds to microseconds.
  2. Scheduling. After a thread is descheduled due to Thread.sleep, it will be scheduled for execution again after a timer event, which might be related to the timer used for System.nanoTime.

In both cases you can't directly work around this - Thread.sleep will affect timings in your real application as well. But if the amount of useful work measured is large enough, the inaccuracy will be negligible.
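One practical mitigation along those lines is to make each timed region large enough that the post-sleep ramp-up gets lost in the noise - a sketch, where work() is an arbitrary stand-in for the real search (not the poster's actual code):

```java
public class BatchedTiming {
    // Arbitrary deterministic stand-in for the real search; its result is
    // consumed below so the JIT can't discard the loop entirely.
    static long work() {
        long acc = 0;
        for (int i = 0; i < 100_000; i++) {
            acc += i * 31L;
        }
        return acc;
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsed = 0;
        long sink = 0;
        for (int i = 0; i < 100; i++) {
            long start = System.nanoTime();
            sink += work();                       // timed region: only the work
            elapsed += System.nanoTime() - start;
            Thread.sleep(1);                      // sleep stays outside the timing
        }
        // printing sink keeps the work from being optimized away
        System.out.println("work ns: " + elapsed + " (sink=" + sink + ")");
    }
}
```

The per-iteration wakeup penalty (scheduling latency plus frequency ramp-up) is roughly constant, so the bigger each timed chunk of work is, the smaller its relative contribution to the total.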

Upvotes: 2

CosmicGiant

Reputation: 6439

The inconsistencies probably arise not from Java, but from the different OSs' and VMs' "atomic" or system clocks themselves.

According to the official .nanoTime() documentation:

no guarantees are made except that the resolution is at least as good as that of currentTimeMillis()

source

...I can tell from personal knowledge that this is because some OSs and VMs don't themselves support the "atomic" clocks that are necessary for higher resolutions. (I will post the link sourcing this information as soon as I find it again... It's been a long time.)

Upvotes: 0
