RustamIS

Reputation: 697

Why is volatile working faster than non-volatile?

After reading the question Why is processing a sorted array faster than an unsorted array?, I tried making the variables volatile. I expected that using volatile would make the code slower, but it actually runs faster. Here is my code without volatile (it runs in about 11 seconds):

import java.util.Arrays;
import java.util.Random;

public class GGGG {

    public static void main(String[] args) {
        int arraySize = 32768;
        int[] data = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c) {
            data[c] = rnd.nextInt() % 256;
        }

        Arrays.sort(data);

        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 200000; ++i) {
            for (int c = 0; c < arraySize; ++c) {
                if (data[c] >= 128) {
                    sum += data[c];
                }
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);

        System.out.println("=========================");
    }
}

The output is:

10.876173341
sum = 310368400000
=========================

And this is the version with arraySize and data declared volatile; it runs in about 7 seconds:

import java.util.Arrays;
import java.util.Random;

public class GGGG {

    static volatile int arraySize = 32768;
    static volatile int[] data;

    public static void main(String[] args) {
        data = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c) {
            data[c] = rnd.nextInt() % 256;
        }

        Arrays.sort(data);

        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 200000; ++i) {
            for (int c = 0; c < arraySize; ++c) {
                if (data[c] >= 128) {
                    sum += data[c];
                }
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);

        System.out.println("=========================");
    }
}

The output with volatile is:

6.776267265
sum = 310368400000
=========================

I expected volatile to slow the program down, but it runs faster. What happened?

Upvotes: 1

Views: 512

Answers (1)

Marko Topolnik

Reputation: 200206

I'll name just two main issues with your code:

  1. there's no warmup;
  2. everything happens in the main method, so JIT-compiled code can only be entered via On-Stack Replacement (OSR); a sketch that fixes both issues follows.
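
For illustration (this sketch is mine, not from the original answer), here is one way the questioner's program could be restructured to address both issues: the hot loop is moved into its own method so the JIT can compile it as a whole instead of through OSR, and that method is invoked many times as a warmup before the timed run. The class and method names are hypothetical.

import java.util.Arrays;
import java.util.Random;

public class WarmupDemo {

    static final int ARRAY_SIZE = 32768;

    // Hot loop in its own method: the JIT can compile the whole method
    // after enough invocations, instead of relying on OSR.
    static long sumLargeElements(int[] data) {
        long sum = 0;
        for (int c = 0; c < ARRAY_SIZE; ++c) {
            if (data[c] >= 128) {
                sum += data[c];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new int[ARRAY_SIZE];
        Random rnd = new Random(0);
        for (int c = 0; c < ARRAY_SIZE; ++c) {
            data[c] = rnd.nextInt() % 256;
        }
        Arrays.sort(data);

        // Warmup: let the JIT observe and compile sumLargeElements.
        for (int i = 0; i < 20000; ++i) {
            sumLargeElements(data);
        }

        // Timed measurement.
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 200000; ++i) {
            sum += sumLargeElements(data);
        }
        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

Even so, a hand-rolled harness like this stays fragile (dead-code elimination, loop optimizations), which is why a dedicated tool is preferable.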

Redoing your case with the JMH tool, I get exactly the timings one would expect.

import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@OutputTimeUnit(TimeUnit.MICROSECONDS)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 3, time = 2)
@Measurement(iterations = 5, time = 3)
@State(Scope.Thread)
@Threads(1)
@Fork(2)
public class Writing {
  static final int ARRAY_SIZE = 32768;

  int[] data = new int[ARRAY_SIZE];
  volatile int[] volatileData = new int[ARRAY_SIZE];

  @Setup public void setup() {
    Random rnd = new Random(0);
    for (int c = 0; c < ARRAY_SIZE; ++c) {
      data[c] = rnd.nextInt() % 256;
    }
    Arrays.sort(data);
    // Copy the sorted contents so both benchmarks see identical data.
    System.arraycopy(data, 0, volatileData, 0, ARRAY_SIZE);
  }

  @GenerateMicroBenchmark
  public long sum() {
    long sum = 0;
    for (int c = 0; c < ARRAY_SIZE; ++c) if (data[c] >= 128) sum += data[c];
    return sum;
  }

  @GenerateMicroBenchmark
  public long volatileSum() {
    long sum = 0;
    for (int c = 0; c < ARRAY_SIZE; ++c) if (volatileData[c] >= 128) sum += volatileData[c];
    return sum;
  }
}

These are the results:

Benchmark       Mode   Samples         Mean   Mean error    Units
sum             avgt        10       21.956        0.221    us/op
volatileSum     avgt        10       40.561        0.264    us/op
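
The slowdown is expected: volatileData is a volatile reference, so the JIT must perform a volatile read of the field on each access and cannot hoist it into a register across the loop. As an illustration of the usual mitigation (this method is my addition, not part of the original benchmark), copying the volatile reference into a local once before the loop pays the volatile-read cost only once:

  @GenerateMicroBenchmark
  public long volatileSumCachedRef() {
    // One volatile read up front; the loop then works on a plain local.
    int[] local = volatileData;
    long sum = 0;
    for (int c = 0; c < ARRAY_SIZE; ++c) if (local[c] >= 128) sum += local[c];
    return sum;
  }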

Upvotes: 8
