CuriousMind

Reputation: 8953

How do atomic operations work, and why can't a thread be preempted mid-operation? Is it an OS guarantee or a JVM guarantee?

I am trying to understand how atomic operations work, particularly in Java.

Take AtomicInteger. The documentation says it is: "An int value that may be updated atomically." For example, one of this class's atomic operations is:

/**
 * Atomically sets to the given value and returns the old value.
 *
 * @param newValue the new value
 * @return the previous value
 */
public final int getAndSet(int newValue) {
    return unsafe.getAndSetInt(this, valueOffset, newValue);
}

As per the documentation, this operation is guaranteed to be atomic. However, the actual method unsafe.getAndSetInt() must take at least a few instructions to execute. How, then, is atomicity guaranteed?
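Looking at the OpenJDK 8 source (if I am reading it correctly), sun.misc.Unsafe.getAndSetInt is itself a retry loop around a compare-and-swap; the comments here are mine:

public final int getAndSetInt(Object o, long offset, int newValue) {
    int v;
    do {
        v = getIntVolatile(o, offset);    // read the current value
    } while (!compareAndSwapInt(o, offset, v, newValue)); // retry if it changed meanwhile
    return v;
}

So the method body is clearly more than one instruction, which is exactly what prompts my question.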

For example, if Thread-A is currently executing this code, why can't it be preempted? As I understand it, it is the OS scheduler that divides the timeslice among threads, so how is it decided that once a thread starts executing some atomic method, all of its instructions must run to completion?

Is this arrangement made at the OS level? Is there a contract between the JVM, the API call, and the OS that if a thread is executing some method someFoo() (assuming it is atomic), then the method must be completed by that thread without being preempted?

Upvotes: 3

Views: 591

Answers (1)

supercat

Reputation: 81347

On some architectures, there are instructions that are guaranteed to atomically read an old value and write a new one. Others use a pair of operations called load-linked/store-conditional (LL/SC). A load-linked operation loads a value and configures the hardware to watch for anything that might disturb it. The store-conditional writes the value only if nothing has happened since the last load-linked that might have disturbed the watched location, and it will indicate (e.g. by setting a register to 0 or 1) whether the store succeeded. If, for example, conditionalStore returns 1 on success, an exchange operation could be implemented as:

int exchange(int *ptr, int newValue)
{
  int oldValue;
  do
  {
    oldValue = loadLinked(ptr);                /* load value and start watching *ptr */
  } while (!conditionalStore(ptr, newValue));  /* store succeeds only if undisturbed */
  return oldValue;
}

If anything disturbs the indicated address between the load-linked and the store-conditional, the store-conditional will have no side effect other than returning zero. If that occurs, the loop will repeat the load-linked and store-conditional until the two operations manage to occur consecutively.
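Since the question is about Java: the same retry shape is visible in the JVM's own API. Here is a minimal sketch using AtomicInteger.weakCompareAndSet, which, like a store-conditional, is allowed to fail spuriously and therefore must be called in a loop (the class and method names here are mine, for illustration):

import java.util.concurrent.atomic.AtomicInteger;

class LlScStyleExchange {
    // Same shape as the C loop above: read, then attempt a store that may
    // fail (spuriously or because the value changed), retrying until the
    // read/store pair effectively occurs without interference.
    static int exchange(AtomicInteger cell, int newValue) {
        int oldValue;
        do {
            oldValue = cell.get();                              // the "load-linked" step
        } while (!cell.weakCompareAndSet(oldValue, newValue));  // the "store-conditional" step
        return oldValue;
    }
}

On an LL/SC architecture, a JIT can in principle map such a loop more or less directly onto those instructions.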

Note that many architectures guarantee that a store-conditional will always report failure if the location was disturbed, but they generally don't promise to always report success when it wasn't. Depending upon the architecture, spurious failures may be common or rare, and the efficiency of an operation like the exchange is affected by how many there are. For example, an implementation might cause a store-conditional to fail any time another thread writes anything into the same cache line as the object being watched, even if that other thread writes a different object within that line.

This could degrade performance, but programs could still collectively make progress if the only thing that can cause a store-conditional to fail is a successful store on another thread. If the effect of two threads each doing a load-linked and then each doing a store-conditional could be that both stores fail, threads could live-lock. That could be mitigated by adding random delays, but in most usage scenarios that wouldn't be necessary.
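A minimal sketch of that random-delay mitigation, expressed with Java's AtomicInteger for concreteness (the class, the backoff constants, and the choice of parkNanos are mine and purely illustrative; compareAndSet on a mainstream JVM is a strong CAS, so live-lock is not actually a concern there):

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

class BackoffExchange {
    // Retry loop with a randomized, growing backoff after each failure, so
    // two threads that keep failing each other's stores fall out of lockstep
    // instead of live-locking.
    static int exchange(AtomicInteger cell, int newValue) {
        int attempts = 0;
        while (true) {
            int oldValue = cell.get();
            if (cell.compareAndSet(oldValue, newValue)) {
                return oldValue;
            }
            attempts++;
            long maxNanos = Math.min(1_000L * attempts, 100_000L); // cap the delay
            LockSupport.parkNanos(ThreadLocalRandom.current().nextLong(1, maxNanos + 1));
        }
    }
}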

Upvotes: 2
