Synchronization in multiprocessor systems

I understand that synchronization can easily be done using a semaphore on a single-core processor. But on a multi-core machine, if multiple processes try to enter the critical section at the same instant, do all of them enter the critical section, or does only one win? And by what criteria does the winner win?

Upvotes: 3

Views: 1326

Answers (3)

YSK

Reputation: 1614

Even if multiple cores are present, a semaphore (or a mutex, or most other synchronization primitives) works just the same - only the specified number of threads can enter the semaphore. It would be a poor semaphore indeed if it only worked on single-processor machines!

There are multiple mechanisms required to make this work, and I'll try to give a high-level view.

Note that the memory is still shared between the different cores. A simplified but IMO useful way of understanding how to synchronize cores using shared memory is the CMPXCHG instruction. This instruction can atomically (see more details below) compare and set a memory address. It also sets the zero flag to 1 if the memory address had the value you were comparing to.

Consider the following code:

wait:
    mov eax, 0                        ; expected value: 0 means the lock is free
    mov ecx, 1                        ; new value: 1 means the lock is taken
    lock cmpxchg [address of lock], ecx
    jne wait                          ; ZF clear: lock was already taken, retry
; We now own the lock

The code logically loops doing the following: set the value of lock to 1 only if lock is 0.

This code can be run by multiple cores, and the atomicity of cmpxchg guarantees that only one core will win.

The situation gets more complicated if each core has its own cache (as is usually the case nowadays). With individual caches, each core has its own view of the memory so care must be taken to ensure that those memory views are consistent. The short answer is that this can be done by having the caches notify each other when data is changed, so that other caches can invalidate or update their copy if needed. Look up snooping and the MESI protocol for more details on this.

Note that if the cores are on the same physical chip then they're all competing for the memory bus, and there are mechanisms for sharing it between the cores (e.g. arbitration mechanisms; also look up the LOCK prefix).

Upvotes: 1

prl

Reputation: 12435

When two cores try to enter the critical section at the same time, they both try to write to the semaphore in memory at the same time, using a locked read-modify-write operation. In order for a core to complete the write, the cache has to gain Exclusive access to the cache line containing the semaphore. This forces the other core to mark the line as Invalid. The caching protocol ensures that only one core can gain Exclusive access, and that core enters the critical section.

Meanwhile, the other core, which is also trying to write to the semaphore, has to wait, because it still needs exclusive access to the cache line. As soon as the first core finishes its write operation, the other core gets Exclusive access and can complete its read-modify-write. But the result of the read-modify-write tells it that the semaphore is busy, so it cannot enter the critical section until it detects that the semaphore has been released.

Upvotes: 2

NSKBpro

Reputation: 383

A semaphore is just one way to signal between threads in a system. You can use a semaphore on a single-core or multi-core CPU; that does not affect its usage.

Now let's get back to your question. If you have a critical section and multiple threads want to enter it, then without synchronization they will all enter it. Keep in mind that the main thread (for example), or whichever thread starts them, introduces a very small time spacing between them (on the order of a few nanoseconds). That is why we use signaling: we don't want a "winner", and in almost every case unsynchronized threads can make unwanted changes in that critical section.

In a single-core system you can only achieve concurrent process scheduling (a fake parallelism, via a TaskScheduler), since different threads must share the core through allocated time slots.

Upvotes: 0
