Reputation: 160
Here is the code

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct val {
    int x;
    int y;
};

struct memory {
    struct val *sharedmem;
};

struct val va;
struct memory mem;

void *myThreadFun(void *vargp)
{
    va.x = 1;
    va.y = 2;
    mem.sharedmem = &va;
    sleep(1);
    printf("Printing GeeksQuiz from Thread0 x = %d y = %d sharedmem = %d\n", va.x, va.y, mem.sharedmem->x);
    return NULL;
}

int main()
{
    pthread_t tid;
    printf("Before Thread \n");
    asm volatile ("" ::: "eax", "esi");
    pthread_create(&tid, NULL, myThreadFun, NULL);
    asm volatile ("mfence" ::: "memory");
    printf("sharedmem = %d\n", mem.sharedmem->x); <----- **Here is the fault**
    pthread_join(tid, NULL);
    printf("After Thread \n");
    exit(0);
}
Looking at dmesg, it shows 40073f as the fault address, which corresponds to
mov eax,DWORD PTR [rax]
I tried adding a few general-purpose registers to the clobber list, but still no success. The code works perfectly fine when run under gdb.
Upvotes: 0
Views: 108
Reputation: 7483
To expand a bit on my comment:
When writing a single-threaded application, you can be certain that the earlier parts of your code will (effectively) always be executed before the later parts of your code.
But things get tricky when you have more than one thread. How do you know which order things will happen in then? Will the thread start quickly? Or slowly? Will the main thread keep processing? Or will the OS schedule some other thread to run next allowing the 'new' thread to race ahead?
This type of 'race' between 2 (or more) threads to see who gets to a certain point in the code first is a common problem when writing multi-threaded applications, and is called a 'race condition.' These types of bugs are notoriously difficult to track down.
If you don't take any steps to ensure the ordering of events, you are just 'hoping' they happen in the right order. And given the natural variation in how threads get scheduled, there is going to be some wiggle room, unless you use some of the OS's synchronization functions to coordinate between the threads.
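To make that concrete, here is a minimal, hypothetical example (not from your code; the worker name and the messages are made up) where two threads race to print, and the order of the output depends entirely on how the OS schedules them:

#include <pthread.h>
#include <stdio.h>

/* Demo only: two threads race to print. Run it a few times and the
   order of the two lines can change, because nothing enforces it. */
static void *worker(void *arg)
{
    printf("worker thread runs\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    printf("main thread runs\n");   /* may print before or after the worker */
    pthread_join(tid, NULL);
    return 0;
}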
In this case, the event we are looking at is setting the mem.sharedmem variable. This is being done in the myThreadFun thread. Now, will the myThreadFun thread be able to set the variable before the main thread tries to access it? Well, it's hard to say for sure. Might be. Might not.
Will running this under the debugger change how fast the two threads run? You bet it does. How? Hard to predict.
However, if you change your code to join the thread before you try to access the variable, you know for sure that the thread has completed its work and the variable is ready to be used. join is one of those synchronization methods I was referring to.
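As a minimal sketch of that change (reusing the names from your code, otherwise untested), moving the pthread_join above the access removes the race on mem.sharedmem:

int main()
{
    pthread_t tid;
    printf("Before Thread \n");
    pthread_create(&tid, NULL, myThreadFun, NULL);
    /* Wait here: once pthread_join returns, myThreadFun has finished,
       so mem.sharedmem is guaranteed to point at va. */
    pthread_join(tid, NULL);
    printf("sharedmem = %d\n", mem.sharedmem->x);
    printf("After Thread \n");
    exit(0);
}

Of course, joining that early gives up the concurrency; the point is just to show that it is the ordering, not the fences, that was missing.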
Looking at your code, perhaps you assume the mfence or memory clobbers will perform this type of synchronization for you? Sorry, but no, it's not that easy.
Learning how to safely synchronize work between multiple threads is a finicky and challenging area of programming. Important, useful, but challenging.
edit1:
One more point I should make is that lfence, sfence and mfence are pretty low-level instructions. People (especially multi-threading newbies) will find their lives much easier using the atomic functions (which automatically use the appropriate assembler instructions as needed) along with the operating system's synchronization objects. There are lots more tutorials and discussions about atomics than about mfence, and atomics also tend to be more portable.
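As a rough illustration of that (a sketch using C11 <stdatomic.h>; the producer name and the busy-wait are my own choices, not something from your code), the shared pointer can be published with a release store and read with an acquire load, and the compiler picks the right instructions and ordering for you:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct val { int x; int y; };

static struct val va;
/* The pointer itself is atomic; a release store publishes the writes
   to va that happened before it. */
static _Atomic(struct val *) sharedmem;

static void *producer(void *arg)
{
    va.x = 1;
    va.y = 2;
    atomic_store_explicit(&sharedmem, &va, memory_order_release);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, producer, NULL);

    /* Acquire load: once we see a non-NULL pointer, we also see x and y. */
    struct val *p;
    while ((p = atomic_load_explicit(&sharedmem, memory_order_acquire)) == NULL)
        ;   /* busy-waiting is fine for a demo, not for real code */

    printf("x = %d y = %d\n", p->x, p->y);
    pthread_join(tid, NULL);
    return 0;
}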
Upvotes: 1