Reputation: 1894
I am reading Programming with POSIX Threads (by David Butenhof), and he mentions the following about the pthread library:
Whatever memory values a thread can see when it unlocks a mutex, either directly or by waiting on a condition variable, can also be seen by any thread that later locks the same mutex. Again, data written after the mutex is unlocked may not necessarily be seen by the thread that locks the mutex, even if the write occurs before the lock.
All of a sudden, I wonder whether the following code is valid:
Thread A:
strcpy(buffer, "hello world");
pthread_spin_lock(&lock); // assuming the mutex in the statement above can be interchanged with a spinlock; I don't see why it can't
pthread_spin_unlock(&lock);
Thread B:
pthread_spin_lock(&lock);
pthread_spin_unlock(&lock);
// read buffer; assuming thread B has a copy of the pointer somehow
My question is: can thread B see "hello world" in buffer? Based on his statement, it should. I understand the "usual" way is to protect the shared "resource" with a lock. But let's assume strcpy() happens at a random time and can only happen once in the lifetime of the program, and let's assume thread B somehow calls pthread_spin_lock() after thread A calls pthread_spin_unlock() :)
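For reference, here is a minimal compilable sketch of the scenario I have in mind (the names are mine, and the usleep() is only a crude stand-in for the assumption that thread B locks after thread A has unlocked):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char buffer[64];
static pthread_spinlock_t lock;

static void *thread_a(void *arg)
{
    (void)arg;
    strcpy(buffer, "hello world");   /* the write happens before taking the lock */
    pthread_spin_lock(&lock);
    pthread_spin_unlock(&lock);
    return NULL;
}

static void *thread_b(void *arg)
{
    (void)arg;
    usleep(100000);                  /* "somehow" lock after thread A has unlocked */
    pthread_spin_lock(&lock);
    pthread_spin_unlock(&lock);
    printf("thread B sees: \"%s\"\n", buffer);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    pthread_spin_destroy(&lock);
    return 0;
}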
Side question: is there a faster way of making the change to buffer visible to other threads? Let's say portability isn't an issue and I am on CentOS. One alternative I can think of is using mmap(), but I'm not sure whether any change is globally visible without using the pthread library.
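For instance, would something along the lines of C11 atomics work as a lighter-weight way to publish the one-time write? A sketch of what I mean (the function names are mine, and I'm assuming GCC with -std=c11 on CentOS):

#include <stdatomic.h>
#include <string.h>

static char buffer[64];
static atomic_bool ready = 0;

/* writer (thread A) */
void publish(void)
{
    strcpy(buffer, "hello world");
    atomic_store_explicit(&ready, 1, memory_order_release);
}

/* reader (thread B): only touch buffer after the acquire load sees the flag */
int try_read(char *out, size_t n)
{
    if (!atomic_load_explicit(&ready, memory_order_acquire))
        return 0;                /* not published yet */
    strncpy(out, buffer, n);
    out[n - 1] = '\0';
    return 1;
}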
Upvotes: 5
Views: 884
Reputation: 25599
I don't think you've understood this correctly: the locks do not magically transmit the data; they are a form of communication between threads that allows you to move data safely.
In your question, you make a lot of assumptions. In fact, everything that you write would be equally true if the locks didn't exist, as long as your assumptions all hold. The problem with concurrent programming is that, in general, you cannot make assumptions like that.
If thread A makes a change to memory then it becomes visible to thread B instantly (allowing for the vagaries of caching and compiler optimizations). This is always true, with or without locks. But, without the locks, there are no guarantees that the write is complete yet, or has even begun.
The quote you have first assumes (requires) that you only write to shared data with a lock set. The last part tells you what bad things can happen if you try to write without locking first.
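In other words, the pattern the quote assumes looks roughly like this (a sketch with my own names; it reuses the buffer and spinlock from your snippet and assumes pthread_spin_init() has been called):

#include <pthread.h>
#include <string.h>

static char buffer[64];
static pthread_spinlock_t lock;      /* assume pthread_spin_init() was called */

void writer(void)
{
    pthread_spin_lock(&lock);
    strcpy(buffer, "hello world");   /* write with the lock set */
    pthread_spin_unlock(&lock);
}

void reader(char *out, size_t n)
{
    pthread_spin_lock(&lock);
    strncpy(out, buffer, n);         /* read with the lock set */
    out[n - 1] = '\0';
    pthread_spin_unlock(&lock);
}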
I'm not sure what "even if the write occurs before the lock" means, precisely, but it's probably referring to the various race conditions and memory caching effects that plague concurrent programs. The locks may not actually transmit the data, but they will be coded such that they force the compiler to synchronise memory across the call (a "memory barrier").
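To illustrate the kind of barrier I mean: with GCC you can issue one explicitly. This is only an illustration of the concept, not what the pthread functions literally expand to:

#include <string.h>

static char buffer[64];

void publish_with_barrier(void)
{
    strcpy(buffer, "hello world");
    __sync_synchronize();   /* full compiler + hardware memory barrier (GCC builtin) */
}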
Upvotes: 1