Reputation: 42464
If I am using ReaderWriterLockSlim to acquire read/write locks, do I need to make my variables volatile or use Interlocked.Increment?
For example, would the code in the Add method below work fine, or does it need enhancement?
public class AppendableList<T> { // semi-immutable; supports appending only
    private T[] data = new T[16];
    private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    public int Count { get; private set; }
    public T this[int index] {
        get {
            rwLock.EnterReadLock();
            try { return data[index]; } finally { rwLock.ExitReadLock(); }
        }
    }
    public void Add(T item) {
        rwLock.EnterUpgradeableReadLock();
        try {
            if (Count == data.Length)
                reAllocateArray(); // upgrades to write lock
            data[Count++] = item; // do I need to use Interlocked here?
        } finally { rwLock.ExitUpgradeableReadLock(); }
    }
}
EDIT: I'm trying to write a lightweight, fast and simple list that allows multiple threads to access its data concurrently (a sort of producer-consumer buffer). I have edited the code above, removing the simplifications I used before, so the issue should be clearer now. It seems to me that this code is thread-safe, but I am not sure whether all threads will see the updated value of Count right after exiting the upgradeable lock.
EDIT 2: The "write" lock here is used to protect writes to the array reference, not the array elements. I'm assuming this is sufficient (since the data itself is immutable). I guess that I need to use Interlocked when incrementing Count. Is that true?
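For illustration, here is a rough sketch of what that would look like (Count would have to be changed to a plain backing field, since Interlocked.Increment needs a ref to a field; the field name here is mine):
// Hypothetical sketch (names mine): Count backed by a plain field so Interlocked can take a ref to it.
private int count;
public int Count { get { return count; } }

public void Add(T item) {
    rwLock.EnterUpgradeableReadLock();
    try {
        if (count == data.Length)
            reAllocateArray(); // upgrades to write lock
        data[count] = item;                // store the element first...
        Interlocked.Increment(ref count);  // ...then publish the new count (full-fence semantics)
    } finally { rwLock.ExitUpgradeableReadLock(); }
}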
Upvotes: 2
Views: 1957
Reputation: 5965
Yes, it does. For a very in-depth overview of the various memory-barrier cases, check that document out; you can also find the non-fenced locks there if you want.
And please, DO NOT USE VOLATILE; IT IS LESS AND LESS EFFECTIVE THESE DAYS!
All of the standard Windows locking mechanisms (spin locks, mutexes, kernel events, and resources managed by the executive resource package) protect against processor reordering by inserting memory barriers where required in executable code.
A memory barrier is a processor instruction that preserves the ordering of read and/or write operations from the perspective of any other processor. Memory barriers include processor instructions with acquire, release, and fence semantics. These semantics describe the order in which results of an operation become visible.
- Acquire semantics mean that the results of the operation are visible before the results of any operation that appears after it in code.
- Release semantics mean that the results of the operation are visible after the results of any operation that appears before it in code.
- Fence semantics combine acquire and release semantics. The results of an operation with fence semantics are visible before those of any operation that appears after it in code and after those of any operation that appears before it.
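To make the acquire/release distinction concrete, here is a minimal C# sketch (my own illustration, not from the quoted documentation); Volatile.Write has release semantics and Volatile.Read has acquire semantics:
using System.Threading;

class Handoff {
    private int value;
    private bool ready;

    public void Produce(int v) {
        value = v;                        // ordinary write
        Volatile.Write(ref ready, true);  // release: 'value' becomes visible before 'ready'
    }

    public bool TryConsume(out int v) {
        if (Volatile.Read(ref ready)) {   // acquire: reads after this see everything written before the release
            v = value;
            return true;
        }
        v = 0;
        return false;
    }
}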
Upvotes: 1
Reputation: 1064184
I would fully expect the write-lock to function as a memory barrier (within the write-lock, in particular), but I can't prove it off-hand.
Whether you need the complexity of ReaderWriterLockSlim depends on the context; Interlocked, volatile, lock or [MethodImpl] might each do the job far more simply. You mainly need ReaderWriterLock[Slim] if you have lots of readers and few writers.
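For example, here is a rough sketch of those simpler alternatives (illustrative names only, not code from the question):
using System.Runtime.CompilerServices;
using System.Threading;

class SimpleCounter {
    private readonly object syncRoot = new object();
    private int count;

    // plain lock: simplest to reason about
    public void AddWithLock() {
        lock (syncRoot) { count++; }
    }

    // Interlocked: enough when the increment is the only shared-state change
    public void AddWithInterlocked() {
        Interlocked.Increment(ref count);
    }

    // [MethodImpl]: locks on 'this' for the whole method body
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void AddSynchronized() {
        count++;
    }
}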
However, the get is not currently protected by the lock; you'll need to use an explicit property implementation and take out a read lock yourself if you ever need multiple operations to be spanned by the write-lock (without readers seeing intermediate values).
As an aside, the Count++ usage would probably be more familiar to people.
You should also use try/finally to ensure you release the lock on exception.
To avoid the issue of taking write then read locks, perhaps:
private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
private int count;
public int Count {
    get {
        rwLock.EnterReadLock();
        int tmp = count;
        rwLock.ExitReadLock();
        return tmp;
    }
}
public void Add(object x) {
    rwLock.EnterWriteLock();
    try {
        // do some processing
        count++;
    } finally {
        rwLock.ExitWriteLock();
    }
}
Updated re your edit:
That looks pretty solid. A List<T> would be my recommendation (over a T[] array), since it will do all the doubling etc internally, saving you lots of code. Since only one thread can update Count at a time, there is no need for Interlocked, and this property saves the need for a lock when reading Count, as long as you're fine with callers getting the old Count while another thread is adding rows (rather than being blocked).
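For instance, a List<T>-backed version might look something like this (a sketch of the idea only; it takes a full write lock in Add rather than the upgradeable lock from the question):
using System.Collections.Generic;
using System.Threading;

public class AppendableList<T> {
    private readonly List<T> items = new List<T>();
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    public int Count { get; private set; }

    public T this[int index] {
        get {
            rwLock.EnterReadLock();
            try { return items[index]; } finally { rwLock.ExitReadLock(); }
        }
    }

    public void Add(T item) {
        rwLock.EnterWriteLock();
        try {
            items.Add(item); // List<T> handles the doubling internally
            Count++;
        } finally { rwLock.ExitWriteLock(); }
    }
}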
Upvotes: 3