Reputation: 1324
We have a Spring Boot service that simply serves data from a map. The map is updated regularly, triggered by a scheduler: we build a new intermediate map, load all the data we need into it, and as soon as that is finished we assign it. To avoid concurrency issues we introduced a ReentrantReadWriteLock: the write lock is held only for the moment the intermediate map is assigned, and read locks are of course taken while accessing the map. Please see the simplified code below.
@Service
public class MyService {

    private final Lock readLock;
    private final Lock writeLock;

    private Map<String, SomeObject> myMap = new HashMap<>();

    public MyService() {
        final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
        readLock = rwLock.readLock();
        writeLock = rwLock.writeLock();
    }
    protected SomeObject getSomeObject(String key) {
        readLock.lock();
        try {
            return this.myMap.get(key);
        } finally {
            readLock.unlock();
        }
    }
    private void loadData() {
        Map<String, SomeObject> intermediateMyMap = new HashMap<>();

        // Now do some heavy processing till the new data is loaded to intermediateMyMap

        // swap in the newly built map
        writeLock.lock();
        try {
            myMap = intermediateMyMap;
        } finally {
            writeLock.unlock();
        }
    }
}
If we put the service under load, accessing the map a lot, we still saw java.util.ConcurrentModificationException in the logs, and I have no clue why.
BTW: In the meantime I also saw this question, which also seems to be a solution. Nevertheless, I would like to know what I did wrong, or whether I misunderstood the concept of ReentrantReadWriteLock.
EDIT: Today I was provided with the full stack trace. As some of you argued, the issue is really not related to this piece of code; it just happened to occur at the same time as the reload. The problem was actually in the access to getSomeObject(). In the real code SomeObject is again a Map, and an inner List gets sorted every time it is accessed (which is bad anyway, but that is another issue). So basically we ran into this issue.
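To make that concrete, here is a stripped-down sketch (not our real code; the list contents and sizes are made up): with the stock JDK ArrayList, sort() updates the list's modCount, so two threads sorting the same list at the same time can fail with a ConcurrentModificationException even though no elements are added or removed.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.ConcurrentModificationException;
import java.util.List;

public class ConcurrentSortDemo {

    public static void main(String[] args) throws InterruptedException {
        // Shared list, comparable to the inner List inside SomeObject
        List<Integer> shared = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            shared.add((int) (Math.random() * 10_000));
        }

        // Simulates "sort on every access" in getSomeObject()
        Runnable sortOnRead = () -> {
            try {
                for (int i = 0; i < 1_000; i++) {
                    shared.sort(Comparator.naturalOrder());
                }
            } catch (ConcurrentModificationException e) {
                System.out.println("CME from concurrent sort: " + e);
            }
        };

        Thread t1 = new Thread(sortOnRead);
        Thread t2 = new Thread(sortOnRead);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}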
Upvotes: 0
Views: 670
Reputation: 324
ReentrantReadWriteLock only guarantees that the thread holding the lock on the map can keep holding it for as long as it needs to. It does not guarantee that myMap has not been cached behind the scenes. A cached value could result in a stale read, and a stale read will give you the java.util.ConcurrentModificationException. myMap needs to be declared volatile to make the update visible to other threads.
From Java Concurrency in Practice:
volatile variables, to ensure that updates to a variable are propagated predictably to other threads. When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations. Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.
Peierls, Tim. Java Concurrency in Practice
An alternative would be to make getSomeObject synchronized and to put a synchronized block on this around myMap = intermediateMyMap.
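Applied to the simplified service from the question, the volatile suggestion would look roughly like this (just a sketch, not a drop-in replacement; SomeObject is the value type from the question):
import java.util.HashMap;
import java.util.Map;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    // volatile: the assignment in loadData() becomes visible to all reader threads
    private volatile Map<String, SomeObject> myMap = new HashMap<>();

    protected SomeObject getSomeObject(String key) {
        return myMap.get(key);
    }

    private void loadData() {
        Map<String, SomeObject> intermediateMyMap = new HashMap<>();
        // ... heavy processing fills intermediateMyMap ...
        myMap = intermediateMyMap; // a single write publishes the new map
    }
}
The synchronized alternative would drop volatile, declare getSomeObject as synchronized, and wrap the assignment in a synchronized (this) block instead.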
Upvotes: 1
Reputation: 34014
I see nothing obviously wrong with the code. ReadWriteLock should provide the necessary memory ordering guarantees (see the Memory Synchronization section at https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/Lock.html).
The problem might well be in the "heavy processing" part. A ConcurrentModificationException could also be caused by modifying the map while iterating over it from a single thread, but then you would see the same problem regardless of the load on the system.
As you already mentioned, for this pattern of replacing the whole map I think a volatile field or an AtomicReference would be the better and much simpler solution.
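For example, a minimal sketch of the AtomicReference variant (again assuming SomeObject from the question): no locking is needed, and readers simply get whichever fully built map was published last.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class MyService {

    private final AtomicReference<Map<String, SomeObject>> mapRef =
            new AtomicReference<>(new HashMap<>());

    protected SomeObject getSomeObject(String key) {
        return mapRef.get().get(key);
    }

    private void loadData() {
        Map<String, SomeObject> intermediateMyMap = new HashMap<>();
        // ... heavy processing fills intermediateMyMap ...
        mapRef.set(intermediateMyMap); // atomically publish the new snapshot
    }
}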
Upvotes: 2