Reputation: 15675
After classes like Hashtable and Vector were discouraged, and the Collections synchronized wrappers came up, I thought synchronization would be handled more efficiently. Now that I have looked into the code, I'm surprised that it really is just wrapping the collections with synchronized blocks.
Why are ReadWriteLocks not used in, for example, SynchronizedMap in Collections? Is there some efficiency consideration that makes it not worth it?
Upvotes: 18
Views: 5395
Reputation: 50726
To add on to @JohnVint's answer, consider the problem of iteration. The documentation explicitly requires client synchronization:
It is imperative that the user manually synchronize on the returned list when iterating over it:
List list = Collections.synchronizedList(new ArrayList());
...
synchronized (list) {
    Iterator i = list.iterator(); // Must be in synchronized block
    while (i.hasNext())
        foo(i.next());
}
Failure to follow this advice may result in non-deterministic behavior.
This only works because the client can share the intrinsic lock of the returned list. If the wrapper used a read/write lock internally, the returned interface would have to expose at least the read lock for safe iteration. That would complicate the API for a questionable benefit (as others have explained).
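For contrast, here is a minimal sketch of what such a read/write-lock-based wrapper would have to look like. The class and method names are hypothetical (nothing like this exists in java.util); the point is how the read lock leaks into the public API just to make iteration safe:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical wrapper -- NOT part of java.util.Collections.
class RwLockedList<E> {
    private final List<E> delegate = new ArrayList<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void add(E e) {
        lock.writeLock().lock();
        try {
            delegate.add(e);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // The API leak: clients need the read lock to iterate safely,
    // so the wrapper is forced to expose it.
    public Lock readLock() {
        return lock.readLock();
    }

    public Iterator<E> iterator() {
        return delegate.iterator();
    }
}
```

A client would then have to hold readLock() around the whole traversal, where the intrinsic-lock wrapper only asks for synchronized (list).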
Upvotes: 0
Reputation: 32014
It's not about efficiency. There is nothing wrong with using synchronized in Collections.synchronizedMap. If a ReadWriteLock were used to implement the Map, I'd rather call it Collections.LockedMap
;)
To be serious, Collections.synchronizedMap was written years before Lock appeared. It's an API which cannot be changed after release.
Upvotes: 0
Reputation: 40256
Most of the reasoning has been addressed, except for the following. SynchronizedMap/Set/List, as well as Hashtable and Vector, rely on the collection instance itself being synchronized on. As a result, many developers have used this synchronization to ensure atomicity. For instance:
List syncList = Collections.synchronizedList(new ArrayList());
//put if absent
synchronized (syncList) {
    if (!syncList.contains(someObject)) {
        syncList.add(someObject);
    }
}
This is thread safe and atomic for all operations, since the synchronizedList synchronizes on itself for every method (i.e. add, remove, get). This is the main reason why the Hashtable class was not retrofitted to support lock striping similar to ConcurrentHashMap.
So using a ReadWriteLock for these collections would lose the ability to make compound operations atomic, unless you were able to grab the lock instances yourself.
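A sketch of the same put-if-absent under a hypothetical ReadWriteLock-guarded list (the class and names are illustrative, nothing here is from java.util): the check-then-act can only stay atomic if the write lock itself is handed out to the client.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwPutIfAbsent {
    static final List<String> list = new ArrayList<>();
    static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    // contains() + add() must execute under ONE write lock; taking
    // only a read lock for the contains() check would let two threads
    // both pass the check and both add the element.
    static boolean addIfAbsent(String s) {
        rw.writeLock().lock();
        try {
            if (!list.contains(s)) {
                list.add(s);
                return true;
            }
            return false;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```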
Upvotes: 6
Reputation: 29513
Read-write locks are a performance optimization: they can allow greater concurrency in certain situations. The necessary condition is that they are applied to data structures which are read most of the time but rarely modified.
Under other conditions they perform slightly worse than exclusive locks, which comes naturally since they have greater complexity.
They are most efficient when the locks are typically held for moderately long periods and only a few operations modify the guarded resources.
Hence, whether read-write locks are better than exclusive locks depends on the use case. Ultimately you have to measure with profiling which locks perform better.
Taking this into account, it seems fitting to choose an exclusive lock for Collections.synchronizedMap, addressing the general use case instead of the special mostly-readers case.
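As a minimal illustration of the mostly-readers case described above (a sketch with made-up names, not taken from the linked paper), consider a cache where get() takes the shared read lock and put() the exclusive write lock:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadMostlyCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    String get(String key) {
        rw.readLock().lock();   // shared: many readers may hold it at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rw.writeLock().lock();  // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

With mostly get() calls, readers proceed in parallel; with frequent put() calls, the lock's extra bookkeeping tends to make it slower than a plain synchronized block.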
Further Links
Refactoring Java Programs for Flexible Locking
They wrote a tool which automatically converts locks in a Java application into ReentrantLocks and ReadWriteLocks where appropriate. For measuring purposes they have also provided some interesting benchmarking results:
[...] However, in a configuration where write operations were more prevalent, the version with synchronized blocks was 50% faster than one based on read-write locks with the Sun 1.6.0_07 JVM (14% faster with the Sun 1.5.0_15 JVM).
In a low-contention case with just 1 reader and 1 writer, the performance differences were less extreme, and each of the three types of locks yielded the fastest version on at least one machine/VM configuration (e.g., note that ReentrantLocks were fastest on the 2-core machine with the Sun 1.5.0_15 JVM).
Upvotes: 16
Reputation: 116908
I don't think that using a ReadWriteLock (if that is what you are talking about) is necessarily any faster than using the synchronized keyword. Both constructs impose locks, erect memory barriers, and establish "happens-before" ordering.
You may be talking about doing something smart in Collections.synchronizedMap(...) and friends, where read methods take a read lock and write methods take a write lock for performance. That might work fine with the java.util collection classes, but could cause synchronization problems with Maps implemented by users if, say, the get() methods counted accesses or something -- i.e. where a method that looks "read-only" actually makes updates to the collection. Yes, doing this would be a terrible idea.
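To make the hazard concrete, here is a hypothetical user-written Map whose get() keeps a hit counter (purely illustrative): a wrapper that took only a read lock around get() would let concurrent readers race on the counter, because the mutation is invisible from the method's name.

```java
import java.util.HashMap;

// Hypothetical user class: a "read" method that actually writes.
class CountingMap<K, V> extends HashMap<K, V> {
    private long hits = 0;  // mutated on every lookup

    @Override
    public V get(Object key) {
        hits++;             // a write hiding inside a read-looking method
        return super.get(key);
    }

    public long hits() {
        return hits;
    }
}
```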
ConcurrentHashMap was written to be high performance and uses volatile fields directly instead of synchronized blocks. This makes the code significantly more complicated compared to Collections.synchronizedMap(...), but also faster. That is the reason why it is recommended over Collections.synchronizedMap(new HashMap<...>()) in high-performance situations.
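A short usage contrast (a sketch; the internal details of both classes are simplified above): besides its internal concurrency control, ConcurrentHashMap provides atomic compound operations that the synchronized wrapper only gets through client-side locking.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapChoice {
    public static void main(String[] args) {
        // Coarse-grained: every call contends for one intrinsic lock,
        // and compound actions need an explicit synchronized block.
        Map<String, Integer> synced =
                Collections.synchronizedMap(new HashMap<>());
        synchronized (synced) {
            if (!synced.containsKey("a")) {
                synced.put("a", 1);
            }
        }

        // ConcurrentHashMap: finer-grained internal concurrency control
        // plus built-in atomic compound operations.
        ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<>();
        chm.putIfAbsent("a", 1);         // atomic check-then-act
        chm.merge("a", 1, Integer::sum); // atomic read-modify-write
        System.out.println(chm.get("a")); // 2
    }
}
```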
Upvotes: 7
Reputation: 200206
It is because those classes predate read/write locks, which came to Java rather late (Java 5). Read/write locks are rarely useful anyway, due to their hardcoded fine-grained locking.
Upvotes: 3