Reputation: 889
I have a provider class that associates objects with strings; in short, my provider wraps a map (let's say a `Map<String, Object>`) and has the following three operations:

- `Object get(String key)`: called massively by many threads
- `void add(String key, Object obj)`: called by a single thread
- `void remove(String key)`: called by the same single thread as `add`
While the `get()` operation needs to be fast and scalable, the `add()` and `remove()` operations do not have strong performance requirements.

I would like to avoid using a `ConcurrentHashMap`, as I believe it will lead to scalability issues.

So my idea is to wrap a single `HashMap` and do it like this:
- `get()` operation: just does a `HashMap.get()`
- `add()` operation:
  - copy the original `HashMap` into a new `HashMap`
  - put the new entry into the copy
  - `originalHashMap = copiedHashMap`
- `remove()` operation (almost the same):
  - copy the original `HashMap` into a new `HashMap`
  - remove the entry from the copy
  - `originalHashMap = copiedHashMap`
This approach seems perfectly scalable to me. What do you think of it?

I also think that I need to wrap my `Map` attribute in an `AtomicReference<>` so that the swap is safely published to the reader threads: what do you think of that?
Thank you for your help
Upvotes: 1
Views: 129
Reputation: 889
OK,
thank you for your contribution. I was sure that `ConcurrentHashMap` could not scale, which is why I imagined the design explained above... but I was wrong! What I actually had in mind was the behavior of a synchronized map (`Collections.synchronizedMap`).
FYI, I found this interesting page: https://www.javamex.com/tutorials/concurrenthashmap_scalability.shtml
Once again, thank you
Upvotes: -1
Reputation: 82589
I'm seriously confused here. You say you want to avoid scalability issues, and your solution is to copy your entire map on every write command?
You may or may not be aware of this, but that's essentially the problem ConcurrentHashMap solves under the hood. Only it synchronizes at a much finer granularity: an update touches just the small part of the map being written to, and reads don't block at all. And it was written by some of the smartest minds in the industry, and has been extensively tested both in the lab and in the wild.
So instead of writing your own which is almost certainly going to be slower and have orders of magnitude more bugs, why not just go with ConcurrentHashMap?
Seriously, this is a solved problem, friend.
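To illustrate, your three operations map directly onto `ConcurrentHashMap` with no extra synchronization on the caller's side (the class name here is just for the demo):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProviderDemo {
    // The same three operations, backed directly by ConcurrentHashMap:
    // get() never blocks readers; put()/remove() synchronize only on the
    // small portion of the table they touch.
    static final Map<String, Object> provider = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        provider.put("key", "value");  // your single writer thread
        Object v = provider.get("key"); // your many reader threads
        provider.remove("key");
        System.out.println(v);
    }
}
```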
Upvotes: 2