Reputation: 14751
I would like to collect some metrics from various places in a web app. To keep it simple, all these will be counters and therefore the only modifier operation is to increment them by 1.
The increments will be concurrent and frequent. Reads (dumping the stats) are a rare operation.
I was thinking of using a ConcurrentHashMap. The issue is how to increment the counters correctly. Since the map doesn't have an "increment" operation, I need to read the current value first, increment it, then put the new value back in the map. Without more code, this is not an atomic operation.
Is it possible to achieve this without synchronization (which would defeat the purpose of the ConcurrentHashMap)? Do I need to look at Guava?
Thanks for any pointers.
P.S.
There is a related question on SO (Most efficient way to increment a Map value in Java), but it focuses on performance rather than multi-threading.
UPDATE
For those arriving here through searches on the same topic: besides the answers below, there's a useful presentation that incidentally covers it. See slides 24-33.
Upvotes: 45
Views: 25631
Reputation: 340
I did a benchmark to compare the performance of LongAdder and AtomicLong.
LongAdder performed better in my benchmark: for 500 iterations using a map of size 100 (10 concurrent threads), the average time for LongAdder was 1270 ms, while that for AtomicLong was 1315 ms.
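The benchmark itself isn't shown, but a rough sketch of this kind of comparison (class name, key count, and iteration counts are made up here, not the author's actual harness) could look like this:

import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class CounterBenchmark {

    // Runs 'threads' workers, each doing 'increments' random increments over 'keys' counters,
    // and returns the elapsed wall-clock time in milliseconds.
    static long run(int keys, int threads, int increments, boolean useLongAdder) throws Exception {
        ConcurrentHashMap<Integer, LongAdder> adders = new ConcurrentHashMap<>();
        ConcurrentHashMap<Integer, AtomicLong> atomics = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < increments; i++) {
                    int key = ThreadLocalRandom.current().nextInt(keys);
                    if (useLongAdder) {
                        adders.computeIfAbsent(key, k -> new LongAdder()).increment();
                    } else {
                        atomics.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("LongAdder:  " + run(100, 10, 1_000_000, true) + " ms");
        System.out.println("AtomicLong: " + run(100, 10, 1_000_000, false) + " ms");
    }
}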
Upvotes: 1
Reputation: 198211
Guava's new AtomicLongMap (in release 11) might address this need.
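A minimal sketch of how it would be used (the key name here is just illustrative):

import com.google.common.util.concurrent.AtomicLongMap;

AtomicLongMap<String> counters = AtomicLongMap.create();
counters.incrementAndGet("page.views");        // atomically increments, creating the entry if absent
long views = counters.get("page.views");       // reads a single counter
Map<String, Long> snapshot = counters.asMap(); // read-only live view, handy for dumping the stats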
Upvotes: 22
Reputation: 39576
In Java 8:
ConcurrentHashMap<String, LongAdder> map = new ConcurrentHashMap<>();
map.computeIfAbsent("key", k -> new LongAdder()).increment();
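Dumping the stats (the rare read operation from the question) is then just a matter of summing each adder; a sketch, reusing the map from above:

// Snapshot of all counters; LongAdder.sum() returns the current total.
Map<String, Long> snapshot = new HashMap<>();
map.forEach((key, adder) -> snapshot.put(key, adder.sum()));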
Upvotes: 38
Reputation: 31
I needed to do the same thing. I'm using ConcurrentHashMap + AtomicInteger. A ReentrantReadWriteLock was also introduced for an atomic flush (very similar behavior).
Tested with 10 keys and 10 threads per key. Nothing was lost. I just haven't tried several flushing threads yet, but I hope it will work.
The massive single-user-mode flush is still bothering me... I want to remove the RWLock and break the flushing down into small pieces. Tomorrow.
private final ConcurrentHashMap<String, AtomicInteger> counters = new ConcurrentHashMap<String, AtomicInteger>();
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

public void count(String invoker) {
    rwLock.readLock().lock();
    try {
        AtomicInteger currentValue = counters.get(invoker);
        // If the entry is absent, initialize it. If another thread added a value first,
        // we yield to it and do not replace the existing value.
        if (currentValue == null) {
            // value we want to initialize with
            AtomicInteger newValue = new AtomicInteger(0);
            // try to put it and get back the old value
            AtomicInteger oldValue = counters.putIfAbsent(invoker, newValue);
            // if the old value is not null, our insertion lost the race - use the value already in the map
            // if the old value is null, our value was inserted - use it
            currentValue = oldValue != null ? oldValue : newValue;
        }
        // counter +1
        currentValue.incrementAndGet();
    } finally {
        rwLock.readLock().unlock();
    }
}

/**
 * @return Map with counting results
 */
public Map<String, Integer> getCount() {
    // stop all updates (they hold the read lock)
    rwLock.writeLock().lock();
    try {
        HashMap<String, Integer> resultMap = new HashMap<String, Integer>();
        // copy all current values into a new map
        for (Map.Entry<String, AtomicInteger> entry : counters.entrySet()) {
            resultMap.put(entry.getKey(), entry.getValue().intValue());
        }
        // reset the ConcurrentMap
        counters.clear();
        return resultMap;
    } finally {
        rwLock.writeLock().unlock();
    }
}
Upvotes: 1
Reputation: 38526
You're pretty close. Why don't you try something like a ConcurrentHashMap<Key, AtomicLong>?
If your Keys (metrics) are unchanging, you could even just use a standard HashMap (they are thread-safe if read-only, but you'd be well advised to make this explicit with an ImmutableMap from Google Collections or Collections.unmodifiableMap, etc.).
This way, you can use map.get(myKey).incrementAndGet() to bump statistics.
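For example, if the metric names are known up front, a sketch along these lines works (the metric names are made up):

// The map itself is never modified after construction, so sharing it is safe;
// all mutation goes through the thread-safe AtomicLong values.
Map<String, AtomicLong> tmp = new HashMap<>();
tmp.put("logins", new AtomicLong());
tmp.put("searches", new AtomicLong());
final Map<String, AtomicLong> stats = Collections.unmodifiableMap(tmp);

stats.get("logins").incrementAndGet(); // bump a counter without touching the map structure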
Upvotes: 9
Reputation: 147164
Other than going with AtomicLong, you can do the usual CAS-loop thing:
private final ConcurrentMap<Key, Long> counts =
    new ConcurrentHashMap<Key, Long>();

public void increment(Key key) {
    if (counts.putIfAbsent(key, 1L) == null) {
        return;
    }
    Long old;
    do {
        old = counts.get(key);
    } while (!counts.replace(key, old, old + 1)); // Assumes no removal.
}
(I've not written a do-while loop for ages.)
For small values the Long will probably be "cached" (autoboxing reuses instances in the -128 to 127 range). For larger values, it may require allocation. But allocations are actually extremely fast (and you can cache further); it depends on what you expect in the worst case.
Upvotes: 7