Reputation: 1
I am using Infinispan 8.2.11. While iterating over the cache with cache.entrySet().iterator(), the thread gets stuck and does not move on. Here is the thread dump I collected:
"EJB default - 32" #586 prio=5 os_prio=0 tid=0x000055ce2f619000 nid=0x2853 runnable [0x00007f8780c7a000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006efb93ba8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at org.infinispan.stream.impl.DistributedCacheStream$IteratorSupplier.get(DistributedCacheStream.java:754)
at org.infinispan.util.CloseableSuppliedIterator.getNext(CloseableSuppliedIterator.java:26)
at org.infinispan.util.CloseableSuppliedIterator.hasNext(CloseableSuppliedIterator.java:32)
at org.infinispan.stream.impl.RemovableIterator.getNextFromIterator(RemovableIterator.java:34)
at org.infinispan.stream.impl.RemovableIterator.hasNext(RemovableIterator.java:43)
at org.infinispan.commons.util.Closeables$IteratorAsCloseableIterator.hasNext(Closeables.java:93)
at org.infinispan.stream.impl.RemovableIterator.getNextFromIterator(RemovableIterator.java:34)
at org.infinispan.stream.impl.RemovableIterator.hasNext(RemovableIterator.java:43)
at org.infinispan.commons.util.IteratorMapper.hasNext(IteratorMapper.java:26)
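For context, in Infinispan 8 cache.entrySet().iterator() returns a CloseableIterator that holds cluster-side resources, so it should be closed when iteration ends. The hang above happens inside hasNext(), so closing alone will not fix it, but the pattern is worth showing. The sketch below is a minimal stand-in written with the JDK only: the CloseableIterator interface and the entries() factory here are illustrative local substitutes for the real Infinispan types, used purely to show the try-with-resources shape of the loop.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IterationSketch {
    // Illustrative stand-in for org.infinispan.commons.util.CloseableIterator:
    // an Iterator that must be closed to release remote iteration state.
    interface CloseableIterator<E> extends Iterator<E>, AutoCloseable {
        @Override
        void close(); // no checked exception, mirroring Infinispan's interface
    }

    // Hypothetical factory standing in for cache.entrySet().iterator();
    // here it just wraps a local list.
    static CloseableIterator<String> entries(List<String> backing) {
        Iterator<String> it = backing.iterator();
        return new CloseableIterator<String>() {
            public boolean hasNext() { return it.hasNext(); }
            public String next() { return it.next(); }
            public void close() { /* release remote iterator state here */ }
        };
    }

    public static void main(String[] args) {
        List<String> collected = new ArrayList<>();
        // try-with-resources guarantees close() even if we break out early
        try (CloseableIterator<String> it = entries(List.of("a", "b", "c"))) {
            while (it.hasNext()) {
                collected.add(it.next());
            }
        }
        System.out.println(collected); // prints [a, b, c]
    }
}
```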
I found an article in the JBoss Community Archive that describes a similar issue: https://developer.jboss.org/thread/271158. There was a fix delivered in Infinispan 9 which I believe resolves this problem: ISPN-9080
Is it possible to backport this fix to Infinispan 8? Unfortunately, I cannot upgrade the Infinispan version in my project.
Upvotes: 0
Views: 203
Reputation: 357
In any case, I would recommend considering an update of your Infinispan version. Version 11 is the current stable release; by staying on an old one you miss many fixes and may run into problems that have already been solved.
But the problem in your case is that something happened in your cluster and cluster topology updates were missed. If your cluster is stable, this problem will not occur. So if you can find the cause of the unstable cluster (it could be intentional stopping and starting of nodes) and that cause is avoidable, you can prevent the hang.
Upvotes: 0
Reputation: 909
Unfortunately, we do not maintain such old versions. The suggested approach is to update to a more recent version. If that is not possible, you could try patching the older version yourself, since the changes are available: https://github.com/infinispan/infinispan/pull/5924/files
Note also that this does not fix the underlying issue; it only addresses a symptom of it. The actual problem is that the newest topology was not installed for some reason, but the original poster was not able to provide sufficient information to determine why.
Upvotes: 0