Reputation: 2339
There are 3 objects stored in my map, a couple of MB each. They don't change, so it makes sense to cache them locally on the node, and that's what I thought I was doing until I realized the average get latency is huge and slows down my computations considerably. See this Hazelcast console:
This makes me wonder where the latency comes from. Is it those 90 and 48 misses, which I think happened at the start? The computations run in parallel, so I figure they could all have issued a get request before the entries were cached, in which case none of them would benefit from the Near Cache at that point. Is there some pre-loading method I could run before I trigger all those parallel tasks? Btw., why is the entry memory 0 even though there are entries in that Near Cache data table?
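Something like this warm-up pass is what I have in mind as a pre-loading step (a rough sketch, assuming the Hazelcast 3.x API; with only 3 entries a keySet() scan should be cheap):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class NearCacheWarmup {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<String, Object> commons = hz.getMap("commons");

            // Touch every key once so each entry is pulled into the local
            // Near Cache before the parallel tasks start; a plain get()
            // populates the Near Cache, so later reads stay local.
            for (String key : commons.keySet()) {
                commons.get(key);
            }
        }
    }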
Here is my map config:
<map name="commons">
    <in-memory-format>BINARY</in-memory-format>
    <backup-count>0</backup-count>
    <async-backup-count>0</async-backup-count>
    <eviction-policy>NONE</eviction-policy>
    <near-cache>
        <in-memory-format>OBJECT</in-memory-format>
        <max-size>0</max-size>
        <time-to-live-seconds>0</time-to-live-seconds>
        <max-idle-seconds>0</max-idle-seconds>
        <eviction-policy>NONE</eviction-policy>
        <invalidate-on-change>true</invalidate-on-change>
        <cache-local-entries>true</cache-local-entries>
    </near-cache>
</map>
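For reference, here is what I believe is the programmatic equivalent of that XML (a sketch against the Hazelcast 3.x API, where NearCacheConfig still takes an int max size and a String eviction policy):

    import com.hazelcast.config.Config;
    import com.hazelcast.config.EvictionPolicy;
    import com.hazelcast.config.InMemoryFormat;
    import com.hazelcast.config.MapConfig;
    import com.hazelcast.config.NearCacheConfig;

    public class CommonsMapConfig {
        public static Config build() {
            NearCacheConfig nearCache = new NearCacheConfig()
                    .setInMemoryFormat(InMemoryFormat.OBJECT)
                    .setMaxSize(0)                  // 0 = unlimited
                    .setTimeToLiveSeconds(0)        // never expire
                    .setMaxIdleSeconds(0)           // never idle out
                    .setEvictionPolicy("NONE")
                    .setInvalidateOnChange(true)
                    .setCacheLocalEntries(true);    // also cache entries owned by this node

            MapConfig mapConfig = new MapConfig("commons")
                    .setInMemoryFormat(InMemoryFormat.BINARY)
                    .setBackupCount(0)
                    .setAsyncBackupCount(0)
                    .setEvictionPolicy(EvictionPolicy.NONE)
                    .setNearCacheConfig(nearCache);

            return new Config().addMapConfig(mapConfig);
        }
    }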
The actual question is: why are there so many misses in the Near Cache, and is that where the huge average get latency comes from?
Upvotes: 1
Views: 967
Reputation: 2345
The latency that Management Center shows is the latency after a request hits the server. If you have a Near Cache and a read is served from it, that read will not show up in Management Center, so I suspect you are not actually observing that high latency in your application. I also see that there have been 34 events, so I assume the entry has been updated. When an entry is updated, it is evicted from the Near Cache, and the subsequent read hits the server.
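One way to verify this from the application side is to read the Near Cache statistics directly; a minimal sketch (assuming the Hazelcast 3.x API):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.monitor.NearCacheStats;

    public class NearCacheCheck {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<String, Object> commons = hz.getMap("commons");

            // Hits vs. misses as seen by this member's Near Cache; if hits
            // dominate after the warm-up phase, reads are being served
            // locally regardless of what Management Center reports.
            NearCacheStats stats = commons.getLocalMapStats().getNearCacheStats();
            System.out.println("Near Cache hits:   " + stats.getHits());
            System.out.println("Near Cache misses: " + stats.getMisses());
        }
    }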
Upvotes: 2