Reputation: 1
I'm seeing very slow times iterating over a Chronicle Map: in the example below, about 93 ms per iteration over 1M entries on my 2013 MacBook Pro. I'm wondering if there's a better way to iterate, whether I'm doing something wrong, or whether this is expected. I know Chronicle Map isn't optimized for iteration, but this ticket from a few years ago made me expect much faster iteration times. Toy example below:
import java.nio.ByteBuffer;

import net.openhft.chronicle.core.values.IntValue;
import net.openhft.chronicle.hash.Data;
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.values.Values;
import org.agrona.BitUtil;

public static void main(String[] args) throws Exception {
    int numEntries = 1_000_000;
    int numIterations = 1_000;
    // 12 bytes per value: one long plus one int
    int avgEntrySize = BitUtil.SIZE_OF_LONG + BitUtil.SIZE_OF_INT;

    ChronicleMap<IntValue, ByteBuffer> map = ChronicleMap.of(IntValue.class, ByteBuffer.class)
            .name("test").entries(numEntries).averageValueSize(avgEntrySize)
            .putReturnsNull(true).create();

    IntValue value = Values.newHeapInstance(IntValue.class);
    ByteBuffer buffer = ByteBuffer.allocate(avgEntrySize);
    for (int i = 0; i < numEntries; i++) {
        value.setValue(i);
        buffer.clear();
        buffer.putLong(i);
        buffer.putInt(i);
        buffer.flip();
        map.put(value, buffer);
    }
    System.out.println("Finished insertion");

    // Warm-up iterations
    for (int i = 0; i < numIterations; i++) {
        map.forEachEntry(entry -> {
            Data<ByteBuffer> data = entry.value();
            ByteBuffer val = data.get();
        });
    }
    System.out.println("Finished priming");

    // Timed iterations
    long start = System.currentTimeMillis();
    for (int i = 0; i < numIterations; i++) {
        map.forEachEntry(entry -> {
            Data<ByteBuffer> data = entry.value();
            ByteBuffer val = data.get();
        });
    }
    System.out.println(
            "Elapsed: " + (System.currentTimeMillis() - start) + " for " + numIterations
                    + " iterations");
}
Output:
Finished insertion
Finished priming
Elapsed: 93327 for 1000 iterations
Upvotes: 0
Views: 1102
Reputation: 15283
Your result of 93 milliseconds per 1 million keys exactly matches the result of the benchmark here: http://jetbrains.github.io/xodus/#benchmarks, so it's in the expected ballpark. 93 ms per 1 million keys is 93 ns per key; "very slow" compared to what?

Your map contains 16 MB of payload, and its total off-heap size is ~30 MB (FYI, you can check that with map.offHeapMemoryUsed()), which is much more than the L3 cache of consumer laptops, so iteration speed is bound by the latency of main memory. Chronicle Map's iteration is mostly not sequential, so memory prefetching doesn't help. I've created an issue about this.
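For example, reusing numEntries, avgEntrySize and map from your code, you can compare the raw payload to the actual off-heap footprint (the exact overhead depends on the map configuration):

    // ~16 MB of payload: 4-byte key + 12-byte value per entry
    long payload = (long) numEntries * (BitUtil.SIZE_OF_INT + avgEntrySize);
    System.out.println("Payload bytes:  " + payload);
    // offHeapMemoryUsed() counts everything the map allocates off-heap, not just the payload
    System.out.println("Off-heap bytes: " + map.offHeapMemoryUsed());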
Also, several notes about your code:
- Your value size is constant (12 bytes), so use constantValueSizeBySample(ByteBuffer.allocate(12)) instead of averageValueSize() (see the sketch below).
- Even if the value size wasn't constant, it's preferred to use averageValue() instead of averageValueSize(), because you cannot be sure how many bytes the serializers use for the values.
- Your value is just a long plus an int, so it could be modelled as a value interface, the same way the key uses IntValue, instead of a ByteBuffer.
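A sketch of what those suggestions could look like; LongIntValue is a hypothetical value interface for illustration, not something that ships with Chronicle:

    // Sizing the map by a constant-size sample value instead of averageValueSize():
    ChronicleMap<IntValue, ByteBuffer> byConstantSize = ChronicleMap.of(IntValue.class, ByteBuffer.class)
            .name("test")
            .entries(1_000_000)
            .constantValueSizeBySample(ByteBuffer.allocate(12))
            .putReturnsNull(true)
            .create();

    // Hypothetical value interface for the (long, int) value, analogous to IntValue.
    // Chronicle Values generates a flyweight implementation from the get/set pairs.
    interface LongIntValue {
        long getFirst();
        void setFirst(long first);

        int getSecond();
        void setSecond(int second);
    }

    // With a fixed-size value interface the builder should not need a size hint:
    ChronicleMap<IntValue, LongIntValue> byValueInterface = ChronicleMap.of(IntValue.class, LongIntValue.class)
            .name("test-values")
            .entries(1_000_000)
            .putReturnsNull(true)
            .create();

The value-interface route should also let you reuse a single heap instance on reads via map.getUsing(key, using), instead of materializing a ByteBuffer per entry.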
Upvotes: 1