D_S_X

Reputation: 1559

JavaScript Object vs Map/Set key lookup performance

I was trying to explore how a plain JavaScript Object performs compared to a Map or Set for ordinary key accesses. I ran the three snippets below on JSBEN.CH.

Objects

const object = {};

for (let i = 0; i < 10000; ++i) {
    object[`key_${i}`] = 1;
}

let result = 0;
  
for (let i = 0; i < 10000; ++i) {
    result += object[`key_${i}`];
}

Maps

const map = new Map();

for (let i = 0; i < 10000; ++i) {
    map.set(`key_${i}`, 1);
}

let result = 0;

for (let i = 0; i < 10000; ++i) {
    result += map.get(`key_${i}`);
}

Sets

const set = new Set();

for (let i = 0; i < 10000; ++i) {
    set.add(`key_${i}`);
}

let result = 0;

for (let i = 0; i < 10000; ++i) {
    result += set.has(`key_${i}`); // has() returns a boolean, which is coerced to 0/1 here
}

As you can see in the test link, Map and Set perform almost identically, but the Object version is noticeably slower every time. Can someone explain why Objects perform worse than Map or Set for basic key access operations?

Edit 1: Just setting keys on an Object (without reading them back) is also slower than on a Map/Set.
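
A write-only version of the test (a sketch, not the exact JSBEN.CH test case) looks like this:

const object = {};
const map = new Map();

for (let i = 0; i < 10000; ++i) {
    object[`key_${i}`] = 1;      // Object write only, no reads
}

for (let i = 0; i < 10000; ++i) {
    map.set(`key_${i}`, 1);      // Map write only, no reads
}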

Upvotes: 21

Views: 20366

Answers (1)

Jonas Wilms

Reputation: 138267

Looking at relative numbers only is always dangerous, so here are some absolute numbers, measured on Node.js v14.14.0 on an Intel 8350U:

Iterations   Object write   Object read   Map write   Map read
100          0ms            0ms           0ms         0ms
1,000        3ms            1ms           0ms         0ms
10,000       7ms            4ms           8ms         1ms
1,000,000    1222ms         527ms         632ms       542ms

So as one can see, for 10,000 iterations the difference between Objects and Maps is about 1 millisecond in the run above, and since that is also the resolution of the time measurement, we can't really draw any conclusion from that test; the results are essentially noise.
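
To get below that one-millisecond resolution, one could switch the timing to process.hrtime.bigint(), which has nanosecond resolution - a sketch, not part of the original benchmark:

// Sketch: nanosecond-resolution timing instead of Date.now()
const start = process.hrtime.bigint();

const object = {};
for (let i = 0; i < 10_000; ++i) {
  object[`key_${i}`] = 1;
}

const elapsed = process.hrtime.bigint() - start;      // elapsed time in nanoseconds (BigInt)
console.log(`Object write took ${Number(elapsed) / 1e6} ms`);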

For 1 million iterations one can see a clear advantage of Map writes over Object writes, while read performance is very similar. Looking at absolute numbers, though, even the slower case is still roughly one million writes per second. So although Object writes are a lot slower, they are unlikely to be the bottleneck of your application.
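
A quick back-of-the-envelope check based on the 1,000,000-iteration row above (illustrative arithmetic only):

// Rough throughput derived from the table above
const objectWritesPerSec = 1_000_000 / (1222 / 1000); // ≈ 818,000 writes per second
const mapWritesPerSec    = 1_000_000 / (632 / 1000);  // ≈ 1,582,000 writes per second
console.log(objectWritesPerSec, mapWritesPerSec);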

For an accurate explanation, one would have to analyze all the steps the engine performs. For that you can run node --print-code and inspect the code that actually gets run. I don't have the time to do that, but here are some observations:

  1. If the object is constructed with Object.create(null) (so it has no prototype), performance is about the same, so prototype lookups do not influence performance here (see the first sketch after this list).

  2. After the 20th iteration, V8 switches object to its internal dictionary_map representation, so this is basically one hash map competing with another hash map (one can run node --allow-natives-syntax and then use %DebugPrint(object) to see the internal representation; a sketch also follows this list).

  3. For objects with more than 2 ** 23 keys, write performance degrades even further, see Performance degrade on JSObject after 2^23 items (though Maps can't be much larger either - see Maximum number of entries in Node.js Map?).
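
As a sketch of point 1 (illustrative, not the benchmark code itself), the prototype-free variant just swaps the object literal for Object.create(null):

const object = Object.create(null);   // no prototype behind this object

for (let i = 0; i < 10_000; ++i) {
  object[`key_${i}`] = 1;             // writes behave about the same as with a plain {}
}

console.log(Object.getPrototypeOf(object));   // null - there is no prototype chain to walk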

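And as a sketch of point 2 (the file name is just a placeholder):

// Run with: node --allow-natives-syntax inspect.js
const object = Object.create(null);   // same construction as in the benchmark below

for (let i = 0; i < 100; ++i) {       // well past the ~20 properties after which the dictionary representation was observed
  object[`key_${i}`] = 1;
}

%DebugPrint(object);                  // prints V8's internal representation of the object
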
For reference, here is the code used to run the benchmark:

function benchmark(TIMES) {
  console.log("BENCHMARK ", TIMES);

  // Object.create(null) gives an object without a prototype (see observation 1 above)
  const object = Object.create(null);

  let start = Date.now();
  for (let i = 0; i < TIMES; ++i) {
    object[`key_${i}`] = 1;
  }

  console.log("Object write took", Date.now() - start);
  start = Date.now();

  let result = 0;
  for (let i = 0; i < TIMES; ++i) {
    result += object[`key_${i}`];
  }

  console.log("Object read took", Date.now() - start);
  start = Date.now();

  const map = new Map();
  for (let i = 0; i < TIMES; ++i) {
    map.set(`key_${i}`, 1);
  }

  console.log("Map write took", Date.now() - start);
  start = Date.now();

  result = 0;
  for (let i = 0; i < TIMES; ++i) {
    result += map.get(`key_${i}`);
  }

  console.log("Map read took", Date.now() - start);
}

benchmark(100);
benchmark(1_000);
benchmark(10_000);
benchmark(1_000_000);

To sum up:

  • Use Maps for dictionaries with lots of different, changing keys, as they are slightly better than Objects that have fallen back to the internal hash-table representation
  • Use Objects for - well - objects. If you have a small number of keys and access them frequently, Objects are way faster (the engine can use inline caching, hidden classes with a fixed memory layout, etc.); see the sketch below
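
For example (an illustrative sketch of the second case; the names point and lengthSquared are just placeholders):

// A small, fixed set of keys accessed frequently - the case where plain Objects shine
const point = { x: 1, y: 2 };            // stable shape, so the engine can use a hidden class

function lengthSquared(p) {
  return p.x * p.x + p.y * p.y;          // these property accesses can be inline-cached
}

for (let i = 0; i < 10_000; ++i) {
  lengthSquared(point);
}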

Upvotes: 38
