Reputation: 1187
Given the same number of array/object layers, each index indicating the same things, what is the best order to nest the arrays and objects in?
I'm making a grid-based game, and I need to store several pieces of information about each square. I know there's no getting around multiple levels of arrays/objects. I've already got it working one way, so before I go changing a huge amount of code (I've got at least 5 functions with a heavy switch statement inside each one to choose the layer we're working with, maybe 10 others I'd also have to change, plus the initialization function), I'd like to know if this change actually lines up with best practices.
Currently, I've got it laid out like this: Game.grid.map[x][y][layer][direction]
That is, the map is a regular 2D array, and each element is an object with the properties "V", "S", "M", and "G" (Via, Silicon, Metal, and Gate), each of which contains an array.
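To make that concrete, here's a minimal sketch of the coordinate-first layout. The grid size and direction count are assumptions for illustration (20x20 to match the 400 squares mentioned below):

// Coordinate-first layout: Game.grid.map[x][y][layer][direction]
// SIZE and DIRECTIONS are illustrative assumptions.
var Game = { grid: {} };
var SIZE = 20, DIRECTIONS = 4;
Game.grid.map = [];
for (var x = 0; x < SIZE; x++) {
    Game.grid.map[x] = [];
    for (var y = 0; y < SIZE; y++) {
        // one small object per square, one direction array per layer
        Game.grid.map[x][y] = {
            V: new Array(DIRECTIONS), // Via
            S: new Array(DIRECTIONS), // Silicon
            M: new Array(DIRECTIONS), // Metal
            G: new Array(DIRECTIONS)  // Gate
        };
    }
}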
I was considering switching to Game.grid.map[layer][x][y][direction]
I would have to change a little bit of the structure (such as where I have var base = this.map[x][y]; and then use base[layer][dir]), but I could easily make this change and use the variables baseM, baseG, and baseS instead; it's just tedious to change it. I would like to know whether it would be more efficient to have 1 object containing 5 large arrays rather than a large 2D array of many very small objects.
I read somewhere that for pure arrays, it depends on how often the index at each level changes, with outer levels changing more slowly. By that heuristic, it makes sense to change it: I tend to deal with metal all at once, vias all at once, and silicon and gates together(ish). A sketch of that layout is below.
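Here's the layer-first version for comparison (reusing SIZE and DIRECTIONS from the sketch above), plus what "deal with metal all at once" looks like when the layer is the outermost, slowest-changing index:

// Layer-first layout: Game.grid.map[layer][x][y][direction]
Game.grid.map = { V: [], S: [], M: [], G: [] };
Object.keys(Game.grid.map).forEach(function (layer) {
    var plane = Game.grid.map[layer];
    for (var x = 0; x < SIZE; x++) {
        plane[x] = [];
        for (var y = 0; y < SIZE; y++) {
            plane[x][y] = new Array(DIRECTIONS);
        }
    }
});

// Dealing with all metal at once now touches a single plane:
var metal = Game.grid.map.M;
for (var x = 0; x < SIZE; x++) {
    for (var y = 0; y < SIZE; y++) {
        metal[x][y][0] = 1; // direction 0 of the metal layer at (x, y)
    }
}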
Searching Google brought me to this Q&A: Fastest way to read/store lots of multidimensional data? (Java), but that only deals with pure arrays, not a mix of objects and arrays.
===Edit===
1) So yeah, I missed that the Q&A I found was about Java instead of JavaScript.
2) I modified their test a bit:
var arr = [];
for (var x = 0; x < 100; x++) {
    arr[x] = [];
    for (var y = 0; y < 100; y++) {
        arr[x][y] = [];
        for (var z = 0; z < 100; z++) {
            arr[x][y][z] = 1;
        }
    }
}
This took about 10 ms (measured by pasting the snippet into the console and subtracting its timestamp from the timestamp of the 'undefined' it printed). Both making the top level an object and making the bottom level a bunch of objects also came in at 10 ms, and making all the levels objects took 9 ms. That surprised me, but it also means Bergi was right: it doesn't matter. I was concerned about the sheer number of objects in existence, but if 10,000 new objects take less than 10 ms, my 400-object grid shouldn't ever be an issue.
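(For anyone repeating the test: console.time/console.timeEnd gives a cleaner measurement than subtracting console timestamps by hand.)

console.time("fill");
var arr = [];
for (var x = 0; x < 100; x++) {
    arr[x] = [];
    for (var y = 0; y < 100; y++) {
        arr[x][y] = [];
        for (var z = 0; z < 100; z++) {
            arr[x][y][z] = 1;
        }
    }
}
console.timeEnd("fill"); // logs something like "fill: 10ms"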
So really the only efficiency difference to be found is in how long it takes you to read and write code labeled each way, and for me it's definitely easier to avoid parallel arrays on this one. (I didn't realize this question would come down to micro-optimization.) Thank you to all who answered.
Upvotes: 0
Views: 132
Reputation: 19334
How many objects are you talking about? As @Bergi's answer mentions, there is a LOT more to optimize before you head down this road. Technically, arrays can be slightly faster to access by index during iteration than objects by property, but even then it depends on the JS engine's implementation, which can vary a lot in tight situations like arrays of arrays.
Also, depending on the sheer number of objects, you could hit memory constraints. In the end, test in the browsers you care most about supporting; that's what it comes down to. Avoid looping across too many items at once (once you exceed 10k or so you can get hiccups; over a million and you can have real issues).
Upvotes: 0
Reputation: 664538
Read up on parallel arrays and the major order of multi-dimensional arrays. In theory, you can get some speed from cache locality, which depends on how you are typically going to access the array.
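For example, here's a minimal sketch of what "access order matching storage order" means for a nested JS array (visitAll and visit are hypothetical names):

// Iterate in storage order: the outer index changes slowest,
// so the inner loop walks one row at a time.
function visitAll(map, visit) {
    for (var x = 0; x < map.length; x++) {
        var row = map[x];
        for (var y = 0; y < row.length; y++) {
            visit(row[y]);
        }
    }
}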
In practice, and especially in an interpreted language like JS, it will hardly matter (and there's much else to optimise first). Go for what makes the most sense to you, and stick with that.
Upvotes: 2