Obi Wan Kenobi

Reputation: 39

performance of declaring objects

I was coding for a university contest and noticed something: when I declare a map inside a loop, such as below:

for (int i = 0; i < n; i++)
{
    map<int, bool> hashMap;
    //...
}

it takes more time than:

map<int, bool> hashMap;
for (int i = 0; i < n; i++)
{
    hashMap.clear();
    //...
}

so I was wondering: why does declaring an object inside a loop have worse performance than just clearing and reusing it?

Upvotes: 1

Views: 91

Answers (1)

dxiv

Reputation: 17638

In the first version of the code the constructor and destructor of hashMap are called n times, while in the second version they are called just once.

The natural question is why destructing and constructing a new map object would be noticeably different vs. clearing and reusing one and the same object. This can be definitively answered only by inspecting the actual map code of the implementation being used, so the following is just a plausible guess.

std::map is usually implemented using red-black trees, and it is common for red-black tree implementations to use a sentinel node for the NIL leaves; see for example Benefit of a sentinel node in a red black tree?. This sentinel must be allocated each time a tree is constructed, then released upon destruction. In contrast, calling clear() empties the tree but reuses the associated internal objects, sentinel included.

For a real-life example of such an implementation, Microsoft's _Tree constructor calls the aptly named _Alloc_sentinel_and_proxy(). Since their map derives from _Tree, this gets called each time a map object is constructed.

Upvotes: 3
