Reputation: 103
I tested them with this code (on Visual Studio 2010 SP1):
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <map>
#include <unordered_map>
#include <hash_map>

int main()
{
    clock_t time;
    const int LOOP = (1 << 16);

    std::map<int, int> my_map;
    std::unordered_map<int, int> map_unordered_map;
    std::hash_map<int, int> my_hash_map;

    // Time LOOP sequential insertions into std::map.
    time = clock();
    for (int i = 0; i != LOOP; ++i)
    {
        my_map[i] = i;
    }
    std::cout << "map: " << ((double)(clock() - time) / CLOCKS_PER_SEC) << std::endl;

    // Time LOOP sequential insertions into std::unordered_map.
    time = clock();
    for (int i = 0; i != LOOP; ++i)
    {
        map_unordered_map[i] = i;
    }
    std::cout << "unordered_map: " << ((double)(clock() - time) / CLOCKS_PER_SEC) << std::endl;

    // Time LOOP sequential insertions into hash_map.
    time = clock();
    for (int i = 0; i != LOOP; ++i)
    {
        my_hash_map[i] = i;
    }
    std::cout << "hash_map: " << ((double)(clock() - time) / CLOCKS_PER_SEC) << std::endl;

    system("PAUSE");
    return EXIT_SUCCESS;
}
And the results are strange:
In DEBUG:
map: 0.289
unordered_map: 10.738
hash_map: 10.58

In RELEASE:
map: 0.101
unordered_map: 0.463
hash_map: 0.429
Upvotes: 4
Views: 2080
Reputation: 410
This is an example of the amortized versus worst-case cost of an algorithm.
std::map uses a red-black tree, which has O(log N) insert complexity.
std::hash_map uses a hash table, which has O(1) amortized insert complexity.
However, the hash table has a worst-case complexity of O(N) when it has to grow and rehash the table.
In your case you end up doing a lot of rehashing, so the hash table inserts hit their worst case often enough that the tree insert becomes faster: O(N) > O(log N).
If you initialize the hash_map with a large enough table, the hash table will never hit its worst case and it will be faster than the tree: O(1) < O(log N).
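As a minimal sketch of that suggestion, the table can be pre-sized so no rehash happens during the insert loop. This uses std::unordered_map, whose constructor accepts an initial bucket count; the bucket count of 1 << 17 is an assumed value (anything comfortably above LOOP divided by the maximum load factor would do), and this is an illustration rather than the original poster's benchmark:

#include <ctime>
#include <iostream>
#include <unordered_map>

int main()
{
    const int LOOP = (1 << 16);

    // Pre-size the table so the insert loop never triggers a rehash.
    // 1 << 17 buckets is an assumed value for illustration.
    std::unordered_map<int, int> pre_sized(1 << 17);

    clock_t start = clock();
    for (int i = 0; i != LOOP; ++i)
    {
        pre_sized[i] = i;
    }
    std::cout << "pre-sized unordered_map: "
              << ((double)(clock() - start) / CLOCKS_PER_SEC) << std::endl;

    return 0;
}

Comparing this against the default-constructed containers in the question shows how much of the measured time is rehashing rather than the insert itself.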
Upvotes: 1
Reputation: 490048
Bottom line: this isn't likely to tell you much about the data structures in question.
Upvotes: 6