Nathan Fellman

Reputation: 127638

Is there any way to limit the size of an STL Map?

I want to implement some sort of lookup table in C++ that will act as a cache. It is meant to emulate a piece of hardware I'm simulating.

The keys are non-integer, so I'm guessing a hash is in order. I have no intention of reinventing the wheel, so I intend to use std::map for this (though suggestions for alternatives are welcome).

The question is, is there any way to limit the size of the hash to emulate the fact that my hardware is of finite size? I'd expect the hash's insert method to return an error message or throw an exception if the limit is reached.

If there is no such way, I'll simply check its size before trying to insert, but that seems like an inelegant way to do it.

Upvotes: 21

Views: 16859

Answers (4)

raz

Reputation: 492

I have the same use case. It might not align completely with the question, but the concept seems similar.

#include <array>
#include <utility>

int main() {
    // A fixed-size std::array of key/value pairs: the capacity is a hard
    // compile-time limit, though lookup is linear rather than logarithmic.
    std::array<std::pair<int, int>, 5> myMap;
    myMap[0] = std::make_pair(1, 10);
    myMap[1] = std::make_pair(2, 20);
    myMap[2] = std::make_pair(3, 30);
    myMap[3] = std::make_pair(4, 40);
    myMap[4] = std::make_pair(5, 50);
    return 0;
}

Upvotes: 0

timday

Reputation: 24892

If you're implementing LRU caches or similar, I've found boost::bimap to be invaluable. Use one index to be the primary "object ID" key (which can be an efficient vector view if you have a simple enumeration of objects), and the other index can be an ordered timestamp.
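
For illustration, here is a Boost-free sketch of the same idea built from std::list plus std::unordered_map; the class name and interface are hypothetical, not part of boost::bimap:

```cpp
#include <cstddef>
#include <list>
#include <stdexcept>
#include <unordered_map>
#include <utility>

// Sketch of a fixed-capacity LRU cache: the list keeps entries in recency
// order (most recent at the front), and the map gives average O(1) lookup
// into the list. boost::bimap packages the same two-index idea for you.
template <typename Key, typename Value>
class lru_cache {
public:
    explicit lru_cache(std::size_t capacity) : capacity_(capacity) {}

    void put(const Key& key, const Value& value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            // Key already cached: update it and mark it most recently used.
            it->second->second = value;
            order_.splice(order_.begin(), order_, it->second);
            return;
        }
        if (index_.size() == capacity_) {
            // Evict the least recently used entry (back of the list).
            index_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(key, value);
        index_[key] = order_.begin();
    }

    // Returns the value and refreshes the entry's recency.
    Value& get(const Key& key) {
        auto it = index_.find(key);
        if (it == index_.end())
            throw std::out_of_range("key not in cache");
        order_.splice(order_.begin(), order_, it->second);
        return it->second->second;
    }

    bool contains(const Key& key) const { return index_.count(key) != 0; }
    std::size_t size() const { return index_.size(); }

private:
    std::list<std::pair<Key, Value>> order_;  // MRU at front, LRU at back
    std::unordered_map<Key,
        typename std::list<std::pair<Key, Value>>::iterator> index_;
    std::size_t capacity_;
};
```

The splice calls are what make this cheap: moving an element to the front of a std::list never invalidates the iterators stored in the map.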

Upvotes: 2

Daniel Daranas

Reputation: 22644

There is no way to "automatically" limit the size of an std::map, simply because a "limited std::map" would not be an std::map anymore.

Wrap the std::map in a class of your own which just checks the size of the std::map against a constant before inserting, and does whatever you want in case the maximum size has been reached.
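
As a sketch of that wrapper (the class name, the capacity handling, and the bool-returning insert are all illustrative choices, not a standard API):

```cpp
#include <cstddef>
#include <map>
#include <utility>

// Hypothetical wrapper: a std::map that refuses insertions once a fixed
// capacity is reached, signalling failure through the return value
// (mirroring the bool in std::map::insert's pair<iterator, bool>).
template <typename Key, typename Value>
class bounded_map {
public:
    explicit bounded_map(std::size_t capacity) : capacity_(capacity) {}

    // Returns false if the map is full and the key is not already present.
    bool insert(const Key& key, const Value& value) {
        typename std::map<Key, Value>::iterator it = map_.find(key);
        if (it != map_.end()) {
            it->second = value;   // key exists: overwrite, no growth
            return true;
        }
        if (map_.size() >= capacity_)
            return false;         // full: reject, like finite hardware
        map_.insert(std::make_pair(key, value));
        return true;
    }

    std::size_t size() const { return map_.size(); }

private:
    std::map<Key, Value> map_;
    std::size_t capacity_;
};
```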

Upvotes: 6

First off, a map structure is not a hash table, but rather a balanced binary tree. This has an impact on lookup times: O(log N) instead of O(1) for an optimal hash implementation. This may or may not affect your problem (if the number of actual keys and operations is small it can suffice, but the emulation of the cache may be somewhat suboptimal).
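
If average O(1) lookup matters, C++11's std::unordered_map is the standard library's hash table; a minimal sketch, with the keys and values chosen purely for illustration:

```cpp
#include <string>
#include <unordered_map>

// std::unordered_map hashes its keys, giving average O(1) lookup
// versus std::map's O(log N) tree traversal.
std::unordered_map<std::string, int> make_table() {
    std::unordered_map<std::string, int> table;
    table["addr0"] = 10;   // illustrative entries only
    table["addr1"] = 20;
    return table;
}
```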

The simplest solution is to encapsulate the actual data structure in a class that checks the size on each insertion (size() is constant time in all STL implementations) and fails if you try to add one element too many.

#include <map>
#include <stdexcept>
#include <utility>

// key_t and value_t are placeholders: substitute whatever your table stores.
typedef int key_t;
typedef int value_t;

class cache {
public:
   static const int max_size = 100;
   typedef std::map<key_t, value_t> cache_map_t;
   void add( key_t key, value_t value ) {
      cache_map_t::iterator it = m_table.find( key );
      // Only a *new* key grows the map, so the error case is: key absent
      // and the table already full.
      if ( it == m_table.end() && m_table.size() == max_size ) {
         // handle error, throw exception...
         throw std::length_error( "cache is full" );
      } else {
         m_table.insert( it, std::make_pair(key,value) );
      }
   }
private:
   cache_map_t m_table;
};
// Don't forget, in just one translation unit:
const int cache::max_size;

Upvotes: 13
