Manish

Reputation: 3521

multimap of map data structure

I am parsing a file which has almost 1 billion (or maybe 1 trillion) records. I am using

 #include <cstring>
 #include <map>
 using namespace std;

 struct ltstr
 {
     bool operator()(const char* s1, const char* s2) const
     {
         return strcmp(s1, s2) < 0;
     }
 };

 multimap<char*, map<char*, char*, ltstr>, ltstr> m;

Is this an efficient way to use the above data structure in C++?

Regards

Upvotes: 0

Views: 244

Answers (1)

Jan Hudec

Reputation: 76276

No, it's not. Billions, let alone trillions, of records won't fit in the main memory of today's computers. Remember, a billion records will consume 32 GB just for the overhead of the map nodes, a further 16 GB for the pointers to the keys and values, and obviously n more GB, where n is the average combined length of the keys and values, for the actual data (assuming a 64-bit system; on a 32-bit system it's only half, but it won't fit in the 3 GB address-space limit anyway). Only a few large servers in the world have that much memory.
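A rough back-of-envelope sketch of where those numbers come from, assuming a typical red-black-tree node layout of about 32 bytes per entry on a 64-bit system (the exact overhead depends on the standard library implementation):

    #include <cstdio>

    int main()
    {
        const double records       = 1e9;  // one billion entries
        const double node_overhead = 32;   // bytes/node: 3 pointers + color, padded (assumed layout)
        const double kv_pointers   = 16;   // bytes/node: char* key + char* value on a 64-bit system

        std::printf("tree node overhead : %.0f GB\n", records * node_overhead / 1e9);
        std::printf("key/value pointers : %.0f GB\n", records * kv_pointers   / 1e9);
        // The actual key and value strings add roughly (average length) GB on top of this.
        return 0;
    }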

The only option for working with such a huge amount of data is to process it in small batches. If the processing can be done on each element separately, just load one element at a time, process it, and discard it. Whatever the size of the data, streaming processing is always fastest, because it only requires a fixed amount of memory and can therefore take good advantage of CPU caches.
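A minimal sketch of that streaming approach, assuming the records are newline-delimited and that process_record stands in for whatever per-record work you need (both the file name and the function are placeholders, not from the question):

    #include <fstream>
    #include <string>

    // Placeholder for the per-record work; the real logic depends on the task.
    void process_record(const std::string& line)
    {
        // ... parse and handle one record ...
    }

    int main()
    {
        std::ifstream in("records.txt");   // hypothetical input file
        std::string line;
        while (std::getline(in, line))     // only one record in memory at a time
            process_record(line);          // process it, then let it be discarded
        return 0;
    }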

If it can't be processed like that, because a specific order is needed or you need to look up entries or something, you'll need to prepare the data in an appropriate external (on-disk) structure. That is, sort them with an external merge sort (writing the partitions to temporary files), index them with a B-tree or a hash, or the like. It's a lot of work, but fortunately there are several libraries that implement these algorithms. I would suggest one of:

  • A *DBM external hashing library, like GDBM, Berkeley DB or ndbm. These provide just an external analogue of map; they are the simplest option, but the API is C-based.
  • STXXL, which provides external-memory variants of several standard containers and algorithms that work on them. The big advantage is that the API is the same as the standard library collections.
  • For more complex data operations, just go for SQLite. It's just as fast, and more complex data processing is simply easier to express in SQL (a minimal sketch follows this list).
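For illustration, a minimal SQLite sketch that loads key/value records into an on-disk, indexed table; the database file, table, and column names are made up for the example, and the hard-coded key/value stands in for the loop that would read your input file:

    #include <sqlite3.h>

    int main()
    {
        sqlite3* db = nullptr;
        if (sqlite3_open("records.db", &db) != SQLITE_OK)   // on-disk database file
            return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS records(k TEXT, v TEXT);"
            "CREATE INDEX IF NOT EXISTS idx_k ON records(k);",
            nullptr, nullptr, nullptr);

        sqlite3_stmt* ins = nullptr;
        sqlite3_prepare_v2(db, "INSERT INTO records(k, v) VALUES(?, ?);", -1, &ins, nullptr);

        sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr);  // batch inserts in one transaction
        // In the real program this would loop over the records of the input file.
        sqlite3_bind_text(ins, 1, "some-key",   -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(ins, 2, "some-value", -1, SQLITE_TRANSIENT);
        sqlite3_step(ins);
        sqlite3_reset(ins);
        sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);

        sqlite3_finalize(ins);
        sqlite3_close(db);
        return 0;
    }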

Upvotes: 1
