Reputation: 3521
I am parsing a file which has almost a billion (or maybe a trillion) records. I am using
#include <cstring>
#include <map>

using namespace std;

struct ltstr
{
    bool operator()(const char* s1, const char* s2) const
    {
        return strcmp(s1, s2) < 0;
    }
};

multimap<char*, map<char*, char*, ltstr>, ltstr> m;
Is this an efficient way to use the above data structure in C++?
Regards
Upvotes: 0
Views: 244
Reputation: 76276
No, it's not. Billions, let alone trillions, of records won't fit in the main memory of today's computers. Remember, a billion records will consume 32 GB for the overhead of the map nodes alone, a further 16 GB for the pointers to the keys and values, and obviously n more GB, where n is the average combined length of a key and a value, for the actual data (assuming a 64-bit system; on a 32-bit system it's only half, but it still won't fit within the 3 GB address-space limit). Only a few large servers in the world have that much memory.
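As a back-of-the-envelope check of those figures (assuming the typical ~32 bytes of red-black-tree node bookkeeping per entry and two 8-byte pointers per record on a 64-bit system):

#include <cstdio>

int main()
{
    const long long records       = 1000000000LL; // one billion entries
    const long long node_overhead = 32;           // parent/left/right pointers + color per tree node (assumed typical)
    const long long key_val_ptrs  = 2 * 8;        // one char* for the key, one for the value

    std::printf("tree overhead : ~%lld GB\n", records * node_overhead / 1000000000); // ~32 GB
    std::printf("key/value ptrs: ~%lld GB\n", records * key_val_ptrs  / 1000000000); // ~16 GB
    return 0;
}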
The only option for working with such a huge amount of data is to process it in small batches. If the processing can be done on each element separately, just load one element at a time, process it and discard it, as sketched below. No matter what the size of the data, streaming processing is always faster, because it only requires a fixed amount of memory and can therefore make efficient use of the CPU caches.
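For example, a minimal sketch of that one-record-at-a-time loop, assuming the records are newline-delimited, tab-separated key/value pairs; the file name records.txt and the process_record helper are placeholders:

#include <fstream>
#include <string>

// Hypothetical per-record handler; replace with the real processing.
static void process_record(const std::string& key, const std::string& value)
{
    // ... do something with exactly one record, then let it go out of scope ...
}

int main()
{
    std::ifstream in("records.txt");   // assumed input file name
    std::string line;

    // Only one record is in memory at any time: read, process, discard.
    while (std::getline(in, line))
    {
        const std::string::size_type tab = line.find('\t');
        if (tab == std::string::npos)
            continue;                  // skip malformed lines
        process_record(line.substr(0, tab), line.substr(tab + 1));
    }
    return 0;
}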
If it can't be processed like that, because a specific order is needed or you need to look up entries or something, you'll need to put the data into an appropriate external (on-disk) structure, i.e. sort it with an external merge sort (writing the partitions to temporary files), or index it with a B-tree, a hash table or the like; a rough sketch of the external sort follows. It's a lot of work, but fortunately there are several libraries that implement these algorithms. I would suggest either:
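A rough sketch of that external merge sort, assuming each line of the input is one record; the file names, chunk size and helper names are illustrative only, not any particular library's API:

#include <algorithm>
#include <cstddef>
#include <fstream>
#include <queue>
#include <string>
#include <vector>

// Phase 1: read the input in chunks that fit in memory, sort each chunk,
// and write it out as a sorted "run" file. Returns the run file names.
static std::vector<std::string> make_sorted_runs(const std::string& input,
                                                 std::size_t max_lines_per_run)
{
    std::ifstream in(input);
    std::vector<std::string> runs;
    std::vector<std::string> chunk;
    std::string line;

    auto flush = [&]() {
        if (chunk.empty()) return;
        std::sort(chunk.begin(), chunk.end());
        std::string name = "run_" + std::to_string(runs.size()) + ".tmp";
        std::ofstream out(name);
        for (const std::string& s : chunk) out << s << '\n';
        runs.push_back(name);
        chunk.clear();
    };

    while (std::getline(in, line)) {
        chunk.push_back(line);
        if (chunk.size() >= max_lines_per_run) flush();
    }
    flush();
    return runs;
}

// Phase 2: k-way merge of the sorted runs into one sorted output file,
// keeping only one line per run in memory at any time.
static void merge_runs(const std::vector<std::string>& runs, const std::string& output)
{
    struct Item { std::string line; std::size_t src; };
    auto cmp = [](const Item& a, const Item& b) { return a.line > b.line; }; // min-heap
    std::priority_queue<Item, std::vector<Item>, decltype(cmp)> heap(cmp);

    std::vector<std::ifstream> files;
    for (const std::string& r : runs) files.emplace_back(r);

    std::string line;
    for (std::size_t i = 0; i < files.size(); ++i)
        if (std::getline(files[i], line)) heap.push({line, i});

    std::ofstream out(output);
    while (!heap.empty()) {
        Item top = heap.top();
        heap.pop();
        out << top.line << '\n';
        if (std::getline(files[top.src], line)) heap.push({line, top.src});
    }
}

int main()
{
    // Illustrative sizes and file names only.
    std::vector<std::string> runs = make_sorted_runs("records.txt", 1000000);
    merge_runs(runs, "records_sorted.txt");
    return 0;
}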
Upvotes: 1