Elvis

Reputation: 125

Appropriate data structure for fast retrieval (data size: around 200,000 values, all strings)

I have a large data set of around 200,000 values, all of them strings. Which data structure should I use so that searching and retrieval are fast? Insertion happens only once, so even if insertion is slow it wouldn't matter much.

A hash map could be one solution, but what are the other choices? Thanks.

Edit: some pointers:

1. I am looking for exact matches, not partial ones.
2. I have to accomplish this in PHP.
3. Is there any way I can keep this amount of data in a cache, in the form of a tree or some other format?

Upvotes: 3

Views: 638

Answers (5)

Ed Staub

Reputation: 15690

Use a hashmap. Assuming an implementation similar to Java's and a normal collision rate, retrieval is O(m), where m is the key length - the main cost is computing the hashcode and then one string compare. That's hard to beat.

For any tree/trie implementation, factor in the hard-to-quantify costs of the additional pipeline stalls caused by additional non-localized data fetches. The only reason to use one (a trie, in particular) would be to possibly save memory. Memory will be saved only with long strings. With short strings, the memory savings from reduced character storage are more than offset by all the additional pointers/indices.

Fine print: worse behavior can occur when there are lots of hashcode collisions due to an ill-chosen hashing function. Your mileage may vary. But it probably won't.

I don't do PHP - there may be language characteristics that skew the answer here.
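
For what it's worth, PHP's associative arrays are hash tables under the hood, so this answer maps directly onto the language. A minimal sketch of the exact-match lookup ($strings and $needle are illustrative names):

```php
<?php
// One-time build: use the strings themselves as keys.
// PHP associative arrays are hash tables, so lookups are O(1) on average.
$index = [];
foreach ($strings as $s) {
    $index[$s] = true;
}

// Exact-match lookup: isset() avoids any scan of the data.
$found = isset($index[$needle]); // true if $needle is in the data set
```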

Upvotes: 0

Juraj Blaho

Reputation: 13471

Maybe a trie data structure.

A trie, or prefix tree, is an ordered tree data structure that is used to store an associative array where the keys are usually strings.
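
A minimal sketch of a trie in PHP, using nested arrays as nodes (the "\0" end-of-word sentinel and the function names are arbitrary choices, and keys are assumed non-empty):

```php
<?php
// Build a trie as nested arrays; "\0" marks a complete word.
function trieInsert(array &$root, string $word): void {
    $node = &$root;
    foreach (str_split($word) as $ch) {
        if (!isset($node[$ch])) {
            $node[$ch] = [];
        }
        $node = &$node[$ch];
    }
    $node["\0"] = true;
}

// Walk the trie one character at a time: O(M) for a string of length M.
function trieContains(array $root, string $word): bool {
    $node = $root;
    foreach (str_split($word) as $ch) {
        if (!isset($node[$ch])) {
            return false;
        }
        $node = $node[$ch];
    }
    return isset($node["\0"]);
}
```

Usage would be one trieInsert() per string during the one-time build, then trieContains($trie, $needle) per lookup.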

Upvotes: 1

ex0du5

Reputation: 2634

You really should consider not using maps or hash dictionaries if all you need is a string lookup. When using those, your complexity guarantees for N items in a lookup of string size M are O(M x log(N)) or, best amortised for the hash, O(M) with a large constant multiplier. It is much more efficient to use an acyclic deterministic finite automaton (ADFA) for basic lookups, or a trie if there is a need to associate data. These walk the data structure one character at a time, giving O(M) with a very small constant.

Basically, you want a data structure that parses your string as it is consumed by the data structure, not one that must do full string compares at each node of the lookup. The common orders of complexity you see thrown around for red-black trees and such assume O(1) compares, which is not true for strings. Strings are O(M), and that propagates to every compare used.
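
To put rough numbers on that (assuming strings of length M ≈ 20): for N = 200,000, a balanced tree performs about log2(200,000) ≈ 18 comparisons, each of which may scan up to 20 characters, so on the order of 360 character operations per lookup; a trie or ADFA walk touches each of the ~20 characters roughly once.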

Upvotes: 1

MduSenthil

Reputation: 2077

You can use B+ trees if you want to ensure your search is minimal, at the cost of insertion time.

You can also try bucket push and search.
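
Taking "bucket push and search" to mean grouping the strings by a cheap key and scanning only the matching bucket (an interpretation, not something this answer spells out), a minimal PHP sketch, bucketing by string length:

```php
<?php
// One-time build: push each string into a bucket keyed by its length.
$buckets = [];
foreach ($strings as $s) {
    $buckets[strlen($s)][] = $s;
}

// Search only the bucket the needle could possibly be in.
$found = in_array($needle, $buckets[strlen($needle)] ?? [], true);
```

This is effectively a coarse hash table: with only a few dozen distinct lengths the buckets stay large, so a key that spreads the data better would be needed in practice.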

Upvotes: 0

Hari Menon

Reputation: 35445

Use a TreeMap in that case. Search and retrieval will be O(log n) guaranteed. With a HashMap, lookup is O(1) on average but can degrade to O(n) in the worst case when many keys hash to the same bucket.

For 200,000 values it probably won't matter much, though, unless you are working under hardware constraints. I have used HashMaps with 2 million strings and they were still fast enough. YMMV.
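
PHP has no built-in TreeMap, but a sorted array plus binary search gives the same guaranteed O(log n) lookup. A minimal sketch under that substitution ($strings and $needle are illustrative names):

```php
<?php
// One-time: sort the strings so binary search applies.
sort($strings);

// Classic binary search: O(log n) comparisons, each an O(M) string compare.
function binarySearch(array $sorted, string $needle): bool {
    $lo = 0;
    $hi = count($sorted) - 1;
    while ($lo <= $hi) {
        $mid = intdiv($lo + $hi, 2);
        $cmp = strcmp($sorted[$mid], $needle);
        if ($cmp === 0) {
            return true;
        } elseif ($cmp < 0) {
            $lo = $mid + 1;
        } else {
            $hi = $mid - 1;
        }
    }
    return false;
}
```

The O(n log n) sort happens once, which fits the one-time-insertion constraint in the question.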

Upvotes: 0
