Lance Pollard

Reputation: 79208

What is the fastest way to lookup an item from a small set of items by key?

Say I have a class with a fields array. The fields each have a name. Basically, like a SQL table.

 class X {
   foo: String
   bar: String
   ...
 }

What is the way to construct a data structure and algorithm to fetch a field by key such that it is (a) fast, in terms of number of operations, and (b) minimal, in terms of memory / data-structure size?

Obviously if you know the index of the field the fastest would be to lookup the field by index in the array. But I need to find these by key.

Now, the number of keys will be relatively small for each class. In this example there are only 2 keys/fields.

One way to do this would be to create a hash table, such as this one in JS. You give it the key, and it iterates through each character in the key, running it through some mixing function. But the cost of this is proportional to the length of the key. That's not too bad for the kinds of field names I'm expecting, which shouldn't be very long; say they are usually no longer than 100 characters.
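As a concrete sketch of that cost (this is my own minimal version, not the implementation linked above), a tiny per-class hash table might look like this; note the hash walks every character of the key:

```javascript
// Mix each character of the key into an integer, then take it modulo
// the table size to pick a bucket. Cost is proportional to key length.
function hashKey(key, tableSize) {
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) >>> 0; // simple polynomial mix
  }
  return h % tableSize;
}

// Build a small table of buckets for one class's field names.
function buildTable(names, tableSize) {
  const buckets = Array.from({ length: tableSize }, () => []);
  for (const name of names) {
    buckets[hashKey(name, tableSize)].push(name);
  }
  return buckets;
}

function lookup(buckets, key) {
  // Collisions are resolved by scanning the (short) bucket.
  return buckets[hashKey(key, buckets.length)].find((n) => n === key);
}
```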

Another way to do this would be to create a trie. You first have to build the trie; then, on lookup, each node of the trie holds one character, so finding a field takes name.length steps.
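A minimal trie sketch (my own illustration): each node maps one character to a child node, terminal nodes carry the field's index, and lookup walks one node per character of the key:

```javascript
// Build a character trie over the field names.
function buildTrie(names) {
  const root = {};
  names.forEach((name, index) => {
    let node = root;
    for (const ch of name) {
      node = node[ch] ?? (node[ch] = {});
    }
    node.index = index; // mark the end of a field name
  });
  return root;
}

// Walk the trie one character at a time; key.length steps.
function trieLookup(root, key) {
  let node = root;
  for (const ch of key) {
    node = node[ch];
    if (!node) return undefined;
  }
  return node.index;
}
```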

But I'm wondering: since the number of fields will be small, why do we need to iterate over the characters in the key at all? A possibly simpler approach, as long as the number of fields is small, is to just iterate through the fields and do a direct string comparison against each field name.
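That linear scan is trivial to write; assuming (my convention here) each field object carries a `name` property:

```javascript
// Compare the key against each field name in turn.
// For a handful of fields this is hard to beat in practice.
function linearLookup(fields, key) {
  for (let i = 0; i < fields.length; i++) {
    if (fields[i].name === key) return fields[i];
  }
  return undefined;
}
```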

But all three of these techniques would be roughly the same in terms of number of iterations.

Is there any other type of magic that will give you the fewest number of iterations/steps?

It seems there could be a hashing algorithm that exploits the fact that the number of items in the hash table is small. You would create a new hash table for each class, giving it a "size" (the number of fields on the specific class backing this hash table). Maybe it could use this size information to construct a simple hash function that minimizes the number of iterations.
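To illustrate the idea, here is a hedged sketch in the spirit of perfect-hash generators like gperf (the names and the heuristic are mine, not an established API): since all field names are known when the class is built, search for a single character position that, combined with the key's length, already distinguishes every name. Lookup then inspects O(1) characters plus one confirming comparison, regardless of key length:

```javascript
// Find a character position that, together with string length,
// uniquely identifies every field name.
function buildPerfectIndex(names) {
  for (let pos = 0; pos <= 100; pos++) {
    const seen = new Map();
    let ok = true;
    for (let i = 0; i < names.length; i++) {
      const name = names[i];
      // Signature: length plus the character at (pos mod length).
      const sig = `${name.length}:${name.charCodeAt(pos % name.length)}`;
      if (seen.has(sig)) { ok = false; break; }
      seen.set(sig, i);
    }
    if (ok) return { pos, table: seen };
  }
  throw new Error('no single distinguishing position found');
}

function perfectLookup(names, { pos, table }, key) {
  const sig = `${key.length}:${key.charCodeAt(pos % key.length)}`;
  const i = table.get(sig);
  // One full string comparison guards against unknown keys that
  // happen to share a signature with a real field name.
  return i !== undefined && names[i] === key ? i : undefined;
}
```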

Is anything like that possible? If so, how would you do it? If not, it would be interesting to know why it's not possible to do any better than these.

Upvotes: 1

Views: 166

Answers (1)

shx2

Reputation: 64288

How "small" is the field list?

If you keep the field list sorted by key, you can use binary search.

For a very small number of fields (e.g. 4), it will perform about the same number of iterations and key comparisons as linear search, if you consider linear search's worst case. (Linear search is very efficient, in both speed and memory, at that scale.)

To beat the average case of linear search, you'd need more fields (e.g. 8 or more).

This is as memory-efficient as your linear-search solution, and more memory-efficient than the trie solution.
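A sketch of that binary search over a field list kept sorted by name (assuming, as above, each field has a `name` property):

```javascript
// Binary search: ~log2(n) key comparisons instead of up to n.
// Requires sortedFields to be sorted by name.
function binaryLookup(sortedFields, key) {
  let lo = 0, hi = sortedFields.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const name = sortedFields[mid].name;
    if (name === key) return sortedFields[mid];
    if (name < key) lo = mid + 1;
    else hi = mid - 1;
  }
  return undefined;
}
```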

Upvotes: 1
