Thomas Tempelmann

Reputation: 12043

Compression and Lookup of huge list of words

I have a huge list of multi-byte sequences (let's call them words) that I need to store in a file and be able to look up quickly. Huge means: about 2 million of them, each 10-20 bytes in length.

Furthermore, each word shall have a tag value associated with it, so that I can use that to reference more (external) data for each item (hence, a spellchecker's dictionary is not working here as that only provides a hit-test).

If this were just in memory, and if memory were plentiful, I could simply store all words in a hash map (aka dictionary, aka key-value pairs), or in a sorted list for a binary search.

However, I'd like to compress the data heavily, and I'd also prefer not to read the data into memory but rather search inside the file.

As the words are mostly based on the English language, there's a certain likelihood that certain "syllables" occur more often in the words than others - which is probably helpful for an efficient algorithm.

Can someone point me to an efficient technique or algorithm for this?

Or even code examples?

Update

I figure that a DAWG, or anything similar that routes paths into common suffixes, won't work for me, because then I won't be able to tag each complete word path with an individual value. If I were to detect common suffixes, I'd have to put them into their own dictionary (lookup table) so that a trie node could reference them, yet each node would keep its own ending node for storing that path's tag value.

In fact, that's probably the way to go:

Instead of building the tree nodes for single chars only, I could try to find often-used character sequences, and make a node for those as well. That way, single nodes can cover multiple chars, maybe leading to better compression.

Now, if that's viable, how would I actually find often-used subsequences in all my phrases? With about 2 million phrases, each usually consisting of 1-3 words, it'll be tough to try all possible substrings...
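For illustration, here is a minimal Python sketch (with made-up phrases) of one simple way to approach this: count all character n-grams up to a small maximum length and keep the most frequent ones as candidates for multi-character node labels. The function name and parameters are just illustrative.

```python
from collections import Counter

def frequent_ngrams(phrases, min_len=2, max_len=4, top=64):
    """Count character n-grams of length min_len..max_len across all
    phrases and return the most common ones; these could serve as
    candidate multi-character node labels for the trie."""
    counts = Counter()
    for phrase in phrases:
        for n in range(min_len, max_len + 1):
            for i in range(len(phrase) - n + 1):
                counts[phrase[i:i + n]] += 1
    return counts.most_common(top)

# Example with a few made-up phrases:
phrases = ["banana split", "banal remark", "canal boat"]
print(frequent_ngrams(phrases, top=5))
```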

Upvotes: 8

Views: 2222

Answers (5)

hmuelner

Reputation: 8221

Have a look at the paper "How to squeeze a lexicon". It explains how to build a minimized finite state automaton (which is just another name for a DAWG) with a one-to-one mapping of words to numbers and vice versa. Exactly what you need.
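As a rough illustration of the word-to-number mapping idea, here is a sketch on a plain trie rather than a minimized automaton (a DAWG shares suffix states, but the numbering principle is the same); the class and function names are mine, not from the paper. Each node stores how many words end in its subtree, and a word's index is obtained by summing the counts of the branches skipped on the way down.

```python
class Node:
    def __init__(self):
        self.children = {}   # char -> Node
        self.is_word = False
        self.count = 0       # number of words ending at or below this node

def build(words):
    root = Node()
    for w in words:
        node = root
        node.count += 1
        for ch in w:
            node = node.children.setdefault(ch, Node())
            node.count += 1
        node.is_word = True
    return root

def word_to_index(root, word):
    """Return the 0-based rank of `word` among all stored words."""
    index, node = 0, root
    for ch in word:
        if node.is_word:
            index += 1          # a stored word that is a proper prefix
        for c in sorted(node.children):
            if c == ch:
                break
            index += node.children[c].count  # skip earlier branches
        node = node.children[ch]
    return index

words = sorted(["banal", "banana", "band"])
root = build(words)
print([word_to_index(root, w) for w in words])  # [0, 1, 2]
```

The index can then be used directly as an offset into an array of tag values, which is what makes this mapping useful beyond a plain hit-test.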

Upvotes: 1

Zarkonnen

Reputation: 22478

Have you tried just using a hash map? The thing is, on a modern OS, virtual memory will swap unused memory segments out to disk anyway. So it may turn out that just loading it all into a hash map is actually efficient.

And as jkff points out, your list would only be about 40 MB, which is not all that much.
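A rough sketch of what "just loading it" could look like, assuming a tab-separated file of word/tag pairs (the file name and format are assumptions):

```python
def load_words(path):
    """Read tab-separated word/tag pairs into an ordinary dict."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, tag = line.rstrip("\n").split("\t")
            table[word] = tag
    return table

# Lookup is then a single dict access:
# table = load_words("words.tsv")
# tag = table.get("banana")
```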

Upvotes: 0

andrewjsaid

Reputation: 489

There exists a data structure called a trie. I believe this data structure is perfectly suited to your requirements. Basically a trie is a tree where each node is a letter and each node has child nodes. In a letter-based trie, there would be up to 26 children per node.

Depending on what language you are using, this may be easier or more compact to store as a variable-length list during construction.

This structure gives you: a) fast searching - for a word of length n, you can find the string by following n links in the tree; b) compression - common prefixes are stored only once.

Example: the words BANANA and BANAL will both share the B, A, N, A nodes, and then the last A node will have two children, L and N. Your nodes can also store other information about the word.

(http://en.wikipedia.org/wiki/Trie)
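A minimal Python sketch of such a trie, with a tag value stored at the node that ends each word (the class and method names are just illustrative):

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # char -> TrieNode
        self.tag = None     # tag value if a word ends here

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word, tag):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.tag = tag

    def lookup(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.tag

t = Trie()
t.insert("banana", 1)
t.insert("banal", 2)
print(t.lookup("banana"), t.lookup("banal"), t.lookup("band"))  # 1 2 None
```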

Andrew JS

Upvotes: 7

Niki Yoshiuchi

Reputation: 17551

I would recommend using a Trie or a DAWG (directed acyclic word graph). There is a great lecture from Stanford on doing exactly what you want here: http://academicearth.org/lectures/lexicon-case-study

Upvotes: 2

You should get familiar with indexed files.

Upvotes: 0
