Reputation: 575
How can I design an algorithm that returns the 10 most frequently used words in a document in O(n) time, assuming additional space can be used?
I can parse the document and place the words in a hash map with their counts. But then I have to sort the values to get the most frequent ones. I would also need a mapping from values back to keys, which cannot be maintained since values may repeat.
So how can I solve this?
Upvotes: 4
Views: 4142
Reputation: 12923
It may be done in O(n) if you use the correct data structure.
Consider a Node, consisting of 2 things:
- A counter.
- An array of 255 pointers to Node, one per possible character. All the pointers are initially set to NULL.

Create a root node. Define a "current" Node pointer, set it to the root node initially. Then walk through all the characters of the document and do the following:
- If the character is not a word delimiter: follow the appropriate pointer of the current Node. If it's NULL - allocate it. The current Node pointer is updated.
- If the character is a word delimiter (a space): increment the counter of the current Node. Then reset the "current" Node pointer to point to the root node.

By this you build a tree in O(n). Every element (both node and leaf) denotes a specific word, together with its counter.
Then traverse the tree to find the node with the largest counter. That is also O(n), since the number of elements in the tree is not bigger than O(n).
Update:
The last step is not mandatory. Actually the most common word may be updated during the character processing. The following is pseudocode:
#include <cstddef>  // size_t
#include <cstring>  // memset

struct Node
{
    size_t m_Counter;     // occurrences of the word that ends at this node
    Node*  m_ppNext[255]; // one child slot per possible character value
    Node*  m_pPrev;       // parent node, used to read the word back out

    Node(Node* pPrev) : m_Counter(0)
    {
        m_pPrev = pPrev;
        memset(m_ppNext, 0, sizeof(m_ppNext));
    }
    ~Node()
    {
        // _countof is MSVC-specific; sizeof(m_ppNext) / sizeof(*m_ppNext) is the portable equivalent
        for (int i = 0; i < _countof(m_ppNext); i++)
            if (m_ppNext[i])
                delete m_ppNext[i];
    }
};

Node root(NULL);
Node* pPos = &root;  // "current" node, reset to the root at every word boundary
Node* pBest = &root; // node of the most frequent word seen so far
char c;
while (0 != (c = GetNextDocumentCharacter()))
{
    if (c == ' ')
    {
        if (pPos != &root)
        {
            pPos->m_Counter++;
            if (pBest->m_Counter < pPos->m_Counter)
                pBest = pPos;
            pPos = &root;
        }
    }
    else
    {
        // slot c - 1 corresponds to character c (character 0 ends the loop above)
        Node*& pNext = pPos->m_ppNext[c - 1];
        if (!pNext)
            pNext = new Node(pPos);
        pPos = pNext;
    }
}
// pBest points to the most common word. Using pBest->m_pPrev we iterate in reverse order through its characters
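To actually print that word, one can walk the m_pPrev links back to the root; a minimal sketch (ReconstructWord and the linear scan of each parent's child array are my additions, not part of the answer):

#include <algorithm>
#include <string>

std::string ReconstructWord(const Node* pNode)
{
    std::string word;
    while (pNode->m_pPrev)  // the root's m_pPrev is NULL, so the walk stops there
    {
        const Node* pParent = pNode->m_pPrev;
        for (int i = 0; i < 255; i++)
        {
            if (pParent->m_ppNext[i] == pNode)
            {
                word += static_cast<char>(i + 1);  // slot i holds character i + 1
                break;
            }
        }
        pNode = pParent;
    }
    std::reverse(word.begin(), word.end());  // characters were collected last-to-first
    return word;
}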
Upvotes: 2
Reputation: 11522
The fastest approach is to use a radix tree. You can store the counts of words at the leaves of the radix tree. Keep a separate list of the 10 most frequent words and their occurrence counts, along with a variable that stores the threshold count necessary to get into this list. Update this list as items are added to the tree.
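For brevity, the sketch below uses a plain (uncompressed) trie rather than a true radix tree, and assumes lowercase a-z words; TrieNode, TopTen, and AddWord are illustrative names. It shows the bookkeeping the answer describes: a running top-10 list plus the threshold count needed to enter it, updated whenever a counter changes.

#include <algorithm>
#include <string>
#include <utility>
#include <vector>

struct TrieNode
{
    long count = 0;           // occurrences of the word ending at this node
    TrieNode* next[26] = {};  // assumes lowercase a-z words only
};

// Running top-10 list: (count, word) pairs, kept sorted descending by count.
// 'threshold' is the smallest count needed to enter a full list.
struct TopTen
{
    std::vector<std::pair<long, std::string>> best;
    long threshold = 0;

    void Update(long count, const std::string& word)
    {
        if (best.size() == 10 && count < threshold)
            return;  // cannot enter the list, nothing to do
        auto it = std::find_if(best.begin(), best.end(),
            [&](const std::pair<long, std::string>& e) { return e.second == word; });
        if (it != best.end())
            it->first = count;              // already listed: refresh its count
        else
            best.emplace_back(count, word); // candidate entry
        std::sort(best.begin(), best.end(), std::greater<std::pair<long, std::string>>());
        if (best.size() > 10)
            best.resize(10);
        if (best.size() == 10)
            threshold = best.back().first;
    }
};

void AddWord(TrieNode* root, const std::string& word, TopTen& top)
{
    TrieNode* node = root;
    for (char c : word)
    {
        TrieNode*& child = node->next[c - 'a'];
        if (!child)
            child = new TrieNode;
        node = child;
    }
    top.Update(++node->count, word);  // update the top-10 list as counts change
}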
Upvotes: 2
Reputation: 5842
Maintaining a map of (word, count) will be O(n).
Once the map is built, iterate over the keys and retrieve the ten most common ones.
That's O(n) + O(n).
-- But I'm not exactly happy with this solution because of the extra memory needed.
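A minimal sketch of this approach in C++ (TopTenWords and the use of std::partial_sort are illustrative choices, not prescribed by the answer):

#include <algorithm>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Count every word, then pick the ten most frequent entries.
std::vector<std::pair<std::string, long>>
TopTenWords(const std::vector<std::string>& words)
{
    std::unordered_map<std::string, long> counts;
    for (const std::string& w : words)
        ++counts[w];  // O(n) expected

    std::vector<std::pair<std::string, long>> entries(counts.begin(), counts.end());
    std::size_t k = std::min<std::size_t>(10, entries.size());
    // Partial sort is O(m log 10) = O(m) over the m unique words.
    std::partial_sort(entries.begin(), entries.begin() + k, entries.end(),
        [](const std::pair<std::string, long>& a, const std::pair<std::string, long>& b)
        { return a.second > b.second; });
    entries.resize(k);
    return entries;
}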
Upvotes: 0
Reputation: 47269
Here's a simple algorithm:
- O(n) run-time.
- O(n) storage for the HashTable + arrays.
(Side note: You can just think of a HashTable as a dictionary: a way to store key:value pairs where keys are unique. Technically, a HashMap implies unsynchronized access, while a HashTable implies synchronized access.)
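These bounds can be met by counting words in a hash table and then bucketing them by count; the sketch below assumes that reading of "HashTable + Arrays" (TopTenByBuckets is an illustrative name):

#include <string>
#include <unordered_map>
#include <vector>

std::vector<std::string> TopTenByBuckets(const std::vector<std::string>& words)
{
    // Pass 1: hash-count every word, O(n) expected.
    std::unordered_map<std::string, std::size_t> counts;
    for (const std::string& w : words)
        ++counts[w];

    // Pass 2: bucket[c] holds the words occurring exactly c times.
    // No word occurs more than words.size() times, so this is O(n) space.
    std::vector<std::vector<std::string>> buckets(words.size() + 1);
    for (const auto& kv : counts)
        buckets[kv.second].push_back(kv.first);

    // Pass 3: scan buckets from the highest count down, collecting 10 words.
    std::vector<std::string> top;
    for (std::size_t c = words.size(); c > 0 && top.size() < 10; --c)
        for (std::size_t i = 0; i < buckets[c].size() && top.size() < 10; ++i)
            top.push_back(buckets[c][i]);
    return top;
}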
Upvotes: 5
Reputation: 331
I would use an ArrayList and a HashTable.
Here is the algorithm I am thinking of,
Loop through all the words in the document:

if (hashTable.containsKey(word)) {
    // seen before: increment this word's count
    hashTable.put(word, hashTable.get(word) + 1);
} else {
    // first occurrence: remember the word and start its count at 1
    arrayList.add(word);
    hashTable.put(word, 1);
}
After looping through the whole document,
Loop through the ArrayList:
- Retrieve the word's count from the HashTable.
- Keep a running list of the top 10 words seen so far.
The running time is O(n) to construct the HashTable and ArrayList. Building the top-10 list is O(m), where m is the number of unique words. The total is O(n + m); since n >> m, this is O(n).
Upvotes: 0