Reputation: 176
I am using HMMs for speech recognition of isolated words. I have trained the HMMs on my database, and I calculate and compare likelihood probabilities for an incoming audio signal. The problem is that different words have a different number of optimal states, which gives a different number of search paths (number of search paths = states^observations), so the probabilities can't be compared directly. How do I normalize for the effect of the different number of states?
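Roughly, my comparison looks like this (a minimal sketch; the hmmlearn-style GaussianHMM models and MFCC features are just stand-ins for my actual setup):

```python
import numpy as np
from hmmlearn import hmm

def recognize(features, word_models):
    """features: (n_frames, n_mfcc) array for the incoming utterance.
    word_models: dict mapping word -> trained GaussianHMM."""
    scores = {}
    for word, model in word_models.items():
        # score() returns the total log-likelihood log P(features | model).
        # Models with different numbers of states produce scores on different
        # scales, which is exactly the comparison problem described above.
        scores[word] = model.score(features)
    best_word = max(scores, key=scores.get)
    return best_word, scores
```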
Upvotes: 0
Views: 413
Reputation: 2507
You need either a context-free grammar or a language model (usually a probabilistic 3-gram model) to recognize utterances rather than single words. Then you use an appropriate algorithm to calculate a score for each path. I strongly recommend that you take a look at existing solutions like Kaldi or CMUSphinx.
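For intuition, scoring a path usually means a log-domain Viterbi search over the model. Here is a minimal sketch over a discrete-observation HMM (the matrices and function are purely illustrative and not Kaldi or CMUSphinx code, which also fold in language-model weights):

```python
import numpy as np

def viterbi_log(obs, log_start, log_trans, log_emit):
    """obs: sequence of observation indices.
    log_start: (n_states,) log initial probabilities.
    log_trans: (n_states, n_states) log transition probabilities.
    log_emit:  (n_states, n_symbols) log emission probabilities.
    Returns the best path's log score and the best state path."""
    n_states = log_start.shape[0]
    # Best log score of any path ending in each state after the first frame.
    delta = log_start + log_emit[:, obs[0]]
    backptr = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        # cand[i, j] = score of the best path ending in state i, extended to j.
        cand = delta[:, None] + log_trans
        backptr[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + log_emit[:, obs[t]]
    # Backtrace the single best state sequence.
    path = [int(delta.argmax())]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return float(delta.max()), path[::-1]
```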
Upvotes: 4