Reputation: 75
I've got two questions for using the Windows Speech API.
First: I've set up my speech recognizer to detect sentences with a specific structure, namely a verb followed by a noun, with some wildcard content in between. However, I would also like it to recognize "Help" and "Exit" commands that do not fit this structure. How can I get the grammar to recognize another, fundamentally different, structure?
Second: I am using SemanticResultValue to analyze the content of my sentences. I want users to be able to say several different words for the same verb; for example, "Go," "Walk," and "Run" should all translate to the same action in the system. How do I map multiple words to the same SemanticResultValue?
Reputation: 13932
1) Multiple grammars would be the obvious solution here; one grammar for your verb/noun, and a separate grammar for pure verbs.
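A SpeechRecognitionEngine can hold any number of loaded grammars at once, and a result can come from any of them. A minimal sketch of that setup is below; the specific verbs, nouns, and grammar names are placeholders, not from the question:

```csharp
using System.Speech.Recognition;

var recognizer = new SpeechRecognitionEngine();

// Grammar 1: verb, wildcard, noun (example word lists are illustrative)
var verbNoun = new GrammarBuilder();
verbNoun.Append(new Choices("go", "walk", "run"));
verbNoun.AppendWildcard();
verbNoun.Append(new Choices("door", "window", "hallway"));
recognizer.LoadGrammar(new Grammar(verbNoun) { Name = "VerbNounCommands" });

// Grammar 2: standalone keywords that don't fit the verb/noun pattern
var keywords = new GrammarBuilder(new Choices("help", "exit"));
recognizer.LoadGrammar(new Grammar(keywords) { Name = "Keywords" });

// In the SpeechRecognized handler, e.Result.Grammar.Name tells you
// which of the two grammars produced the match.
```

Checking `e.Result.Grammar.Name` (or comparing `e.Result.Grammar` by reference) lets the handler dispatch "Help"/"Exit" separately from ordinary commands.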
2) The SemanticResultValue constructor that takes a GrammarBuilder parameter, SemanticResultValue(GrammarBuilder, Object), would be appropriate here: a Choices listing your synonyms converts implicitly to a GrammarBuilder, so all of those words resolve to the single value you pass.
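As a sketch of that constructor in use (the key name "action" and the value "move" are illustrative choices, not part of the question):

```csharp
using System.Speech.Recognition;

// "go", "walk", and "run" all yield the single semantic value "move".
// Choices converts implicitly to GrammarBuilder, matching the
// SemanticResultValue(GrammarBuilder, Object) constructor.
var moveSynonyms = new Choices("go", "walk", "run");
var moveValue = new SemanticResultValue(moveSynonyms, "move");

var builder = new GrammarBuilder();
builder.Append(new SemanticResultKey("action", moveValue));

// In the SpeechRecognized handler, whichever synonym was spoken:
//   e.Result.Semantics["action"].Value is "move"
var grammar = new Grammar(builder);
```

Repeating the pattern with another Choices/SemanticResultValue pair (say, synonyms for "take") and combining them in one Choices gives each verb group its own semantic value.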