Reputation: 4729
I have been doing a bit of experimentation with Amazon Lex, but I can't get voice to work in the console at all.
I'm using the Flower bot demo with the associated Python Lambda function connected and working with text, in Chrome on a Mac (macOS 10.13.1).
I am able to log any text entered into the test bot on the console from the Lambda function along with the rest of the event.
By going to the Monitoring tab of the bot in the console I can see utterances from previous days (there seems to be a one-day delay before utterances appear, whether missed or detected; no idea why…).
Yesterday I made a bunch of attempts to use voice, and now that it is the next day they appear in the utterance table as a single blank entry with a count of 13. I'm not sure whether this means the audio isn't getting to Lex or Lex can't understand me.
I'm a native English speaker with a generic American accent (very few people can identify where I'm from more specifically than the U.S.) and Siri has no trouble understanding me.
My suspicion is that something is either blocking or garbling the audio before it gets to Lex, but I don't know how to find out what Lex is hearing in order to check that.
Are there troubleshooting tools I haven't found yet? Is there a way to get a live feed of what is being fed to a bot under test? (All I see for the test bot is the inspect-response section; there is nothing for inspecting the request.)
Upvotes: 2
Views: 810
Reputation: 428
Go to the Monitoring tab of your bot in the Amazon Lex console and click "Utterances"; there you will find lists of "Missed" and "Detected" utterances. From the missed utterances table, you can add any of them to an intent.
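If you'd rather pull those utterance statistics programmatically instead of through the console, the Lex Model Building API has a GetUtterancesView operation. Here is a rough sketch using the AWS SDK for Node (CoffeeScript, to match the example in the other answer; the credential and bot-name variables are placeholders you would define yourself):
AWS = require 'aws-sdk'

# Model-building client (separate from the LexRuntime client used to post content)
lexmodels = new AWS.LexModelBuildingService
  accessKeyId: awsLexAccessKey
  secretAccessKey: awsLexSecretAccessKey
  region: awsLexRegion

params =
  botName: awsLexBot
  botVersions: ['$LATEST']
  statusType: 'Missed'      # or 'Detected'

lexmodels.getUtterancesView params, (err, data) ->
  if err?
    console.error err
  else
    # Results are grouped by bot version; each entry lists utterance text, count, and dates
    for versionResult in data.utterances
      for u in versionResult.utterances
        console.log "#{u.utteranceString} - heard #{u.count} time(s), last on #{u.lastUtteredDate}"
Note that this reads the same once-a-day statistics as the console, so it is subject to the same delay and won't give you a live feed.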
Upvotes: 0
Reputation: 2655
In addition to @sid8491's answer, you can get the text that Lex parsed from your speech in the response it returns. When using the Node SDK, it is in the field data.inputTranscript.
CoffeeScript example:
AWS = require 'aws-sdk'

# Lex runtime client; the credential and region variables are defined elsewhere
lexruntime = new AWS.LexRuntime
  accessKeyId: awsLexAccessKey
  secretAccessKey: awsLexSecretAccessKey
  region: awsLexRegion
  endpoint: "https://runtime.lex.us-east-1.amazonaws.com"

# Send raw 16 kHz, 16-bit mono PCM audio and ask for an MP3 reply
params =
  botAlias: awsLexAlias
  botName: awsLexBot
  userId: 'some-user-id'   # required by PostContent; any stable session identifier
  contentType: 'audio/x-l16; sample-rate=16000; channels=1'
  inputStream: speechData
  accept: 'audio/mpeg'

lexruntime.postContent params, (err, data) ->
  if err?
    log.error err
  else
    # inputTranscript is the text Lex transcribed from the audio it received
    log.debug "Lex heard: #{data.inputTranscript}"
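Note that inputTranscript only comes back from PostContent (the speech call), not PostText. If it comes back empty or garbled while your text tests work, that would suggest the audio capture or encoding on your side is the problem (the contentType above expects raw 16 kHz, 16-bit, mono PCM) rather than Lex's understanding.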
Upvotes: 0
Reputation: 6800
Regarding the one-day delay in the appearance of utterances, according to the AWS documentation:
Utterance statistics are generated once a day, generally in the evening. You can see the utterance that was not recognized, how many times it was heard, and the last date and time that the utterance was heard. It can take up to 24 hours for missed utterances to appear in the console.
Upvotes: 1