Reputation: 35
I am trying to use Direct Line Speech (DLS) in my custom voice app. The app has access to real-time audio streams (PCM encoded) that I want to send directly to Direct Line Speech so it can handle back-and-forth communication in real time.
From the DLS client sample code (https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client), I see the ListenOnceAsync() method on DialogServiceConnector (in the Microsoft.CognitiveServices.Speech.Dialog namespace), but it looks like it captures audio from the local microphone.
However, judging by the reply here (Is new ms botbuilder directline speech good fit for call center scenario?), it seems I can send an audio stream to DLS directly. I can't find any documentation on this. Can someone shed some light on how to achieve it?
Upvotes: 2
Views: 718
Reputation: 12264
I believe your answer lies in the Microsoft.CognitiveServices.Speech.Audio.AudioConfig class. Have a look at this line in the Direct Line Speech client:
this.connector = new DialogServiceConnector(config, AudioConfig.FromDefaultMicrophoneInput());
AudioConfig provides many options besides FromDefaultMicrophoneInput. I suspect you'll want to use one of the three FromStreamInput overloads. If you do that, ListenOnceAsync will use your stream instead of the microphone.
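As a rough illustration (not from the sample), here is a minimal C# sketch that pushes PCM into a stream-backed AudioConfig instead of the microphone. It assumes 16 kHz, 16-bit, mono PCM and uses placeholder subscription key/region values; adapt the format and credentials to your environment.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

class StreamToDirectLineSpeech
{
    static async Task Main()
    {
        // Describe the PCM your voice app produces (assumed: 16 kHz, 16-bit, mono).
        var format = AudioStreamFormat.GetWaveFormatPCM(16000, 16, 1);

        // Push stream: your app writes raw PCM frames into it as they arrive.
        var pushStream = AudioInputStream.CreatePushStream(format);
        var audioConfig = AudioConfig.FromStreamInput(pushStream);

        // Placeholder key/region; use your own Speech resource values.
        var config = BotFrameworkConfig.FromSubscription("YourSpeechKey", "YourRegion");

        var connector = new DialogServiceConnector(config, audioConfig);

        connector.Recognized += (s, e) =>
            Console.WriteLine($"Recognized: {e.Result.Text}");

        connector.ActivityReceived += (s, e) =>
        {
            // Bot replies arrive here; e.Audio contains TTS audio when the bot speaks.
            Console.WriteLine($"Activity received, has audio: {e.HasAudio}");
        };

        await connector.ConnectAsync();

        // Feed your real-time PCM buffers into the push stream from your media
        // pipeline, e.g.: pushStream.Write(pcmBuffer, bytesRead);

        // Run a single listen/response turn against the pushed audio.
        await connector.ListenOnceAsync();
    }
}

With this wiring, ListenOnceAsync reads from whatever you write into the push stream rather than opening the default microphone.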
Upvotes: 2