Reputation: 25
I'm trying to use Azure's Cognitive Services Speech to Text, following the GitHub example in Python: https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/258949599ec32c8a7b03fb38a53f2033010b6b26/samples/python/console/speech_sample.py#L203
I couldn't find in the documentation how to change the speech recognition language; the default is English and I want to transcribe an audio file in Spanish. I think it's done with the SpeechRecognizer class, but I don't understand how to use it properly.
This is the code example from GitHub that I'm using:
import time

import azure.cognitiveservices.speech as speechsdk

# speech_key, service_region and weatherfilename are defined earlier in the sample file

def speech_recognize_continuous_from_file():
    """performs continuous speech recognition with input from an audio file"""
    # <SpeechContinuousRecognitionWithFile>
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

    done = False

    def stop_cb(evt):
        """callback that stops continuous recognition upon receiving an event `evt`"""
        print('CLOSING on {}'.format(evt))
        speech_recognizer.stop_continuous_recognition()
        nonlocal done
        done = True

    # Connect callbacks to the events fired by the speech recognizer
    speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
    speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
    speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
    speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
    speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
    # stop continuous recognition on either session stopped or canceled events
    speech_recognizer.session_stopped.connect(stop_cb)
    speech_recognizer.canceled.connect(stop_cb)

    # Start continuous speech recognition
    speech_recognizer.start_continuous_recognition()
    while not done:
        time.sleep(.5)
    # </SpeechContinuousRecognitionWithFile>
Upvotes: 0
Views: 1734
Reputation: 2973
The SpeechConfig constructor takes a speech_recognition_language argument. You must specify the language in BCP-47 format.
language = 'es-ES'  # or perhaps es-MX?
speech_config = speechsdk.SpeechConfig(subscription=speech_key,
                                       region=service_region,
                                       speech_recognition_language=language)
Additional documentation for the class is here.
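For reference, here is a minimal sketch of how that argument slots into the continuous-recognition sample from the question (the function name is illustrative; speech_key, service_region, and weatherfilename are assumed to be defined as in the sample):
import time

import azure.cognitiveservices.speech as speechsdk

def speech_recognize_continuous_from_file_in_spanish():
    """Continuous recognition from a file with the recognition language set to Spanish."""
    speech_config = speechsdk.SpeechConfig(subscription=speech_key,
                                           region=service_region,
                                           speech_recognition_language='es-ES')
    audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                                   audio_config=audio_config)

    done = False

    def stop_cb(evt):
        """Stop continuous recognition when the session stops or is canceled."""
        speech_recognizer.stop_continuous_recognition()
        nonlocal done
        done = True

    # Print only the final recognized text for each utterance
    speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt.result.text)))
    speech_recognizer.session_stopped.connect(stop_cb)
    speech_recognizer.canceled.connect(stop_cb)

    speech_recognizer.start_continuous_recognition()
    while not done:
        time.sleep(.5)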
Upvotes: 1