Reputation: 1449
I've been researching Azure Speech Service and have combed through the available documentation and online resources, but I haven't found a clear answer: does Azure Speech Service use a single-model architecture, or does it operate with multiple models? Specifically, do the Speech-to-Text (STT) and Text-to-Speech (TTS) models function independently, without sharing training data, or is there some level of interconnection between them? Any insights would be greatly appreciated!
Upvotes: 0
Views: 34