I am using Google Cloud TTS in my application to play back text with a more natural voice. I write Cloud Functions in Node.js, and the text is sent from the client side. Currently I use one of the existing Google Cloud TTS voices to generate the audio, and it works fine.
Now my issue is: I want the audio to be generated with my own voice, a recording of which is in cloud storage. I recorded my voice and uploaded it to Firebase Storage. I also have a profile that was generated with AI, and I want that profile read aloud in my voice using Google Cloud TTS. Is that possible?
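For context, the client currently just POSTs the text to the function and receives the download URL back. A minimal sketch of that call (the function URL here is a hypothetical placeholder, not my real endpoint):

// Builds the POST options the cloud function expects
// (the body shape matches the server code below).
function buildTtsRequest(text) {
  return {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({text}),
  };
}

// Hypothetical deployed endpoint; replace with your own function URL.
async function requestSpeech(text) {
  const res = await fetch(
    'https://REGION-PROJECT.cloudfunctions.net/tts',
    buildTtsRequest(text),
  );
  // On success the function responds with {Status: 'Success', URL: <download URL>}
  return res.json();
}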
Below is the code that generates the audio with an existing Google Cloud voice:
// Setup assumed by this snippet (the client is created per invocation
// so it can be closed in the finally block below)
const textToSpeech = require('@google-cloud/text-to-speech');
const admin = require('firebase-admin');
const {getDownloadURL} = require('firebase-admin/storage');

const client = new textToSpeech.TextToSpeechClient();

// Construct the request
const request = {
  input: {text: text},
  // Select the language and SSML voice gender (optional)
  voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
  // Select the type of audio encoding
  audioConfig: {audioEncoding: 'MP3'},
};

try {
  // Perform the text-to-speech request
  const [response] = await client.synthesizeSpeech(request);

  // Create a unique file name with a timestamp
  const fileName = `TextToSpeech/${Date.now()}.mp3`;

  // Create a reference to the file in Firebase Storage
  const file = admin.storage().bucket().file(fileName);

  // Upload the audio content to Firebase Storage
  await file.save(response.audioContent);
  const downloadURL = await getDownloadURL(file);

  console.log('Audio content written to Firebase Storage:', fileName);
  console.log('downloadURL', downloadURL);
  res.status(200).send({Status: 'Success', URL: downloadURL});
} catch (error) {
  console.error('Error synthesizing or uploading speech:', error);
  res.status(400).send({Status: 'failed', Error: error});
} finally {
  // Close the Text-to-Speech client (optional, but recommended for resource management)
  await client.close();
}