Reputation: 167
I have a Dialogflow agent that is normally able to speak. However, when inside a function (which calls out to the Spotify API), it does not speak anything I write inside an "agent.add()".
What makes it even stranger is that, in my Firebase console for the function, the output of the Spotify API call is actually recorded when inside a "console.log". This means that the Spotify API call works as normal, but the Dialogflow agent cannot read out the result of the Spotify API call - and I have no idea why (important code below).
/**
 * ---------------------------Google Assistant Fulfillment----------------------------------------------------------------------------------------
 * Below is the Dialogflow Firebase fulfillment code which controls what happens when various intents happen:
 */
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({request, response});

  /**
   * Function controls when the user replies 'yes' to 'Would you like to hear an angry song?'.
   * Uses the random number within the bounds of the angry songs to select and recommend a song
   * for the user.
   * @param agent The Dialogflow agent
   * @returns {Promise<admin.database.DataSnapshot | never>} The song of the desired emotion.
   */
  //4
  async function playAngrySong(agent) {
    return admin.database().ref(`${randomNumber}`).once('value').then((snapshot) => {
      // Get the song, artist and spotify uri (with and without the preceding characters) from the Firebase Realtime Database
      const song = snapshot.child('song').val();
      const artist = snapshot.child('artist').val();
      const spotify_uri = snapshot.child('spotifyCode').val();
      const just_uri = snapshot.child('spotifyCode').val();
      // Agent vocalises the retrieved song to the user
      agent.add(`I recommend ${song} by ${artist}`);
      var tempo = '';
      agent.add(`Here is the tempo for the song (before the getAudioAnalysisForTrack call): ${tempo}`);
      /**
       * Callout to the Spotify api using the spotify-web-api node package. LINK PACKAGE.
       * Agent vocalises the analysis extracted on the track.
       */
      Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN').then(
        function (data) {
          var analysis = console.log('Analyser Version', data.body.meta.analyzer_version);
          var temp = console.log('Track tempo', data.body.track.tempo);
          tempo = data.body.track.tempo;
          agent.add(
            `The track's tempo is, ${tempo}, does this sound good or would you prefer something else?`
          );
          var textResponse = `The track's tempo is, ${tempo}, does this sound good or would you prefer something else?`;
          agent.add(textResponse);
          agent.add(`Here is the song's tempo: ${tempo}`);
          return;
        },
        function (err) {
          console.error(err);
        }
      );
      // agent.add(`${agentSays}`);
      agent.add(`Here is the tempo for the song: ${tempo}`);
    });
  }
});
So in the above code, the user is asked by Google if they want a recommendation for an angry song. They say 'yes', which runs the function 'playAngrySong'. A song is selected from a database and the user is told the recommended song, e.g. "I recommend Suck My Kiss by Red Hot Chili Peppers". From this point in the code onwards (where it says var tempo), the agent does not speak anymore (by text or voice).
The console.log lines are, however, written to the function logs:
var analysis = console.log('Analyser Version', data.body.meta.analyzer_version);
var temp = console.log('Track tempo', data.body.track.tempo);
Lastly, Google support sent this in reply to my concern (and have not emailed me back since) - does anyone know what I should do based on their suggestion? I am new to JavaScript, so I have tried adding the 'async' keyword before the function (as shown in the code here), but I may have been wrong in thinking this was the right way to use it.
Upvotes: 1
Views: 242
Reputation: 167
In addition to the problem above, it turned out that the agent never 'reached' the speaking part of the code, because the function has to finish executing before the agent can say what the response of the API call is. I learnt that any API calls must complete within the 5-second response window that Dialogflow gives you, otherwise the request fails or the agent stays mute. So make sure the intents are well mapped out - for example, do an API call needed by a future intent in an earlier intent and store the results in a class variable for later use. This is what I did, and now everything works!
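For anyone hitting the same thing, here is a rough, untested sketch of that pattern. It assumes the dialogflow-fulfillment WebhookClient and the spotify-web-api-node client (named Spotify) from the question; the handler names recommendAngrySong and tellTempo are made up for illustration:
// Sketch only: the earlier intent does the slow Spotify call and caches the
// result in a module-level ("class") variable, so the follow-up intent can
// answer instantly without another external call.
let cachedTempo = null;

// Earlier intent: recommend the song and fetch the tempo now.
async function recommendAngrySong(agent) {
  const snapshot = await admin.database().ref(`${randomNumber}`).once('value');
  const song = snapshot.child('song').val();
  const artist = snapshot.child('artist').val();
  agent.add(`I recommend ${song} by ${artist}. Would you like to hear its tempo?`);

  // Awaiting here keeps the call inside this turn's response window.
  const analysis = await Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN');
  cachedTempo = analysis.body.track.tempo;
}

// Follow-up intent: no external call needed, just read the stored value.
function tellTempo(agent) {
  if (cachedTempo !== null) {
    agent.add(`The track's tempo is ${cachedTempo}, does this sound good or would you prefer something else?`);
  } else {
    agent.add(`Sorry, I couldn't get the tempo for that track.`);
  }
}
Note that a module-level variable only survives while the same Cloud Function instance handles both turns; something more durable (an outgoing Dialogflow context or the Realtime Database) is safer, but the idea is the same.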
Upvotes: 0
Reputation: 86
Your function returns a Promise<void> while you require a Promise<DataSnapshot>. The promise from admin.database().ref(`${randomNumber}`).once('value') has already resolved inside your playAngrySong function. I would refactor your code to something like the sample below. Mind you that the code is untested.
/**
 * ---------------------------Google Assistant Fulfillment----------------------------------------------------------------------------------------
 * Below is the Dialogflow Firebase fulfillment code which controls what happens when various intents happen:
 */
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({request, response});

  /**
   * Function controls when the user replies 'yes' to 'Would you like to hear an angry song?'.
   * Uses the random number within the bounds of the angry songs to select and recommend a song
   * for the user.
   * @returns {Promise<void>}
   */
  //4
  async function playAngrySong(agent) {
    let tempo = '';
    try {
      const snapshot = await admin.database().ref(`${randomNumber}`).once('value');
      // Get the song, artist and spotify uri (with and without the preceding characters) from the Firebase Realtime Database
      const song = snapshot.child('song').val();
      const artist = snapshot.child('artist').val();
      const spotify_uri = snapshot.child('spotifyCode').val();
      const just_uri = snapshot.child('spotifyCode').val();
      agent.add(`I recommend ${song} by ${artist}`);
      agent.add(`Here is the tempo for the song (before the getAudioAnalysisForTrack call): ${tempo}`);
    } catch (exception) {
      throw {
        message: 'Failed to read song info',
        innerException: exception
      };
    }

    /**
     * Callout to the Spotify api using the spotify-web-api node package. LINK PACKAGE.
     * Agent vocalises the analysis extracted on the track.
     */
    try {
      const audioAnalysis = await Spotify.getAudioAnalysisForTrack('4AKUOaCRcoKTFnVI9LtsrN');
      console.log('Analyser Version', audioAnalysis.body.meta.analyzer_version);
      console.log('Track tempo', audioAnalysis.body.track.tempo);
      tempo = audioAnalysis.body.track.tempo;
      agent.add(`The track's tempo is, ${tempo}, does this sound good or would you prefer something else?`);
      agent.add(`Here is the song's tempo: ${tempo}`);
    } catch (exception) {
      throw {
        message: 'Failed to connect to Spotify',
        innerException: exception
      };
    }
  }

  playAngrySong(agent)
    .then(x => {
      // add your logic
      response.status(200).send();
    })
    .catch(x => {
      // add error handling
      response.status(400).send(x.message);
    });
});
I'd say it is even better to break this down into smaller functions (e.g. databaseAccess, SpotifyConnect), but that is beyond the scope of this question.
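As a rough, untested illustration of that split (the helper names getRecommendedSong and getTrackTempo are made up), it could look something like this:
// Database access helper: returns the recommended song's details.
async function getRecommendedSong(ref) {
  const snapshot = await admin.database().ref(ref).once('value');
  return {
    song: snapshot.child('song').val(),
    artist: snapshot.child('artist').val(),
    spotifyUri: snapshot.child('spotifyCode').val()
  };
}

// Spotify helper: returns just the tempo for a track.
async function getTrackTempo(trackId) {
  const audioAnalysis = await Spotify.getAudioAnalysisForTrack(trackId);
  return audioAnalysis.body.track.tempo;
}

// The intent handler then only orchestrates the helpers and the responses.
async function playAngrySong(agent) {
  const recommendation = await getRecommendedSong(`${randomNumber}`);
  agent.add(`I recommend ${recommendation.song} by ${recommendation.artist}`);
  const tempo = await getTrackTempo('4AKUOaCRcoKTFnVI9LtsrN');
  agent.add(`The track's tempo is ${tempo}, does this sound good or would you prefer something else?`);
}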
Upvotes: 1