Reputation: 1148
I want to add suggested input options (or cards or tags) like in the following picture in Dialogflow, which I will later integrate with Google Assistant. Can anyone help me with where to begin? I have already looked at Slot Filling, Contexts, and Follow-up Intents. Thanks in advance.
Upvotes: 0
Views: 1237
Reputation: 50701
You likely want to use a Carousel rich response.
If you're using the actions-on-google library, your code might look something like this:
// Carousel and Image come from the actions-on-google library; this snippet
// assumes it runs inside an intent handler that provides 'conv'. The
// SELECTION_KEY_* and IMG_URL_* constants are placeholders for your own
// option keys and image URLs.
const {Carousel, Image} = require('actions-on-google');

// The Assistant expects a simple response in the same turn as the carousel.
conv.ask('Please choose an item:');
conv.ask(new Carousel({
  items: {
    // Add the first item to the carousel
    [SELECTION_KEY_ONE]: {
      synonyms: [
        'synonym of title 1',
        'synonym of title 2',
        'synonym of title 3',
      ],
      title: 'Title of First Carousel Item',
      description: 'This is a description of a carousel item.',
      image: new Image({
        url: IMG_URL_AOG,
        alt: 'Image alternate text',
      }),
    },
    // Add the second item to the carousel
    [SELECTION_KEY_GOOGLE_HOME]: {
      synonyms: [
        'Google Home Assistant',
        'Assistant on the Google Home',
      ],
      title: 'Google Home',
      description: 'Google Home is a voice-activated speaker powered by ' +
        'the Google Assistant.',
      image: new Image({
        url: IMG_URL_GOOGLE_HOME,
        alt: 'Google Home',
      }),
    },
    // Add the third item to the carousel
    [SELECTION_KEY_GOOGLE_PIXEL]: {
      synonyms: [
        'Google Pixel XL',
        'Pixel',
        'Pixel XL',
      ],
      title: 'Google Pixel',
      description: 'Pixel. Phone by Google.',
      image: new Image({
        url: IMG_URL_GOOGLE_PIXEL,
        alt: 'Google Pixel',
      }),
    },
  },
}));
If you're using the multivocal library, your configuration for the response might look something like this:
Local: {
  en: {
    Response: {
      "Action.options.get": {
        Template: {
          Text: "Here are the results",
          Option: {
            Type: "carousel",
            Items: [
              {
                Title: "First Option",
                Body: "First Body"
              },
              {
                Title: "Second Option",
                Body: "Second Body"
              },
              {
                Title: "Third Option",
                Body: "You guessed it, third body"
              }
            ]
          }
        }
      }
    }
  }
}
If you are sending the JSON back yourself, your response should look something like this:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Choose an item"
            }
          }
        ]
      },
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "carouselSelect": {
            "items": [
              {
                "optionInfo": {
                  "key": "first title"
                },
                "description": "first description",
                "image": {
                  "url": "https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png",
                  "accessibilityText": "first alt"
                },
                "title": "first title"
              },
              {
                "optionInfo": {
                  "key": "second"
                },
                "description": "second description",
                "image": {
                  "url": "https://lh3.googleusercontent.com/Nu3a6F80WfixUqf_ec_vgXy_c0-0r4VLJRXjVFF_X_CIilEu8B9fT35qyTEj_PEsKw",
                  "accessibilityText": "second alt"
                },
                "title": "second title"
              }
            ]
          }
        }
      }
    }
  }
}
(None of these examples include the suggestion chips, which you also indicated you wanted.)
When you handle the user's selection, your Intent should be set up to handle the actions_intent_OPTION event.
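If you're using the actions-on-google library with Dialogflow, a minimal sketch of such a handler might look like this (the Intent name 'Get Option' is just a placeholder for whatever you named the Intent that has the actions_intent_OPTION event attached):
const {dialogflow} = require('actions-on-google');
const app = dialogflow();

// 'Get Option' is a placeholder - use the name of your Dialogflow Intent
// that has the actions_intent_OPTION event attached.
app.intent('Get Option', (conv) => {
  // The OPTION argument holds the key of the carousel item the user picked,
  // for example SELECTION_KEY_GOOGLE_HOME from the carousel above.
  const option = conv.arguments.get('OPTION');
  conv.ask(`You selected ${option}.`);
});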
Upvotes: 1
Reputation: 2963
At the bottom of your screenshot are suggested input chips; they are very easy to add. Follow the instructions here: How to create suggested input in actions on google while creating an google assistant app in dialogflow?
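With the actions-on-google library, for example, a short sketch might look like this (the chip labels are just placeholders):
const {Suggestions} = require('actions-on-google');

// Suggestion chips must accompany another response in the same turn.
conv.ask('Which one would you like?');
conv.ask(new Suggestions('Option 1', 'Option 2', 'Option 3'));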
Upvotes: 0