How to use the Dialogflow webhook (v2/v1) to respond with rich messages - dialogflow-es

Scenario I'm trying to achieve:
When the user says "approvals", the bot has to call an API/webhook and respond with a list of items, each with a title and a short description:
Title 1
abcd
Title 2
efgh
and the user will then select one of them.
Integration type: website integration
I would like to use Node.js for the v2 webhook. Is there any sample specific to this?
In the v1 webhook I only saw an option to send a single text reply. Maybe v2 supports more; can anyone share a sample and some information?
return res.json({
  speech: 'text',
  displayText: 'title',
  source: 'getevents'
});

You can use the Quick Replies message object in V1.
Just reply with the following:
{
  "messages": [
    {
      "type": 2,
      "platform": "line",
      "title": "title",
      "replies": [
        "select one",
        "select one"
      ]
    }
  ]
}
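If you are doing this from Node.js, a rough Express sketch that combines the v1 fields from the question with the messages array above could look like this (the route and the reply texts are placeholders):
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // v1-style response: plain text fields plus a rich "messages" array
  return res.json({
    speech: 'Please select one',
    displayText: 'Please select one',
    messages: [
      {
        type: 2,            // quick replies
        platform: 'line',
        title: 'title',
        replies: ['select one', 'select one']
      }
    ],
    source: 'getevents'
  });
});

app.listen(3000);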

The Dialogflow webhook documentation defines the JSON payload format used when Actions on Google invokes your fulfillment through Dialogflow v2. Dialogflow doesn't natively support list rich responses, so you need to use the JSON format provided by Actions on Google.
Here is a sample of the list template:
"messages": [
{
"items": [
{
"description": "Item One Description",
"image": {
"url": "http://imageOneUrl.com"
"accessibilityText": "Image description for screen readers"
},
"optionInfo": {
"key": "itemOne",
"synonyms": [
"thing one",
"object one"
]
},
"title": "Item One"
},
{
"description": "Item Two Description",
"image": {
"url": "http://imageTwoUrl.com"
"accessibilityText": "Image description for screen readers"
},
"optionInfo": {
"key": "itemTwo",
"synonyms": [
"thing two",
"object two"
]
},
"title": "Item Two"
}
],
"platform": "google",
"title": "Title",
"type": "list_card"
}
]
You can find out more from this source link.
A tutorial on how to implement this using a fulfillment webhook can be found here.
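If you want a Node.js starting point for the v2 webhook itself, a minimal Express sketch could look like the following (the route, intent name, and reply texts are placeholders; the platform-specific list JSON above is what you attach for the Google platform):
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // In v2 the matched intent arrives under queryResult
  const intent = req.body.queryResult.intent.displayName;

  if (intent === 'approvals') {   // hypothetical intent name
    return res.json({
      fulfillmentText: 'Here are your pending approvals.',
      // Platform-specific rich responses (such as the list above)
      // are added via fulfillmentMessages or the payload field.
      fulfillmentMessages: [
        { text: { text: ['Here are your pending approvals.'] } }
      ]
    });
  }

  return res.json({ fulfillmentText: "Sorry, I didn't get that." });
});

app.listen(3000);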
But if you want to avoid this hassle, you can integrate Dialogflow with a third-party application such as Kommunicate to build every rich message. They provide ways to implement rich messages using custom payloads for Dialogflow and Google Assistant, support all the common rich message types (buttons, links, images, card carousels, etc.), and provide sample code for each. For more detailed information you can check this article.
Disclaimer: I work for Kommunicate

Related

How to trigger an action from Chip Suggestions in Dialogflow?

I want to create a ChatBot where the user (mostly) selects from Chip Suggestions.
I can't understand how to construct the Chip Suggestions in Flask.
The following yields null:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    two_chips = jsonify(
        fulfillment_text="This message is from Dialogflow's testing!",
        fulfillment_messages=[
            {
                "payload": {
                    "richContent": [
                        [
                            {
                                "type": "chips",
                                "options": [
                                    {
                                        "text": "HIV Testing Schedule",
                                        "link": "https://example.com"  # Links work, but I don't want links
                                    },
                                    {
                                        "link": "https://example.com",
                                        "text": "PreP"
                                    }
                                ]
                            }
                        ]
                    ]
                }
            }
        ])
    return two_chips
Ideally, clicking a button would trigger a new action/intent and the bot would respond with more specific text. I.e., what should I replace the link field with?
This link suggests that there is a replyMetadata field, but that seems to be specific to Kommunicate, not Google?
I looked at flask-dialogflow, but the documentation is too sparse and conflicting for me.
The chips, which require a link, should be replaced by a list. List items are clickable and trigger an intent via events (to make the bot respond with more specific text).
To get started, update your code to use lists, and add the event name you'd like to trigger in your code. Then add that same event name to the Events section of the intent you want to trigger.
Here is an example of what that can look like. I tested a list and clicked on a list item to trigger a test event that ran my test intent.
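As a rough sketch of the payload side (the item titles are taken from the question; the subtitle, event names, and parameters are placeholders), a Dialogflow Messenger list custom payload with events could look like this:
{
  "richContent": [
    [
      {
        "type": "list",
        "title": "HIV Testing Schedule",
        "subtitle": "Tap to see available dates",
        "event": {
          "name": "HIV_TESTING_SCHEDULE",
          "languageCode": "en",
          "parameters": {}
        }
      },
      {
        "type": "divider"
      },
      {
        "type": "list",
        "title": "PreP",
        "event": {
          "name": "PREP_INFO",
          "languageCode": "en",
          "parameters": {}
        }
      }
    ]
  ]
}
Each item's event name must match an event listed in the Events section of the intent you want to run.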
Are you looking for suggestion chips like the one below?
The sample payload that you have shared is from Kommunicate [Disclaimer: I am the founder of Kommunicate] and is specific to the Kommunicate platform's link buttons. It seems what you are looking for is direct buttons/suggestion chips; here is the right Kommunicate doc for this: https://docs.kommunicate.io/docs/message-types#suggested-replies
Since Kommunicate is omnichannel and supports multiple platforms (web, Android, iOS, WhatsApp, LINE, Facebook, etc.), it supports its own rich message payload along with the Dialogflow-specific payload.
For Dialogflow-specific suggestion chips, use:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "These are suggestion chips."
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which type of response would you like to see next?"
            }
          }
        ],
        "suggestions": [
          {
            "title": "Suggestion 1"
          },
          {
            "title": "Suggestion 2"
          },
          {
            "title": "Suggestion 3"
          }
        ],
        "linkOutSuggestion": {
          "destinationName": "Suggestion Link",
          "url": "https://assistant.google.com/"
        }
      }
    }
  }
}
Source: https://developers.google.com/assistant/conversational/df-asdk/rich-responses#df-json-suggestion-chips

Can we trigger the simple response after the list response is displayed in Dialogflow?

Whenever the user invokes my agent, it shows a list of options to select from and also a simple response, but the agent first speaks the simple response and then shows the list.
Actual
user: Ok google, talk to my test app.
bot: Welcome to my test app, here's the list of options to select. (WELCOME MESSAGE)
Please select your preference. (RESPONSE)
<list appears> (LIST)
Expected
user: Ok google, talk to my test app.
bot: Welcome to my test app, here's the list of options to select. (WELCOME MESSAGE)
<list appears> (LIST)
Please select your preference. (RESPONSE)
Is it possible for the assistant to first speak the welcome message, show the list, and then speak the response after a certain delay?
No, showing the bubble after the list is not possible.
When you add a list to your response, the spoken text will always appear before the list. This is mainly because the spoken/chat part of the conversation is separate from the visual part. Even when you add the response after the list in your code, the display of rich responses is controlled by Google.
Example:
// List and Image come from the actions-on-google client library
conv.ask('This is a list example.');
// Create a list
conv.ask(new List({
  title: 'List Title',
  items: {
    'SELECTION_KEY_ONE': {
      synonyms: [
        'synonym 1',
        'synonym 2',
        'synonym 3',
      ],
      title: 'Title of First List Item',
      description: 'This is a description of a list item.',
      image: new Image({
        url: 'https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png',
        alt: 'Image alternate text',
      }),
    },
    'SELECTION_KEY_TWO': {
      synonyms: [
        'synonym 4',
        'synonym 5',
        'synonym 6',
      ],
      title: 'Title of Second List Item',
      description: 'This is a description of a list item.',
      image: new Image({
        url: 'https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png',
        alt: 'Image alternate text',
      }),
    }
  }
}));
conv.ask("Please make your selection");
By the look of your example, it seems you are trying to show the user a couple of options on the screen to control the conversation. Are you sure Suggestion Chips wouldn't be a better fit for this? These chips are intended to give the user options and are far easier to implement than a list.
Delaying the speech, not the bubble
If you don't want to go that way, what you could do is add a delay in the spoken text via SSML, but this would only change the experience for people using your action via voice. It wouldn't change the display location of the speech bubble when using the Google Assistant on your phone. For anyone using your action without a screen, this could cause confusion because the speech is being delayed for a list that is never going to show on their device, since it has no screen.
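As a rough sketch (the pause length is arbitrary), the closing prompt from the example above could be wrapped in SSML like this:
// Delays only the audio; the bubble still renders before the list
conv.ask('<speak>Please make your selection<break time="2s"/></speak>');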
Design in a voice first experience
In general it is good practice to design your conversation around the voice-only part of the experience. By making your conversation dependent on a list, you limit the number of platforms you can deploy your action to. A voice-first approach to this problem could be to create intents for each option your action supports, open your welcome intent with a generic message such as "How can I assist you?", and have a fallback intent in which you assist the user by speaking out the different options they can use. This could be combined with Suggestion Chips to still give the guiding visuals that you desire.
It is a bit more work to implement, but it gives your bot a great deal more flexibility in its conversation and in the number of platforms it can support.
Add a webhook to your action and use the Browsing Carousel JSON for the intent. Add a simpleResponse node after the list items to add a response after the list is displayed. Sample JSON for a Browsing Carousel:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here's an example of a browsing carousel."
            }
          },
          {
            "carouselBrowse": {
              "items": [
                {
                  "title": "Title of item 1",
                  "openUrlAction": {
                    "url": "https://example.com"
                  },
                  "description": "Description of item 1",
                  "footer": "Item 1 footer",
                  "image": {
                    "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                    "accessibilityText": "Image alternate text"
                  }
                },
                {
                  "title": "Title of item 2",
                  "openUrlAction": {
                    "url": "https://example.com"
                  },
                  "description": "Description of item 2",
                  "footer": "Item 2 footer",
                  "image": {
                    "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                    "accessibilityText": "Image alternate text"
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}
Refer to https://developers.google.com/assistant/conversational/rich-responses#df-json-basic-card

List Template for FB using Bot Framework Node.js v4

Can anyone please help me include a list view in the Facebook channel using Bot Framework? I saw examples as shown here: List template. I don't know whether this is the exact way we need to provide the attachments. I also don't know the equivalent of the sourceEvent method in Bot Framework v4. Another useful link is FB Messenger Message Template. See the image given below. I need to put a link on the image, and once the link is clicked it should open another page; the image should also be clickable, as in the C# example Clickable HeroCard images using the tap property. Both functionalities should work. I tried using a HeroCard (but the URL that needs to open up had a CORS origin issue). I tried using an Adaptive Card, but it is not supported in Facebook as of now. So I thought of using the List Template for Facebook. Is there any way to achieve this?
You can send Facebook List Templates through the Microsoft Bot Framework by adding the Facebook attachment to the activity's channel data. The list template type doesn't seem to be supported, but you can set the type to generic and add multiple elements to the attachment to get the same result. See the example below.
await turnContext.sendActivity({
  channelData: {
    "attachment": {
      "type": "template",
      "payload": {
        "template_type": "generic",
        "elements": [
          {
            "title": "Three Strategies for Finding Snow",
            "subtitle": "How do you plan a ski trip to ensure the best conditions? You can think about a resort’s track record, or which have the best snow-making machines. Or you can gamble.",
            "image_url": "https://static01.nyt.com/images/2019/02/10/travel/03update-snowfall2/03update-snowfall2-jumbo.jpg?quality=90&auto=webp",
            "default_action": {
              "type": "web_url",
              "url": "https://www.nytimes.com/2019/02/08/travel/ski-resort-snow-conditions.html",
              "messenger_extensions": false,
              "webview_height_ratio": "tall"
            },
            "buttons": [{
              "type": "element_share"
            }]
          },
          {
            "title": "Viewing the Northern Lights: ‘It’s Almost Like Heavenly Visual Music’",
            "subtitle": "Seeing the aurora borealis has become a must-do item for camera-toting tourists from Alaska to Greenland to Scandinavia. On a trip to northern Sweden, the sight proved elusive, if ultimately rewarding.",
            "image_url": "https://static01.nyt.com/images/2019/02/17/travel/17Northern-Lights1/17Northern-Lights1-superJumbo.jpg?quality=90&auto=webp",
            "default_action": {
              "type": "web_url",
              "url": "https://www.nytimes.com/2019/02/11/travel/northern-lights-tourism-in-sweden.html",
              "messenger_extensions": false,
              "webview_height_ratio": "tall"
            },
            "buttons": [{
              "type": "element_share"
            }]
          },
          {
            "title": "Five Places to Visit in New Orleans",
            "subtitle": "Big Freedia’s rap music is a part of the ether of modern New Orleans. So what better authentic travel guide to the city that so many tourists love to visit?",
            "image_url": "https://static01.nyt.com/images/2019/02/17/travel/17NewOrleans-5Places6/17NewOrleans-5Places6-jumbo.jpg?quality=90&auto=webp",
            "default_action": {
              "type": "web_url",
              "url": "https://www.nytimes.com/2019/02/12/travel/big-freedia-five-places-to-eat-and-visit-in-new-orleans.html",
              "messenger_extensions": false,
              "webview_height_ratio": "tall"
            },
            "buttons": [{
              "type": "element_share"
            }]
          }
        ]
      }
    }
  }
});
Hope this helps!

Issue handling inputs from previous buttons generated by prompts in Slack in Microsoft Bot Builder Node.js

When there are multiple sets of buttons in the same chat history, the user may click a button from a previous message, so I am not able to identify which dialog/message the input came from.
Example:
As the chatbot is being implemented for multiple channels, I am avoiding Slack's interactive messages; my aim is to handle this in the Bot Framework itself.
I tried getting information from the session object as well as event_source, but couldn't arrive at a concrete solution.
Use a unique ID in callback_id in your button attachment to distinguish between different sets of buttons, e.g. between prompt #1 and prompt #2. The callback_id will be included in the request that Slack sends to your app once a button is pressed.
Together with the general context information of a request (Slack team ID, channel ID, user ID), your app should be able to react correctly.
Example of a button definition (from the official documentation):
{
  "text": "Would you like to play a game?",
  "attachments": [
    {
      "text": "Choose a game to play",
      "fallback": "You are unable to choose a game",
      "callback_id": "wopr_game",
      "color": "#3AA3E3",
      "attachment_type": "default",
      "actions": [
        {
          "name": "game",
          "text": "Chess",
          "type": "button",
          "value": "chess"
        },
        {
          "name": "game",
          "text": "Falken's Maze",
          "type": "button",
          "value": "maze"
        },
        {
          "name": "game",
          "text": "Thermonuclear War",
          "style": "danger",
          "type": "button",
          "value": "war",
          "confirm": {
            "title": "Are you sure?",
            "text": "Wouldn't you prefer a good game of chess?",
            "ok_text": "Yes",
            "dismiss_text": "No"
          }
        }
      ]
    }
  ]
}
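As a rough sketch of the receiving side (the route and the prompt IDs are placeholders, and this assumes a plain Express endpoint handling Slack's interactive-message callback rather than going through the Bot Framework connector), reading the callback_id could look like this:
const express = require('express');
const app = express();
// Slack posts interactive-message callbacks as form-encoded data with a
// JSON string in the "payload" field
app.use(express.urlencoded({ extended: true }));

app.post('/slack/actions', (req, res) => {
  const payload = JSON.parse(req.body.payload);

  switch (payload.callback_id) {
    case 'prompt_1':   // hypothetical ID for the first prompt
      // handle the first set of buttons
      break;
    case 'prompt_2':   // hypothetical ID for the second prompt
      // handle the second set of buttons
      break;
  }
  res.sendStatus(200);
});

app.listen(3000);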

I'm sending an Api.ai carousel message to Smooch.io but it ends up being rendered as text

I have explored smooch.io. The format for sending rich messages to smooch.io is as follows:
{
  "role": "appMaker",
  "type": "carousel",
  "items": [{
    "title": "Tacos",
    "description": "Description",
    "mediaUrl": "http://example.org/image.jpg",
    "actions": [{
      "text": "Select",
      "type": "postback",
      "payload": "TACOS"
    }, {
      "text": "More info",
      "type": "link",
      "uri": "http://example.org"
    }]
  }, {
    "title": "Ramen",
    "description": "Description",
    "mediaUrl": "http://example.org/image.jpg",
    "actions": [{
      "text": "Select",
      "type": "postback",
      "payload": "RAMEN"
    }, {
      "text": "More info",
      "type": "link",
      "uri": "http://example.org"
    }]
  }]
}
But when I send this JSON response through Api.ai to smooch.io, it errors out, though it displays a simple text message without any problem.
How can I send this JSON message as an object to Smooch? Is there any way to send it like the Facebook object?
All I want is to send a carousel to the user.
The Smooch API defines its own carousel JSON structure:
http://docs.smooch.io/rest/#carousel-message
The advantage of this is that Smooch can adapt this generic carousel format to any channel that supports rendering it (Facebook Messenger, LINE, and Telegram, for example).
Update:
(Disclaimer: I work on Smooch)
What you're getting is a text-only fallback rendering of your carousel. This is what Smooch sends for channels that do not yet support it.
Carousels do not currently render fully in the Smooch Web Messenger, though it is in our backlog. The updated list of supported carousel channels can be found in the Channel Support section here: http://docs.smooch.io/rest/#carousel-message
For cards/carousels we had to map the Api.ai JSON to the Smooch JSON called by the Smooch webhook.
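Roughly, the mapping we used looked like the sketch below (the field names on the Api.ai side are assumptions for illustration; the Smooch side follows the carousel format shown in the question):
// Convert items coming from the Api.ai fulfillment data into a
// Smooch carousel message (see docs.smooch.io/rest/#carousel-message)
function toSmoochCarousel(apiAiItems) {
  return {
    role: 'appMaker',
    type: 'carousel',
    items: apiAiItems.map(item => ({
      title: item.title,               // assumed field names on the Api.ai side
      description: item.description,
      mediaUrl: item.imageUrl,
      actions: [
        { text: 'Select', type: 'postback', payload: item.key },
        { text: 'More info', type: 'link', uri: item.url }
      ]
    }))
  };
}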
