How to trigger an action from Chip Suggestions in Dialogflow? - python-3.x

I want to create a ChatBot where the user (mostly) selects from Chip Suggestions.
I can't understand how to construct the Chip Suggestions in Flask.
The following yields null:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    two_chips = jsonify(
        fulfillment_text="This message is from Dialogflow's testing!",
        fulfillment_messages=[
            {
                "payload": {
                    "richContent": [
                        [
                            {
                                "type": "chips",
                                "options": [
                                    {
                                        "text": "HIV Testing Schedule",
                                        "link": "https://example.com"  # Links work, but I don't want links
                                    },
                                    {
                                        "link": "https://example.com",
                                        "text": "PreP"
                                    }
                                ]
                            }
                        ]
                    ]
                }
            }
        ])
    return two_chips
Ideally, clicking a chip would trigger a new action/intent and the bot would respond with more specific text. In other words, what should I replace the link field with?
This link suggests that there is a replyMetadata field, but that seems to be specific to Kommunicate, not Google?
I looked at flask-dialogflow, but the documentation is too sparse and conflicting for me.

Chips, which require a link, should be replaced by a list. List items are clickable and trigger an intent via events (to make the bot respond with more specific text).
To get started, update your code to use a list and add the event name you'd like to trigger. Then add that same event name to the Events section of the intent you want to trigger.
Here is an example of what that can look like. I tested a list and clicked on a list item, which triggered a test event that ran my test intent:
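As a rough sketch of that idea applied to the Flask webhook from the question (this is an illustration, not the original screenshot; the event names HIV_TESTING_SCHEDULE and PREP_INFO are placeholders that you would also add to the Events section of the intents you want triggered):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # Dialogflow Messenger "list" items carry an "event" instead of a "link",
    # so clicking an item fires that event and invokes the intent that lists it.
    return jsonify(fulfillment_messages=[{
        "payload": {
            "richContent": [[
                {
                    "type": "list",
                    "title": "HIV Testing Schedule",
                    "event": {
                        "name": "HIV_TESTING_SCHEDULE",  # placeholder event name
                        "languageCode": "en",
                        "parameters": {}
                    }
                },
                {"type": "divider"},
                {
                    "type": "list",
                    "title": "PreP",
                    "event": {
                        "name": "PREP_INFO",  # placeholder event name
                        "languageCode": "en",
                        "parameters": {}
                    }
                }
            ]]
        }
    }])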

Are you looking for suggestion chips like the one below?
The sample payload that you have shared is from Kommunicate [Disclaimer: I am a founder of Kommunicate] and it is specific to the Kommunicate platform for link buttons. It seems like what you are looking for is direct buttons/suggestion chips; here is the relevant Kommunicate doc for this: https://docs.kommunicate.io/docs/message-types#suggested-replies
Because Kommunicate is omnichannel and supports multiple platforms (web, Android, iOS, WhatsApp, LINE, Facebook, etc.), it supports its own rich message payload along with the Dialogflow-specific payload.
For Dialogflow-specific suggestion chips, use:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "These are suggestion chips."
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which type of response would you like to see next?"
            }
          }
        ],
        "suggestions": [
          {
            "title": "Suggestion 1"
          },
          {
            "title": "Suggestion 2"
          },
          {
            "title": "Suggestion 3"
          }
        ],
        "linkOutSuggestion": {
          "destinationName": "Suggestion Link",
          "url": "https://assistant.google.com/"
        }
      }
    }
  }
}
Source: https://developers.google.com/assistant/conversational/df-asdk/rich-responses#df-json-suggestion-chips
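If you want to return that Google-specific payload from the Flask webhook in the question, a minimal sketch might look like this (the chip titles are just examples; the "google" payload goes in the top-level payload field of the webhook response):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # Google Assistant reads the "google" object from the webhook response's
    # top-level "payload" field.
    return jsonify(payload={
        "google": {
            "expectUserResponse": True,
            "richResponse": {
                "items": [
                    {"simpleResponse": {"textToSpeech": "Which topic would you like?"}}
                ],
                "suggestions": [
                    {"title": "HIV Testing Schedule"},
                    {"title": "PreP"}
                ]
            }
        }
    })

Note that on Google Assistant, tapping a suggestion chip sends its title back as the user's next utterance, so the intent you want triggered needs that title among its training phrases.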

Related

Having trouble with Google Assistant repeating previous message in Dialogflow

I'm working on a very simple Dialogflow agent with about 15-20 intents. All of these intents use a text response except one. The only intent that does not use a text response is called 'repeat'. The intent (repeat) should be able to repeat whatever was previously said by Google Assistant.
I've tried to set this up using Multivocal but have not been successful. When I type a command into the test simulator I'll get the initial response, but when I follow up with 'repeat' the default response of 'Not available' is returned. The webhook times out when I look at the Diagnostic Info. My sense is that I've configured something wrong because I've read these answers and not been able to solve my problem:
How to repeat last response of bot in dialogflow
Dialogflow - Repeat last sentence (voice) for Social Robot Elderly
Use multivocal libary to configure repeat intent in Dialogflow for VUI
I'm using the inline editor within Dialogflow; my index.js looks like:
const Multivocal = require('multivocal');

const conf = {
  Local: {
    en: {
      Response: {
        "Action.multivocal.repeat": "Let me try again",
      }
    }
  }
};

new Multivocal.Config.Simple( conf );

exports.webhook = Multivocal.processFirebaseWebhook;
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
And my package.json includes the Multivocal dependency:
"multivocal": "^0.15.0"
My understanding based on the above SO questions is that these config values would be enough and I don't need to do any coding, but I'm clearly screwing something (many things?) up. How can I get the prior response in Google Assistant to repeat when a user says 'repeat' or something similar? Multivocal seems like a simple solution, if I can do it that way.
Additional logs:
Fulfillment request (removed project id information):
{
  "responseId": "--",
  "queryResult": {
    "queryText": "repeat",
    "action": "multivocal.repeat",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "outputContexts": [
      {
        "name": "project info",
        "parameters": {
          "no-input": 0,
          "no-match": 0
        }
      }
    ],
    "intent": {
      "name": "project info",
      "displayName": "repeat"
    },
    "intentDetectionConfidence": 1,
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {
    "payload": {}
  },
  "session": "project info"
}
Raw API response (removed project and response id)
{
  "responseId": "",
  "queryResult": {
    "queryText": "repeat",
    "action": "multivocal.repeat",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "intent": {
      "name": "projects info",
      "displayName": "repeat"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {
      "webhook_latency_ms": 4992
    },
    "languageCode": "en"
  },
  "webhookStatus": {
    "code": 4,
    "message": "Webhook call failed. Error: DEADLINE_EXCEEDED."
  }
}
Here is my simple intent, added based on the recommendation that, for repeat to work on an intent, it must use fulfillment rather than a text response defined in Dialogflow.
Here is my index.js file in the inline editor, with the suggested text responses added to the config:
const conf = {
  Local: {
    en: {
      Response: {
        "Intent.help": [
          "I'm sorry, I'm not able to help you.",
          "You, John, Paul, George, and Ringo ey?"
        ],
        "Action.multivocal.repeat": "Let me try again"
      }
    }
  }
};
These lines at the end of my index.js seem odd to me, but may be unrelated:
exports.webhook = Multivocal.processFirebaseWebhook;
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
It sounds like you're triggering the Fallback Intent. You also need an Intent defined in Dialogflow that has an Action set to "multivocal.repeat". That might look something like this:
In the dialogflow directory of the npm package (or on GitHub) you'll find a zip file with this and several other "standard" Intents that you can use with multivocal.
Additionally, all the other Intents that you want to be repeated must use fulfillment to send the response (the library doesn't know what might be sent unless it can send it itself). The simplest way to do this is to enable fulfillment on each, and move the text responses from their Dialogflow screens into the configuration under an entry such as "Intent.name" (replacing "name" with the name of the Intent) or "Action.name" if you set an action name for them.
So your configuration might be something like
const conf = {
  Local: {
    en: {
      Response: {
        "Intent.welcome": [
          "Hello there!",
          "Welcome to my Action!"
        ],
        "Action.multivocal.repeat": [
          "Let me try again"
        ]
      }
    }
  }
};

Unable to get the rich buttons to work on dialogflow. It returns Empty Response

So I have been trying to start off with creating chatbots on Dialogflow. The issue I am running into is that my chatbot is meant to give users a bunch of options to select from and take the conversation further from there. In order to implement that, I've used suggestion chips; I've included the JSON that I am using. However, when testing, the bot detects the right intent but returns an empty response. I've included the code in case that helps.
{
  "richContent": [
    [
      {
        "options": [
          {
            "text": "Chip 1",
            "image": {
              "src": {
                "rawUrl": "https://example.com/images/logo.png"
              }
            },
            "link": "https://example.com"
          },
          {
            "link": "https://example.com",
            "text": "Chip 2",
            "image": {
              "src": {
                "rawUrl": "https://example.com/images/logo.png"
              }
            }
          }
        ],
        "type": "chips"
      }
    ]
  ]
}
If you are testing it in the Dialogflow console, you won't get any response, because of the way the console works. Use Dialogflow Messenger to test it instead; in actual production you won't be using the simulator anyway. The Dialogflow console is a simulator with some capability restrictions, but the payload will work when you use the Messenger.

I would like to add one more conversation to actions.json

I have an application within Watson Assistant that consumes many services from other endpoints, and I would like to call this conversation (from Watson) within a Google Assistant conversation for a certain intent. For example, I will develop a rich conversation on Google Assistant, and in one of the options I will call Watson's conversation.
I tried the following, but it didn't work. Does anyone know an example that can help me?
{
  "locale": "pt-BR",
  "actions": [
    {
      "description": "Launch intent",
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "mainConversation"
      },
      "intent": {
        "name": "actions.intent.MAIN"
      }
    },
    {
      "description": "Direct access",
      "name": "BUY",
      "fulfillment": {
        "conversationName": "ExampleAction"
      },
      "intent": {
        "name": "com.example.ExampleAction.BUY",
        "trigger": {
          "queryPatterns": [
            "teste",
            "azul",
            "start"
          ]
        }
      }
    }
  ],
  "conversations": {
    "mainConversation": {
      "name": "mainConversation",
      "url": "https://us-central1-ericanovo-798cc.cloudfunctions.net/webhook",
      "fulfillmentApiVersion": 2
    },
    "BUY": {
      "name": "ExampleAction",
      "url": "https://orquestrador-sulamerica-teste.mybluemix.net/api/v1/chat/google?externaltoken=574213c0-e904-11e9-9970-ff484aa25334",
      "fulfillmentApiVersion": 2
    }
  }
}
thanks
That won't work because the webhook for everything published under the same project has to be the same URL. You are expected to handle all the Intents and "actions" at that webhook.
In your case, you would also need to make sure the request is formatted the way the Watson API would be expecting it. The Assistant will send it using the Conversation Webhook Format, and it sounds like you would send it using Watson's Analyze Text API.
You're not showing any of your code, so it is difficult to be sure, but the first would be in a JSON format that you can extract. You can then use a library in Node (such as request-promise) to make the calls to Watson. Based on the result from Watson, you'd need to format the results as a response and return it to the Assistant.
It isn't clear why you'd need multiple webhooks specifically, although it is certainly possible that some Intents may make different API calls than others.
Keep in mind that your custom Intents will only be valid on invocation. Subsequent Intents will all be TEXT Intents.
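The answer above describes the flow in Node; purely as an illustration of the same idea, here is a minimal Python sketch (the Watson URL and its "text"/"reply" fields are placeholders, adapt them to the actual Watson API you call):

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
WATSON_URL = "https://example.com/api/v1/chat"  # placeholder Watson endpoint

@app.route('/webhook', methods=['POST'])
def webhook():
    body = request.get_json()
    # Conversation webhook request: the raw user utterance is in inputs[].rawInputs[].query
    query = body["inputs"][0]["rawInputs"][0]["query"]

    # Forward the utterance to Watson (hypothetical request/response shape).
    watson = requests.post(WATSON_URL, json={"text": query}).json()
    reply = watson.get("reply", "Sorry, I didn't get that.")

    # Wrap Watson's answer in a conversation webhook response.
    return jsonify({
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {
                "richInitialPrompt": {
                    "items": [{"simpleResponse": {"textToSpeech": reply}}]
                }
            },
            "possibleIntents": [{"intent": "actions.intent.TEXT"}]
        }]
    })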

Custom Payload with dialogflow on google plateform

I am trying to send a custom payload in a Dialogflow intent. When I select the custom payload option available under Google Assistant, it gives the following predefined JSON format:
{
"google": {
}
}
Now I am not sure what I need to put in there in order to get a response. Any guide will be helpful.
There are some compulsory keys to be added in the rich response JSON.
You must have suggestion chips and a simple response to maintain the follow-up of your Action; AoG rejects any Action with missing suggestion chips or a missing follow-up response.
Refer to this JSON for Basic Card Response:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here's an example of a basic card."
            }
          },
          {
            "basicCard": {
              "title": "Title: this is a title",
              "subtitle": "This is a subtitle",
              "formattedText": "This is a basic card. Text in a basic card can include \"quotes\" and\n most other unicode characters including emojis. Basic cards also support\n some markdown formatting like *emphasis* or _italics_, **strong** or\n __bold__, and ***bold itallic*** or ___strong emphasis___ as well as other\n things like line \nbreaks",
              "image": {
                "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                "accessibilityText": "Image alternate text"
              },
              "buttons": [
                {
                  "title": "This is a button",
                  "openUrlAction": {
                    "url": "https://assistant.google.com/"
                  }
                }
              ],
              "imageDisplayOptions": "CROPPED"
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which response would you like to see next?"
            }
          }
        ]
      }
    }
  }
}
You can refer to the specific Rich Response JSON for your Action in the following Documentation:
https://developers.google.com/assistant/conversational/rich-responses#df-json-basic-card

Insert a card that can be directly responded to

Update (important): the API has changed a lot; this question shouldn't be taken into consideration anymore.
I am trying to use the REST API (via the Node.js client) to create cards that the user can respond to, and create an interaction that way.
Reading the docs, the creator attribute is not really specified anywhere, so I have no idea how to insert it.
Also this video doesn't help. Nor this guide =)
I believe there is a URL I should set as a callback somehow? I'd like to know how to get these responses, please.
update
This is the card I am sending.
{
  bundleId: 'veryuniqueBundle',
  id: 'veryuniqueBundle:reply',
  text: "want to hear moar?",
  menuItems: [
    {action: "REPLY"}
  ]
}
that's the response I get:
{
  "collection": "timeline",
  "itemId": "119c4dc8-c0ce-4a83-aa76-41aab4e8dbe1",
  "operation": "INSERT",
  "verifyToken": "42",
  "userToken": "id:520ef63cde31145deb000001",
  "userActions": [
    {
      "type": "REPLY"
    }
  ]
}
The problem is, I can't see what the user responded (the text) or the reference to the original card ID (or bundle) that was responded to. How can I get those?
Cards do not provide a direct callback. Instead, when a user selects a menu item it causes the card to be updated with their menu selection. This change subsequently triggers a notification ping to your timeline subscription.
Follow these steps to detect a menu item selection:
Subscribe to notifications for changes in the timeline collection
{
  "collection": "timeline",
  "userToken": "awesome_kitty",
  "verifyToken": "random_hash_to_verify_referer"
}
Insert a timeline card with a custom menu item
{
  "text": "Hello world",
  "menuItems": [
    {
      "action": "CUSTOM",
      "id": "complete",
      "values": [{
        "displayName": "Complete",
        "iconUrl": "http://example.com/icons/complete.png"
      }]
    }
  ]
}
Select the item on Glass
Receive the notification on your subscription URL
{
  "collection": "timeline",
  "itemId": "3hidvm0xez6r8_dacdb3103b8b604_h8rpllg",
  "operation": "UPDATE",
  "userToken": "harold_penguin",
  "userActions": [
    {
      "type": "CUSTOM",
      "payload": "PING"
    }
  ]
}
Do cool stuff in your code
???
Profit
