from pydialogflow_fulfillment import DialogflowResponse
from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def user():
    request_ = request.get_json(force=True)
    print(request_)
    qr = request_['queryResult']
    queryText = qr['queryText']
    querys = str(queryText)
    print("querys")
    print(querys)
    if querys == "GOOGLE_ASSISTANT_WELCOME":
        return {
            "payload": {
                "google": {
                    "expectUserResponse": True,
                    "richResponse": {
                        "items": [
                            {
                                "simpleResponse": {
                                    "textToSpeech": "Choose a item"
                                }
                            }
                        ]
                    },
                    "systemIntent": {
                        "intent": "actions.intent.OPTION",
                        "data": {
                            "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
                            "listSelect": {
                                "title": "Hello",
                                "items": [
                                    {
                                        "optionInfo": {
                                            "key": "first title key"
                                        },
                                        "description": "first description",
                                        "image": {
                                            "url": "https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png",
                                            "accessibilityText": "first alt"
                                        },
                                        "title": "first title"
                                    },
                                    {
                                        "optionInfo": {
                                            "key": "second"
                                        },
                                        "description": "second description",
                                        "image": {
                                            "url": "https://lh3.googleusercontent.com/Nu3a6F80WfixUqf_ec_vgXy_c0-0r4VLJRXjVFF_X_CIilEu8B9fT35qyTEj_PEsKw",
                                            "accessibilityText": "second alt"
                                        },
                                        "title": "second title"
                                    }
                                ]
                            }
                        }
                    }
                }
            }
        }
    elif querys == "actions_intent_OPTION":
        # get json data from the user request
        qr = request_['queryResult']
        queryText = qr['queryText']
        querys = str(queryText)
        print("querys")
        print(querys)
        dialogflow_response = DialogflowResponse("you r selecting " + querys)
        print("Response:\n" + dialogflow_response.get_final_response())
        people = dialogflow_response.get_final_response()
        print(people)
        return people

if __name__ == '__main__':
    app.run()
OUTPUT:
querys
actions_intent_OPTION
Response:
{"fulfillmentText": "actions_intent_OPTION", "fulfillmentMessages": [], "source": "webhook", "outputContexts": [], "payload": {"google": {}}}
The code above is what I have. I have a case where the user can go back mid-conversation in Dialogflow. When I tried this in Google Assistant, pressing back and selecting another option triggers a Google search instead of performing the required action. I am working with webhook fulfillment.
Example scenario (the same happens to me):
Google Assistant: shows the list view "Choose a item"
Me: I select first title
Google Assistant: it shows "you r selecting first title" or "actions_intent_OPTION"
Me: then I press the back button
Google Assistant: displays the same list view ("Choose a item")
Me: then I choose second title
Google Assistant: it shows "can I say that again?".
I need help with staying within the context even after a back press. Expected behavior:
Me: I select first title
Google Assistant: it shows "you r selecting first title" or "actions_intent_OPTION"
Me: when I choose second title
Google Assistant: you r selecting second title.
or
Is there a way to match any intent in the middle of the conversation?
Like, if I was in the middle of a conversation, say after 4-5 questions with Google, and I ask the second question again, it replies "I missed that, say that again?"; instead I need the second intent to work. (I have used follow-up intents in this case.)
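One possible direction (a sketch, not from the original post): keep the option-handling context alive by re-asserting it in `outputContexts` of each webhook response, so a selection made after a back press still routes to the right intent. The session path and context name below are placeholders.

```python
# Sketch (assumption, not the poster's code): re-assert a context with a
# fresh lifespan in every webhook response so follow-up selections are
# still matched against the option-handling intent after a back press.

def build_response(text, session="projects/my-project/agent/sessions/123"):
    """Build a Dialogflow v2 webhook response body that re-asserts a context."""
    return {
        "fulfillmentText": text,
        "outputContexts": [
            {
                # Placeholder context name; use the context your intent expects.
                "name": session + "/contexts/actions_intent_option",
                "lifespanCount": 5,  # keep the context alive for 5 more turns
            }
        ],
    }

response = build_response("you r selecting first title")
```

A Flask handler could then `return jsonify(build_response(...))` from each branch.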
Related
I want to create a ChatBot where the user (mostly) selects from Chip Suggestions.
I can't understand how to construct the Chip Suggestions in Flask.
The following yields null:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    two_chips = jsonify(fulfillment_text="This message is from Dialogflow's testing!",
                        fulfillment_messages=[
                            {
                                "payload": {
                                    "richContent": [
                                        [
                                            {
                                                "type": "chips",
                                                "options": [
                                                    {
                                                        "text": "HIV Testing Schedule",
                                                        "link": "https://example.com"  # Links work, but I don't want links
                                                    },
                                                    {
                                                        "link": "https://example.com",
                                                        "text": "PreP"
                                                    }
                                                ]
                                            }
                                        ]
                                    ]
                                }
                            }
                        ])
    return two_chips
Ideally, clicking a button would trigger a new action/intent and the bot would respond with more specific text. I.e., what should I replace the link field with?
This link suggests that there is a replyMetadata field, but that seems to be specific to Kommunicate, not Google.
I looked at flask-dialogflow, but the documentation is too sparse and conflicting for me.
Those chips, which require a link, should be replaced by a list. List items are clickable and trigger an intent via events (to make the bot respond with more specific text).
To get started, update your code to use a list, then add the event name you'd like to trigger in your code. Then add that same event name to the Events section of the intent you want to trigger.
Here is an example of what that can look like. I tested a list and clicked on a list item to trigger a test event that ran my test intent:
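The event-trigger step described above can be sketched like this (an illustration, not the answerer's actual code; the event name `SELECTED_ITEM` is a placeholder and must match the name in the target intent's Events section):

```python
# Sketch: when the webhook receives a list selection, reply with a
# followupEventInput so Dialogflow triggers the intent whose Events
# section contains the same event name. "SELECTED_ITEM" is a placeholder.

def trigger_event_response(event_name, params=None):
    """Dialogflow v2 webhook response that triggers another intent by event."""
    return {
        "followupEventInput": {
            "name": event_name,
            "languageCode": "en-US",
            # Parameters are passed through to the triggered intent.
            "parameters": params or {},
        }
    }

resp = trigger_event_response("SELECTED_ITEM", {"key": "itemOne"})
```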
Are you looking for suggestion chips like the one below?
The sample payload that you have shared is from Kommunicate [Disclaimer: I am the founder of Kommunicate] and it is specific to the Kommunicate platform's link buttons. It seems like what you are looking for is direct buttons/suggestion chips; here is the right doc from Kommunicate for this: https://docs.kommunicate.io/docs/message-types#suggested-replies
Since Kommunicate is omnichannel and supports multiple platforms (web, Android, iOS, WhatsApp, LINE, Facebook, etc.), it supports its own rich message payload along with the Dialogflow-specific payload.
For Dialogflow specific suggestion chips, use:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "These are suggestion chips."
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which type of response would you like to see next?"
            }
          }
        ],
        "suggestions": [
          {
            "title": "Suggestion 1"
          },
          {
            "title": "Suggestion 2"
          },
          {
            "title": "Suggestion 3"
          }
        ],
        "linkOutSuggestion": {
          "destinationName": "Suggestion Link",
          "url": "https://assistant.google.com/"
        }
      }
    }
  }
}
Source: https://developers.google.com/assistant/conversational/df-asdk/rich-responses#df-json-suggestion-chips
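For completeness, a webhook can build that response body programmatically before returning it; here is a minimal sketch (the helper name and sample titles are illustrative, not from the answer above):

```python
# Sketch: build the Dialogflow v2 response body carrying the Google
# Assistant suggestion-chips payload shown above, so a Flask handler
# can simply `return jsonify(body)`.

def suggestion_chips_body(prompt, titles):
    """Assemble a webhook response with a simple response plus chips."""
    return {
        "payload": {
            "google": {
                "expectUserResponse": True,
                "richResponse": {
                    "items": [{"simpleResponse": {"textToSpeech": prompt}}],
                    # One chip per title, in order.
                    "suggestions": [{"title": t} for t in titles],
                },
            }
        }
    }

body = suggestion_chips_body(
    "Which type of response would you like to see next?",
    ["Suggestion 1", "Suggestion 2", "Suggestion 3"],
)
```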
I'm working on a very simple Dialogflow with about 15-20 intents. All of these intents use a text response except one. The only intent that does not use a text response is called 'repeat'. The intent (repeat) should be able to repeat whatever was previously said by Google Assistant.
I've tried to set this up using Multivocal but have not been successful. When I type a command into the test simulator I'll get the initial response, but when I follow up with 'repeat' the default response of 'Not available' is returned. The webhook times out when I look at the Diagnostic Info. My sense is that I've configured something wrong because I've read these answers and not been able to solve my problem:
How to repeat last response of bot in dialogflow
Dialogflow - Repeat last sentence (voice) for Social Robot Elderly
Use multivocal libary to configure repeat intent in Dialogflow for VUI
I'm using the inline editor within Dialogflow; my index.js looks like:
const Multivocal = require('multivocal');

const conf = {
  Local: {
    en: {
      Response: {
        "Action.multivocal.repeat": "Let me try again",
      }
    }
  }
};

new Multivocal.Config.Simple( conf );

exports.webhook = Multivocal.processFirebaseWebhook;
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
And my package.json includes the Multivocal dependency:
"multivocal": "^0.15.0"
My understanding based on the above SO questions is that these config values would be enough and I don't need to do any coding, but I'm clearly screwing something (many things?) up. How can I get the prior response in Google Assistant to repeat when a user says 'repeat' or something similar? Multivocal seems like a simple solution, if I can do it that way.
Additional logs:
Fulfillment request (removed project id information):
{
  "responseId": "--",
  "queryResult": {
    "queryText": "repeat",
    "action": "multivocal.repeat",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "outputContexts": [
      {
        "name": "project info",
        "parameters": {
          "no-input": 0,
          "no-match": 0
        }
      }
    ],
    "intent": {
      "name": "project info",
      "displayName": "repeat"
    },
    "intentDetectionConfidence": 1,
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {
    "payload": {}
  },
  "session": "project info"
}
Raw API response (removed project and response id)
{
  "responseId": "",
  "queryResult": {
    "queryText": "repeat",
    "action": "multivocal.repeat",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "intent": {
      "name": "projects info",
      "displayName": "repeat"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {
      "webhook_latency_ms": 4992
    },
    "languageCode": "en"
  },
  "webhookStatus": {
    "code": 4,
    "message": "Webhook call failed. Error: DEADLINE_EXCEEDED."
  }
}
Here is my simple intent, added based on the recommendation that, for repeat to work, an intent must use fulfillment rather than a plain text response in Dialogflow.
Here is my index.js file using the inline editor, with the suggested text responses added to the config:
const conf = {
  Local: {
    en: {
      Response: {
        "Intent.help": [
          "I'm sorry, I'm not able to help you.",
          "You, John, Paul, George, and Ringo ey?"
        ],
        "Action.multivocal.repeat": "Let me try again"
      }
    }
  }
};
This line at the end of my index.js seems odd to me, but may be unrelated:
exports.webhook = Multivocal.processFirebaseWebhook;
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
It sounds like you're triggering the Fallback Intent. You also need an Intent defined in Dialogflow that has an Action set to "multivocal.repeat". That might look something like this:
In the dialogflow directory of the npm package (or on GitHub) you'll find a zip file with this and several other "standard" Intents that you can use with Multivocal.
Additionally, all the other Intents that you want to be repeated must use fulfillment to send the response (the library doesn't know what might be sent unless it can send it itself). The simplest way to do this is to enable fulfillment on each, and move the text responses from their Dialogflow screens into the configuration under an entry such as "Intent.name" (replacing "name" with the name of the Intent) or "Action.name" if you set an action name for them.
So your configuration might be something like
const conf = {
  Local: {
    en: {
      Response: {
        "Intent.welcome": [
          "Hello there!",
          "Welcome to my Action!"
        ],
        "Action.multivocal.repeat": [
          "Let me try again"
        ]
      }
    }
  }
};
I am trying to send a custom payload in a Dialogflow intent. When I select the custom payload option available under Google Assistant, it gives the following predefined JSON format:
{
"google": {
}
}
Now I am not aware of what I need to put in there in order to get a response. Any guide will be helpful.
There are some compulsory keys to be added in the Rich Response JSON.
You must have suggestion chips and a simple response to maintain the follow-up of your Action. AoG rejects any Action with missing suggestion chips or a missing follow-up response.
Refer to this JSON for Basic Card Response:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here's an example of a basic card."
            }
          },
          {
            "basicCard": {
              "title": "Title: this is a title",
              "subtitle": "This is a subtitle",
              "formattedText": "This is a basic card. Text in a basic card can include \"quotes\" and\n most other unicode characters including emojis. Basic cards also support\n some markdown formatting like *emphasis* or _italics_, **strong** or\n __bold__, and ***bold itallic*** or ___strong emphasis___ as well as other\n things like line \nbreaks",
              "image": {
                "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                "accessibilityText": "Image alternate text"
              },
              "buttons": [
                {
                  "title": "This is a button",
                  "openUrlAction": {
                    "url": "https://assistant.google.com/"
                  }
                }
              ],
              "imageDisplayOptions": "CROPPED"
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which response would you like to see next?"
            }
          }
        ]
      }
    }
  }
}
You can refer to the specific Rich Response JSON for your Action in the following Documentation:
https://developers.google.com/assistant/conversational/rich-responses#df-json-basic-card
I am trying to build a chatbot with Dialogflow which is able to recommend books to users. But I really can't find how to build the responses in a Python file. I mean, I want that if the intent is "search-book", then it sends a few books depending on the gender the user said. Currently, my Python file is:
# -*- coding:utf-8 -*-
# !/usr/bin/env python
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function

import os
import sys
import json
import yaml

try:
    import apiai
except ImportError:
    sys.path.append(
        os.path.join(
            os.path.dirname(os.path.realpath(__file__)),
            os.pardir,
            os.pardir
        )
    )
    import apiai

CLIENT_ACCESS_TOKEN = '197ef97149d449a6962ba5bd5e488607'


def yaml_loader(filepath):
    """Loads a yaml file"""
    with open(filepath, 'r') as file:
        data = yaml.load(file, Loader=yaml.SafeLoader)
    return data


def yaml_dump(filepath, data):
    """Dumps data to a yaml file"""
    with open(filepath, "w") as file:
        yaml.dump(data, file)


def main():
    ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
    filepath = "proxy.yaml"
    data = yaml_loader(filepath)
    proxy = data.get('proxy')
    for proxy_protocol, proxy_host in proxy.items():
        os.environ[proxy_protocol] = proxy_host
    while True:
        print(u"> ", end=u"")
        user_message = input()
        if user_message == u"exit":
            break
        request = ai.text_request()
        request.query = user_message
        response = json.loads(request.getresponse().read())
        result = response['result']
        action = result.get('action')
        actionIncomplete = result.get('actionIncomplete', False)
        print(u"< %s" % result['fulfillment']['speech'])
        if action == "search-book":
            parameters = result['parameters']
            text = parameters.get('text')
            Gender = parameters.get('Gender')
            print(
                'text: %s, Gender: %s' %
                (
                    text if text else "null",
                    Gender if Gender else "null",
                )
            )


if __name__ == '__main__':
    main()
For Google books API I found this, and it is working:
https://github.com/hoffmann/googlebooks
I already have created an Entity called "gender" and an intent named "search-book"
What you have to do is implement a webhook (a web service) for your intent.
Set the URL to your webhook here.
Then go to your intent and enable the webhook for the intent.
So when someone queries your intent, your webhook will get a POST request with the below JSON body:
{
  "responseId": "ea3d77e8-ae27-41a4-9e1d-174bd461b68c",
  "session": "projects/your-agents-project-id/agent/sessions/88d13aa8-2999-4f71-b233-39cbf3a824a0",
  "queryResult": {
    "queryText": "user's original query to your agent",
    "parameters": {
      "param": "param value"
    },
    "allRequiredParamsPresent": true,
    "fulfillmentText": "Text defined in Dialogflow's console for the intent that was matched",
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            "Text defined in Dialogflow's console for the intent that was matched"
          ]
        }
      }
    ],
    "outputContexts": [
      {
        "name": "projects/your-agents-project-id/agent/sessions/88d13aa8-2999-4f71-b233-39cbf3a824a0/contexts/generic",
        "lifespanCount": 5,
        "parameters": {
          "param": "param value"
        }
      }
    ],
    "intent": {
      "name": "projects/your-agents-project-id/agent/intents/29bcd7f8-f717-4261-a8fd-2d3e451b8af8",
      "displayName": "Matched Intent Name"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {}
}
You can get the intent name:
body.queryResult.intent.displayName
You can also get the parameters:
body.queryResult.parameters
Since you now have the parameters you need, you can call your Google Books API and send the result back to Dialogflow.
The response JSON should be something like this:
{
  "fulfillmentText": "This is a text response",
  "fulfillmentMessages": [
    {
      "card": {
        "title": "card title",
        "subtitle": "card text",
        "imageUri": "https://assistant.google.com/static/images/molecule/Molecule-Formation-stop.png",
        "buttons": [
          {
            "text": "button text",
            "postback": "https://assistant.google.com/"
          }
        ]
      }
    }
  ],
  "source": "example.com",
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "this is a simple response"
            }
          }
        ]
      }
    },
    "facebook": {
      "text": "Hello, Facebook!"
    },
    "slack": {
      "text": "This is a text response for Slack."
    }
  },
  "outputContexts": [
    {
      "name": "projects/${PROJECT_ID}/agent/sessions/${SESSION_ID}/contexts/context name",
      "lifespanCount": 5,
      "parameters": {
        "param": "param value"
      }
    }
  ],
  "followupEventInput": {
    "name": "event name",
    "languageCode": "en-US",
    "parameters": {
      "param": "param value"
    }
  }
}
Something I have done with Node.js:
'use strict';

const http = require('http');

exports.bookWebhook = (req, res) => {
  if (req.body.queryResult.intent.displayName == "search-book") {
    res.json({
      'fulfillmentText': getbookDetails(req.body.queryResult.parameters.gender)
    });
  }
};

function getbookDetails(gender) {
  // do your API search for the book here
  return "heard this book is gooooood";
}
In the getbookDetails function you can call your API, get the book, format the string, and return it. The same applies to Python as well; only the syntax will be different.
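A rough Python equivalent of the Node.js handler above might look like this (a sketch, not from the answer; the book lookup is stubbed out exactly as in the original getbookDetails, and the `gender` parameter name follows the Node.js example):

```python
# Sketch: Python translation of the Node.js bookWebhook handler above.
# It takes an already-parsed Dialogflow v2 request body and returns the
# response body a Flask view could jsonify.

def get_book_details(gender):
    # Call your Google Books API search here and format the result.
    return "heard this book is gooooood"

def book_webhook(body):
    """Handle a Dialogflow v2 webhook request body (parsed JSON)."""
    query_result = body["queryResult"]
    if query_result["intent"]["displayName"] == "search-book":
        gender = query_result["parameters"].get("gender")
        return {"fulfillmentText": get_book_details(gender)}
    # Unmatched intents get an empty response, as in the Node.js version.
    return {}
```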
Scenario I am trying to achieve:
When the user says "approvals", the bot has to talk to an API/webhook and respond with a list with a title and a small description:
Title 1
abcd
Title 2
efgh
and the user will click to select any one of them.
Integration type: website integration.
I would like to use Node.js for the v2 webhook; is there any sample specific to this?
I saw that in the v1 webhook there is just an option to send one text reply. I don't know, maybe v2 supports it; can anyone share a sample and some information?
return res.json({
  speech: 'text',
  displayText: 'title',
  source: 'getevents'
});
You can use Quick Replies Message Object in V1.
Just reply the following:
{
  'messages': [
    {
      'type': 2,
      'platform': 'line',
      'title': 'title',
      'replies': [
        'select one',
        'select one',
      ]
    },
  ]
}
The Dialogflow webhook documentation defines the JSON payload format used when Google Actions invokes your fulfillment through Dialogflow v2. Dialogflow natively doesn't support list rich responses, so you need to use the JSON format provided by Actions on Google.
Here is the sample code for the list template:
"messages": [
  {
    "items": [
      {
        "description": "Item One Description",
        "image": {
          "url": "http://imageOneUrl.com",
          "accessibilityText": "Image description for screen readers"
        },
        "optionInfo": {
          "key": "itemOne",
          "synonyms": [
            "thing one",
            "object one"
          ]
        },
        "title": "Item One"
      },
      {
        "description": "Item Two Description",
        "image": {
          "url": "http://imageTwoUrl.com",
          "accessibilityText": "Image description for screen readers"
        },
        "optionInfo": {
          "key": "itemTwo",
          "synonyms": [
            "thing two",
            "object two"
          ]
        },
        "title": "Item Two"
      }
    ],
    "platform": "google",
    "title": "Title",
    "type": "list_card"
  }
]
You can find out more from this source link.
And a tutorial on how to implement this using a fulfillment webhook can be found here.
But if you want to avoid this hassle, you can integrate Dialogflow with a third-party application such as Kommunicate to build rich messages. They provide ways to implement rich messages using custom payloads for Dialogflow and Google Assistant, support all types of rich messages (buttons, links, images, card carousels, etc.), and provide sample code for each. For more detailed information, check this article.
Disclaimer: I work for Kommunicate