While experimenting with webhook responses in Dialogflow, I return a mixed audio and spoken response. The Actions Test Console reads it out literally (that is, all the XML tags are read out loud), but when I click the Audio tab in the same test console to find out what's wrong with the XML, the console plays the sound and words correctly, as if nothing were wrong.
What could cause this?
Addendum:
This is the response I produce in JavaScript:
conv.ask(`<speak>Här kommer ljudet.</speak>` +
  `<speak><par><media xml:id="environment" end="effect.end" fadeOutDur="3.0s"><audio src="${ljud3}" /></media>` +
  `<media xml:id="effect"><audio src="${ljud1}" begin="2.0s" /></media></par></speak>`);
In the Audio tab of the Actions Console it looks like this, and it works as expected when I press "Update and listen":
<speak>Här kommer ljudet.</speak><speak><par><media xml:id="environment" end="effect.end" fadeOutDur="3.0s"><audio src="https://www.sigvardson.se/public/running_on_gravel.ogg" /></media><media xml:id="effect"><audio src="https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg" begin="2.0s" /> </media></par></speak>
And the Response tab in the console looks like this:
{"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "<speak>Här kommer ljudet.</speak><speak><par><media xml:id=\"environment\" end=\"effect.end\" fadeOutDur=\"3.0s\"><audio src=\"https://www.sigvardson.se/public/running_on_gravel.ogg\" /></media><media xml:id=\"effect\"><audio src=\"https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg\" begin=\"2.0s\" /> </media></par></speak>"
}
},
{
"simpleResponse": {
"textToSpeech": "<speak>Vill du höra <break time=\"500ms\"/> mer?</speak>"
}
}
],
"suggestions": [
{
"title": "ja"
},
{
"title": "nej"
}
]
}
}
}
}
The issue is that you have two <speak> tags in your response. If you change this to just have one <speak> tag around the entire thing, it should work better.
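For example, applying that fix to the webhook call from the question would collapse everything into one <speak> root; a sketch, keeping the original ljud1/ljud3 variables:
conv.ask(
  // A single <speak> root wraps both the spoken text and the <par> audio mix.
  `<speak>Här kommer ljudet.` +
  `<par><media xml:id="environment" end="effect.end" fadeOutDur="3.0s"><audio src="${ljud3}" /></media>` +
  `<media xml:id="effect"><audio src="${ljud1}" begin="2.0s" /></media></par>` +
  `</speak>`
);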
Related
I want to create a chatbot where the user (mostly) selects from suggestion chips.
I can't work out how to construct the suggestion chips in Flask.
The following yields null:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    two_chips = jsonify(
        fulfillment_text="This message is from Dialogflow's testing!",
        fulfillment_messages=[
            {
                "payload": {
                    "richContent": [
                        [
                            {
                                "type": "chips",
                                "options": [
                                    {
                                        "text": "HIV Testing Schedule",
                                        "link": "https://example.com"  # Links work, but I don't want links
                                    },
                                    {
                                        "link": "https://example.com",
                                        "text": "PreP"
                                    }
                                ]
                            }
                        ]
                    ]
                }
            }
        ])
    return two_chips
Ideally, clicking a chip would trigger a new action/intent and the bot would respond with more specific text. That is, what should I replace the link field with?
This link suggests that there is a replyMetadata field, but that seems to be specific to Kommunicate, not Google?
I looked at flask-dialogflow, but the documentation is too sparse and conflicting for me.
Those chips, which require a link, should be replaced by a list. List items are clickable and can trigger an intent via events (to make the bot respond with more specific text).
To get started, update your code to use lists, then add the event name you'd like to trigger. Then add that same event name to the Events section of the intent you want to trigger.
Here is an example of what that can look like. I tested a list and clicked on a list item to trigger a test event that ran my test intent.
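A sketch of such a custom payload, using the Dialogflow Messenger list response type; the event names hiv_testing_schedule and prep_info are placeholders that must also be added to the Events section of the intents they should trigger:
{
  "richContent": [
    [
      {
        "type": "list",
        "title": "HIV Testing Schedule",
        "event": {
          "name": "hiv_testing_schedule",
          "languageCode": "en",
          "parameters": {}
        }
      },
      {
        "type": "list",
        "title": "PreP",
        "event": {
          "name": "prep_info",
          "languageCode": "en",
          "parameters": {}
        }
      }
    ]
  ]
}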
Are you looking for suggestion chips?
The sample payload that you have shared is from Kommunicate [disclaimer: I am the founder of Kommunicate], and it is specific to the Kommunicate platform's link buttons. It seems what you are looking for is direct buttons/suggestion chips; here is the relevant Kommunicate doc: https://docs.kommunicate.io/docs/message-types#suggested-replies
Since Kommunicate is omnichannel and supports multiple platforms (web, Android, iOS, WhatsApp, LINE, Facebook, etc.), it supports its own rich message payload along with the Dialogflow-specific payload.
For Dialogflow-specific suggestion chips, use:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "These are suggestion chips."
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which type of response would you like to see next?"
            }
          }
        ],
        "suggestions": [
          {
            "title": "Suggestion 1"
          },
          {
            "title": "Suggestion 2"
          },
          {
            "title": "Suggestion 3"
          }
        ],
        "linkOutSuggestion": {
          "destinationName": "Suggestion Link",
          "url": "https://assistant.google.com/"
        }
      }
    }
  }
}
Source: https://developers.google.com/assistant/conversational/df-asdk/rich-responses#df-json-suggestion-chips
I'm working on a very simple Dialogflow agent with about 15-20 intents. All of these intents use a text response except one, called 'repeat', which should repeat whatever Google Assistant previously said.
I've tried to set this up using Multivocal but have not been successful. When I type a command into the test simulator I'll get the initial response, but when I follow up with 'repeat' the default response of 'Not available' is returned. The webhook times out when I look at the Diagnostic Info. My sense is that I've configured something wrong because I've read these answers and not been able to solve my problem:
How to repeat last response of bot in dialogflow
Dialogflow - Repeat last sentence (voice) for Social Robot Elderly
Use multivocal library to configure repeat intent in Dialogflow for VUI
I'm using the inline editor within Dialogflow; my index.js looks like:
const Multivocal = require('multivocal');

const conf = {
  Local: {
    en: {
      Response: {
        "Action.multivocal.repeat": "Let me try again"
      }
    }
  }
};

new Multivocal.Config.Simple( conf );

exports.webhook = Multivocal.processFirebaseWebhook;
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
And my package.json includes the Multivocal dependency:
"multivocal": "^0.15.0"
My understanding based on the above SO questions is that these config values would be enough and I don't need to do any coding, but I'm clearly screwing something (many things?) up. How can I get the prior response in Google Assistant to repeat when a user says 'repeat' or something similar? Multivocal seems like a simple solution, if I can do it that way.
Additional logs:
Fulfillment request (removed project id information):
{
  "responseId": "--",
  "queryResult": {
    "queryText": "repeat",
    "action": "multivocal.repeat",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "outputContexts": [
      {
        "name": "project info",
        "parameters": {
          "no-input": 0,
          "no-match": 0
        }
      }
    ],
    "intent": {
      "name": "project info",
      "displayName": "repeat"
    },
    "intentDetectionConfidence": 1,
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {
    "payload": {}
  },
  "session": "project info"
}
Raw API response (removed project and response id)
{
  "responseId": "",
  "queryResult": {
    "queryText": "repeat",
    "action": "multivocal.repeat",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "intent": {
      "name": "projects info",
      "displayName": "repeat"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {
      "webhook_latency_ms": 4992
    },
    "languageCode": "en"
  },
  "webhookStatus": {
    "code": 4,
    "message": "Webhook call failed. Error: DEADLINE_EXCEEDED."
  }
}
Here is the simple intent I've added, based on the recommendation that, for repeat to work, an intent must use fulfillment rather than a text response defined in Dialogflow.
Here is my index.js file, using the inline editor, with the suggested text responses added to the config:
const conf = {
  Local: {
    en: {
      Response: {
        "Intent.help": [
          "I'm sorry, I'm not able to help you.",
          "You, John, Paul, George, and Ringo ey?"
        ],
        "Action.multivocal.repeat": "Let me try again"
      }
    }
  }
};
These lines at the end of my index.js seem odd to me, but that may be unrelated:
exports.webhook = Multivocal.processFirebaseWebhook;
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
It sounds like you're triggering the Fallback Intent. You also need an Intent defined in Dialogflow that has its Action field set to "multivocal.repeat".
In the dialogflow directory of the npm package (or on GitHub) you'll find a zip file with this and several other "standard" Intents that you can use with Multivocal.
Additionally, all the other Intents that you want to be repeated must use fulfillment to send the response (the library doesn't know what might be sent unless it can send it itself). The simplest way to do this is to enable fulfillment on each, and move the text responses from their Dialogflow screens into the configuration under an entry such as "Intent.name" (replacing "name" with the name of the Intent) or "Action.name" if you set an action name for them.
So your configuration might be something like
const conf = {
  Local: {
    en: {
      Response: {
        "Intent.welcome": [
          "Hello there!",
          "Welcome to my Action!"
        ],
        "Action.multivocal.repeat": [
          "Let me try again"
        ]
      }
    }
  }
};
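Putting that together with the code from the question, the complete inline-editor index.js might look something like this sketch (the "Intent.welcome" entry is illustrative; replace it with your own intent names):
const Multivocal = require('multivocal');

const conf = {
  Local: {
    en: {
      Response: {
        // Responses live in the config rather than in the Dialogflow
        // screens, so the library can record and later repeat them.
        "Intent.welcome": [
          "Hello there!",
          "Welcome to my Action!"
        ],
        "Action.multivocal.repeat": [
          "Let me try again"
        ]
      }
    }
  }
};

new Multivocal.Config.Simple( conf );

// The inline editor expects the fulfillment export under this name.
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;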
So I have been trying to get started creating chatbots on Dialogflow. The issue I am running into is that my chatbot is meant to give users a set of options to select from and take the conversation further from there. To implement that, I've used suggestion chips. However, when testing, the bot detects the right intent but returns an empty response. I've included the JSON payload I am using, in case that helps.
{
  "richContent": [
    [
      {
        "options": [
          {
            "text": "Chip 1",
            "image": {
              "src": {
                "rawUrl": "https://example.com/images/logo.png"
              }
            },
            "link": "https://example.com"
          },
          {
            "link": "https://example.com",
            "text": "Chip 2",
            "image": {
              "src": {
                "rawUrl": "https://example.com/images/logo.png"
              }
            }
          }
        ],
        "type": "chips"
      }
    ]
  ]
}
If you are testing it in the console, you won't get any response; there is an issue with the way the console works. Use Dialogflow Messenger to test it. In any case, in actual production you won't be using the simulator: the Dialogflow console is a simulator with some capability restrictions, but the payload will work when you use the Messenger.
I have created a simple intent in Dialogflow where the user asks how old someone is (e.g., "How old is David Beckham?").
This is then sent to a Cloud Run webhook that returns a response: "How am I meant to know?".
When I test this in the Dialogflow console, the webhook call works and the JSON response is returned from Cloud Run to Dialogflow:
FULFILLMENT RESPONSE
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "How am I meant to know?!"
            }
          }
        ]
      }
    }
  }
}
The problem is that Dialogflow is not responding with "How am I meant to know?" in the console; it's not responding with anything.
Is there something else I need to do for this to happen? I assumed it would happen automatically.
That response is only for Google Assistant; you need to add a "default" response in this format:
{
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "Text response from webhook"
        ]
      }
    }
  ]
}
Every integration has a different response format, but they can be used together.
Check the docs:
https://cloud.google.com/dialogflow/docs/fulfillment-webhook
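For example, a single webhook response can carry both the default text and the Google Assistant payload side by side; a sketch combining the two formats above:
{
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "How am I meant to know?!"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "How am I meant to know?!"
            }
          }
        ]
      }
    }
  }
}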
I am trying to send a custom payload in a Dialogflow intent. When I select the custom payload option available under Google Assistant, it gives the following predefined JSON format:
{
  "google": {
  }
}
Now I am not aware of what I need to put in there in order to get a response. Any guidance would be helpful.
There are some compulsory keys to be added in the Rich Response JSON.
You must have suggestion chips and a simple response to maintain the follow-up of your Action; AoG rejects any Action with missing suggestion chips or a missing follow-up response.
Refer to this JSON for a Basic Card response:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here's an example of a basic card."
            }
          },
          {
            "basicCard": {
              "title": "Title: this is a title",
              "subtitle": "This is a subtitle",
              "formattedText": "This is a basic card. Text in a basic card can include \"quotes\" and\n most other unicode characters including emojis. Basic cards also support\n some markdown formatting like *emphasis* or _italics_, **strong** or\n __bold__, and ***bold itallic*** or ___strong emphasis___ as well as other\n things like line \nbreaks",
              "image": {
                "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                "accessibilityText": "Image alternate text"
              },
              "buttons": [
                {
                  "title": "This is a button",
                  "openUrlAction": {
                    "url": "https://assistant.google.com/"
                  }
                }
              ],
              "imageDisplayOptions": "CROPPED"
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Which response would you like to see next?"
            }
          }
        ]
      }
    }
  }
}
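Given the note above that suggestion chips are compulsory, you would typically also append a suggestions array inside richResponse; a minimal sketch (the chip titles here are illustrative):
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here's an example of a basic card."
            }
          }
        ],
        "suggestions": [
          {
            "title": "Next example"
          },
          {
            "title": "Done"
          }
        ]
      }
    }
  }
}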
You can refer to the specific Rich Response JSON for your Action in the following documentation:
https://developers.google.com/assistant/conversational/rich-responses#df-json-basic-card