Dialogflow webhook failure - dialogflow-es

I just integrated Dialogflow with WhatsApp, but some of the webhooks are failing to send responses and I need a way around this problem. The Telegram integration works well, but the WhatsApp version fails badly.
{
"responseId": "1d7878da-1909-4189-aa12-edd87bcbdbfd-21554733",
"queryResult": {
"queryText": "07089456758",
"action": "input.welcome",
"parameters": {
"phonenumber": "07089456758"
},
"allRequiredParamsPresent": true,
"fulfillmentMessages": [
{
"payload": {
"telegram": {
"text": "Hi! �� Welcome to Sendme. What would you like to order today?",
"parse_mode": "html"
}
},
"platform": "TELEGRAM"
},
{
"text": {
"text": [
""
]
}
}
],
"outputContexts": [
{
"name": "projects/sendme-bot-vrgl/locations/global/agent/sessions/c5173fc0-613e-cf88-ea69-e5615b38c1dc/contexts/auth",
"lifespanCount": 5,
"parameters": {
"phonenumber.original": "07089456758",
"phonenumber": "07089456758"
}
}
],
"intent": {
"name": "projects/sendme-bot-vrgl/locations/global/agent/intents/4bf9dd88-e6dd-4ff2-9c4b-8d49bf45ef77",
"displayName": "welcome"
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {
"webhook_latency_ms": 4312
},
"languageCode": "en",
"sentimentAnalysisResult": {
"queryTextSentiment": {
"score": 0.4,
"magnitude": 0.4
}
}
},
"webhookStatus": {
"code": 14,
"message": "Webhook call failed. Error: UNAVAILABLE, State: URL_UNREACHABLE, Reason: UNREACHABLE_5xx, HTTP status code: 500."
},
"agentId": "6b1be60b-e9f3-4ae4-a983-5c4d7c9734f8",
"agentSettings": {
"enableAgentWideKnowledgeConnector": true
}
}
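For reference, the webhookStatus above means the fulfillment endpoint itself returned HTTP 500 (UNREACHABLE_5xx), and the 4312 ms webhook latency is already close to Dialogflow's 5-second fulfillment timeout. Below is a minimal sketch, assuming a Node.js/Express webhook (the route path and buildReply helper are hypothetical), of keeping the handler fast and catching exceptions so the endpoint never surfaces a 5xx to the WhatsApp integration:

// Minimal defensive Dialogflow ES webhook sketch (assumed Express app).
// The route path and buildReply() helper are hypothetical placeholders.
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical helper: build the reply text for a matched intent.
async function buildReply(intentName, parameters) {
  if (intentName === 'welcome') {
    return `Hi! Welcome to Sendme. We have your number as ${parameters.phonenumber}.`;
  }
  return 'What would you like to order today?';
}

app.post('/dialogflow-webhook', async (req, res) => {
  try {
    const { intent, parameters } = req.body.queryResult;
    // Keep this work well under Dialogflow's ~5 s fulfillment timeout.
    const reply = await buildReply(intent.displayName, parameters);
    res.json({ fulfillmentText: reply });
  } catch (err) {
    // Never let an exception bubble up as a 5xx: return 200 with a fallback
    // message so the WhatsApp user still gets a response.
    console.error('Webhook error:', err);
    res.json({ fulfillmentText: 'Sorry, something went wrong. Please try again.' });
  }
});

app.listen(process.env.PORT || 8080);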

Related

get specific intent using events in Dialogflow Essentials

I am trying to get a specific intent using events in Dialogflow Essentials. Below is the request:
{
"queryInput": {
"event": {
"name": "start",
"languageCode": "en"
}
}
}
response:
{
"responseId": "4f2e-8de0-e5ae7ef17a60-32d6a6f2",
"queryResult": {
"action": "input.unknown",
"parameters": {},
"allRequiredParamsPresent": true
}
}
The same intent works when I use text input. I would like to make it work using an event as well:
{
"queryInput": {
"text": {
"text": "start",
"languageCode": "en"
}
}
}
response:
{
"responseId": "4e9b-b131-f5598b8d7f11-32d6a6f2",
"queryResult": {
"queryText": "start",
"parameters": {},
"allRequiredParamsPresent": true
}
}
Make sure that the intent has an event attached to it so you can detect intents based on events.
Intent configuration:
Request body:
{
"queryInput": {
"event": {
"name": "test",
"languageCode": "en"
}
}
}
Response when using the detectIntent endpoint:
{
"responseId": "7330fd68-82d1-4fa5-b5a1-555a9d4f649b-32d6a6f2",
"queryResult": {
"queryText": "test",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "From API",
"fulfillmentMessages": [
{
"text": {
"text": [
"From API"
]
}
}
],
"outputContexts": [
{
"name": "projects/xxxxxx/agent/sessions/1234/contexts/__system_counters__",
"lifespanCount": 1,
"parameters": {
"no-input": 0,
"no-match": 0
}
}
],
"intent": {
"name": "projects/xxxxxx/agent/intents/05356807-86f4-4a13-8079-6745b110a4d5",
"displayName": "test intent"
},
"intentDetectionConfidence": 1,
"languageCode": "en"
}
}
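For completeness, here is a minimal sketch of triggering the same event-driven intent from Node.js with the @google-cloud/dialogflow client library (the project ID and session ID below are placeholders):

// Sketch: detect an intent by event name using the Dialogflow ES Node.js client.
const dialogflow = require('@google-cloud/dialogflow');

async function triggerEvent(projectId, sessionId) {
  const sessionClient = new dialogflow.SessionsClient();
  const sessionPath = sessionClient.projectAgentSessionPath(projectId, sessionId);

  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: {
      // The target intent must have this event attached in the Dialogflow console.
      event: { name: 'test', languageCode: 'en' },
    },
  });

  console.log(response.queryResult.fulfillmentText); // "From API" in the example above
}

triggerEvent('xxxxxx', '1234').catch(console.error);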

Facing an error while building a custom skill for Amazon Alexa

I am trying to build a basic custom Alexa skill. I have created an intent schema and am using an AWS Lambda function as the endpoint.
My Intent schema:
{
"interactionModel": {
"languageModel": {
"invocationName": "toit brewpub",
"modelConfiguration": {
"fallbackIntentSensitivity": {
"level": "LOW"
}
},
"intents": [
{
"name": "AMAZON.FallbackIntent",
"samples": []
},
{
"name": "AMAZON.CancelIntent",
"samples": []
},
{
"name": "AMAZON.HelpIntent",
"samples": []
},
{
"name": "AMAZON.StopIntent",
"samples": []
},
{
"name": "AMAZON.NavigateHomeIntent",
"samples": []
},
{
"name": "GetClosingTime",
"slots": [],
"samples": [
"what time do you close",
"when is the last order",
"till what time are you open",
"What time does the pub close"
]
},
{
"name": "GetPriceOfBeer",
"slots": [
{
"name": "beer",
"type": "BEERS"
}
],
"samples": [
"how much is {beer}",
"what is the price of {beer}"
]
}
],
"types": [
{
"name": "BEERS",
"values": [
{
"name": {
"value": "Toit Red"
}
},
{
"name": {
"value": "Tiot Weiss"
}
},
{
"name": {
"value": "Basmati Blonde"
}
},
{
"name": {
"value": "Tintin Toit"
}
},
{
"name": {
"value": "IPA"
}
},
{
"name": {
"value": "Dark Knight"
}
}
]
}
]
}
}
}
I am using Node.js v10.x for my Lambda function, which has been built using the Alexa-Skills-NodeJS-Fact-kit. The region for my AWS Lambda is us-east-1 (N. Virginia).
Below is the request I receive when I talk to my Test Simulator:
{
"version": "1.0",
"session": {
"new": false,
"sessionId": "amzn1.echo-api.session.fd1c5315-ecf8-413f-ba25-e54bd6ae316a",
"application": {
"applicationId": "amzn1.ask.skill.72615503-5f38-4baf-b0dd-cd6edd3b6dfd"
},
"user": {
"userId": ""
}
},
"context": {
"System": {
"application": {
"applicationId": "amzn1.ask.skill.72615503-5f38-4baf-b0dd-cd6edd3b6dfd"
},
"user": {
"userId": ""
},
"device": {
"deviceId": "",
"supportedInterfaces": {}
},
"apiEndpoint": "https://api.eu.amazonalexa.com",
"apiAccessToken": ""
},
"Viewport": {
"experiences": [
{
"arcMinuteWidth": 246,
"arcMinuteHeight": 144,
"canRotate": false,
"canResize": false
}
],
"shape": "RECTANGLE",
"pixelWidth": 1024,
"pixelHeight": 600,
"dpi": 160,
"currentPixelWidth": 1024,
"currentPixelHeight": 600,
"touch": [
"SINGLE"
],
"video": {
"codecs": [
"H_264_42",
"H_264_41"
]
}
},
"Viewports": [
{
"type": "APL",
"id": "main",
"shape": "RECTANGLE",
"dpi": 160,
"presentationType": "STANDARD",
"canRotate": false,
"configuration": {
"current": {
"video": {
"codecs": [
"H_264_42",
"H_264_41"
]
},
"size": {
"type": "DISCRETE",
"pixelWidth": 1024,
"pixelHeight": 600
}
}
}
}
]
},
"request": {
"type": "SessionEndedRequest",
"requestId": "amzn1.echo-api.request.24b64895-3f90-4a5b-9805-9d3b038cd323",
"timestamp": "2020-03-29T08:59:54Z",
"locale": "en-US",
"reason": "ERROR",
"error": {
"type": "INVALID_RESPONSE",
"message": "An exception occurred while dispatching the request to the skill."
}
}
}
I have removed the user ID, device ID, and access token from the question for security reasons.
My Lambda Node.js function, which I generated using the code generator, looks like this:
https://github.com/shreyneil/Episodes/blob/master/amazon-echo/lambda-function.js
URL for the code generator: http://alexa.codegenerator.s3-website-us-east-1.amazonaws.com/
URL for the tutorial I was following: https://www.youtube.com/watch?v=BB3wwxgqPOU
Whenever I try to launch the skill by saying "open toit brewpub" in my test simulator, it throws an error stating:
There was a problem with the requested skill's response
Any idea on how to make this work?
Any leads would be appreciated, thank you!
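For reference, a SessionEndedRequest with reason ERROR and error type INVALID_RESPONSE means the skill's Lambda threw an exception or returned something the Alexa service could not parse. Below is a minimal sketch, not taken from the linked generator code, of an ask-sdk-core handler that covers the launch request, one of the custom intents, and an error handler (the speech text is illustrative):

// Sketch: minimal ASK SDK v2 Lambda handler for the interaction model above.
const Alexa = require('ask-sdk-core');

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Welcome to Toit Brewpub. You can ask about closing time or beer prices.')
      .reprompt('What would you like to know?')
      .getResponse();
  },
};

const GetClosingTimeHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetClosingTime';
  },
  handle(handlerInput) {
    // Illustrative answer; replace with the real closing time.
    return handlerInput.responseBuilder.speak('We close at eleven pm.').getResponse();
  },
};

const ErrorHandler = {
  canHandle() { return true; },
  handle(handlerInput, error) {
    // Logging here surfaces the real exception in CloudWatch instead of an
    // opaque "problem with the requested skill's response" in the simulator.
    console.error('Skill error:', error);
    return handlerInput.responseBuilder
      .speak('Sorry, something went wrong. Please try again.')
      .getResponse();
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler, GetClosingTimeHandler)
  .addErrorHandlers(ErrorHandler)
  .lambda();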

Malformed Response: Failed to parse Dialogflow response into AppResponse because of empty speech response

I am using a Firebase function for webhook fulfillment in Dialogflow. I get "Webhook execution successful" as the fulfillment status, but it is not working. I am using version 1. When I test it in the Google Assistant simulator, it says "App is not responding".
Firebase function:
const functions = require('firebase-functions');

exports.webhook = functions.https.onRequest((request, response) => {
  response.send({
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Hey! Good to see you."
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "Exercises",
                  "description": "ex",
                  "largeImage": {
                    "url": "http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg",
                    "accessibilityText": "..."
                  },
                  "contentUrl": "https://theislam360.me:8080/hbd.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "chips"
          }
        ]
      }
    }
  });
});
When I copy and paste the response (from the "google" object to the end) into the custom payload field manually via the GUI, it works; with the webhook, it does not.
RAW API RESPONSE
{
"id": "eaf627ed-26b5-4965-b0b0-bc77144e144b",
"timestamp": "2019-04-15T11:54:18.948Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "play hbd",
"action": "",
"actionIncomplete": false,
"parameters": {
"any": "hbd"
},
"contexts": [],
"metadata": {
"isFallbackIntent": "false",
"webhookResponseTime": 34,
"intentName": "play",
"intentId": "e60071cd-ce31-4ef9-ae9b-cc370c3362b3",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false"
},
"fulfillment": {
"messages": []
},
"score": 1
},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "e91bd62f-766b-b19d-d37b-2917ac20caa6"
}
FULFILLMENT REQUEST
{
"id": "eaf627ed-26b5-4965-b0b0-bc77144e144b",
"timestamp": "2019-04-15T11:54:18.948Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "play hbd",
"speech": "",
"action": "",
"actionIncomplete": false,
"parameters": {
"any": "hbd"
},
"contexts": [],
"metadata": {
"intentId": "e60071cd-ce31-4ef9-ae9b-cc370c3362b3",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false",
"isFallbackIntent": "false",
"intentName": "play"
},
"fulfillment": {
"speech": "",
"messages": []
},
"score": 1
},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "e91bd62f-766b-b19d-d37b-2917ac20caa6"
}
FULFILLMENT RESPONSE
{
"google": {
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Hey! Good to see you."
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "Exercises",
"description": "ex",
"largeImage": {
"url": "http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg",
"accessibilityText": "..."
},
"contentUrl": "https://theislam360.me:8080/hbd.mp3"
}
]
}
}
],
"suggestions": [
{
"title": "chips"
}
]
}
}
}
FULFILLMENT STATUS
Webhook execution successful
Firebase Logs
Google Assistant Simulator Logs
You're not using the correct JSON in the response. When you put it in the "custom payload" section of the GUI, Dialogflow creates the larger JSON response for you. The google object needs to be under the data object for Dialogflow v1, or under payload for Dialogflow v2. (And if you haven't switched to v2, you should do so immediately, since v1 will be switched off in about a month.)
So what you're returning should look more like:
{
"payload": {
"google": {
...
}
}
}
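Applied to the Firebase function from the question, a v2-shaped response might look like the sketch below (the richResponse content is unchanged; expectUserResponse: true is added here on the assumption that the conversation should stay open):

const functions = require('firebase-functions');

exports.webhook = functions.https.onRequest((request, response) => {
  // Same richResponse as before, but nested under "payload" for Dialogflow v2.
  response.json({
    payload: {
      google: {
        expectUserResponse: true,
        richResponse: {
          items: [
            { simpleResponse: { textToSpeech: 'Hey! Good to see you.' } },
            {
              mediaResponse: {
                mediaType: 'AUDIO',
                mediaObjects: [
                  {
                    name: 'Exercises',
                    description: 'ex',
                    largeImage: {
                      url: 'http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg',
                      accessibilityText: '...',
                    },
                    contentUrl: 'https://theislam360.me:8080/hbd.mp3',
                  },
                ],
              },
            },
          ],
          suggestions: [{ title: 'chips' }],
        },
      },
    },
  });
});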

Dialogflow textToSpeech fulfilment not reading aloud the text

I am providing users with a response on an audio-only device (e.g. Google Home). When I respond with a textToSpeech field within a simpleResponse, the speech is not read out in the simulator.
Has anyone experienced this and does anyone know how to fix it?
I've tried different response types, but none of them read out the textToSpeech field.
I've also tried ticking/unticking the end conversation toggle in Dialogflow and setting expectUserResponse to true/false when responding with JSON, to no avail.
The response is currently fulfilled by a webhook that responds with a v2 fulfillment JSON blob; the simulator receives the response with no errors but does not read it out.
RESPONSE -
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Here are the 3 closest restaurants that match your criteria,"
}
}
]
}
}
}
}
REQUEST -
{
"responseId": "404f3b65-73a5-47db-9c17-0fc8b31560a5",
"queryResult": {
"queryText": "actions_intent_NEW_SURFACE",
"parameters": {},
"allRequiredParamsPresent": true,
"outputContexts": [
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/findrestaurantswithcuisineandlocation-followup",
"lifespanCount": 98,
"parameters": {
"location.original": "Shoreditch",
"cuisine.original": "international",
"cuisine": "International",
"location": {
"subadmin-area": "Shoreditch",
"subadmin-area.original": "Shoreditch",
"subadmin-area.object": {}
}
}
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_capability_account_linking"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_capability_audio_output"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/google_assistant_input_type_voice"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_capability_media_response_audio"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_intent_new_surface",
"parameters": {
"text": "no",
"NEW_SURFACE": {
"#type": "type.googleapis.com/google.actions.v2.NewSurfaceValue",
"status": "CANCELLED"
}
}
}
],
"intent": {
"name": "projects/my-project/agent/intents/0baefc9d-689c-4c33-b2b8-4e130f626de1",
"displayName": "Send restaurants to mobile"
},
"intentDetectionConfidence": 1,
"languageCode": "en-us"
},
"originalDetectIntentRequest": {
"source": "google",
"version": "2",
"payload": {
"isInSandbox": true,
"surface": {
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.ACCOUNT_LINKING"
}
]
},
"requestType": "SIMULATOR",
"inputs": [
{
"rawInputs": [
{
"query": "no",
"inputType": "VOICE"
}
],
"arguments": [
{
"extension": {
"#type": "type.googleapis.com/google.actions.v2.NewSurfaceValue",
"status": "CANCELLED"
},
"name": "NEW_SURFACE"
},
{
"rawText": "no",
"textValue": "no",
"name": "text"
}
],
"intent": "actions.intent.NEW_SURFACE"
}
],
"user": {
"userStorage": "{\"data\":{}}",
"lastSeen": "2019-04-12T14:31:23Z",
"locale": "en-US",
"userId": "userID"
},
"conversation": {
"conversationId": "sessionId",
"type": "ACTIVE",
"conversationToken": "[\"defaultwelcomeintent-followup\",\"findrestaurantswithcuisineandlocation-followup\",\"findrestaurantswithcuisineandlocation-followup-2\"]"
},
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
}
]
}
]
}
},
"session": "projects/my-project/agent/sessions/sessionId"
}
I expect the simulator to read out the textToSpeech value, but it currently does not.
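For comparison, here is a minimal sketch of producing the same simpleResponse through the actions-on-google client library instead of hand-building the JSON, assuming the fulfillment runs as a Firebase HTTPS function (the export name is illustrative). The library wraps the payload and sets expectUserResponse for you, and the intent display name matches the request above:

// Sketch: same speech via the actions-on-google Dialogflow v2 client library.
const { dialogflow, SimpleResponse } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Send restaurants to mobile', (conv) => {
  // conv.ask keeps the conversation open and sets textToSpeech on a simpleResponse.
  conv.ask(new SimpleResponse({
    speech: 'Here are the 3 closest restaurants that match your criteria,',
    text: 'Here are the 3 closest restaurants that match your criteria,',
  }));
});

exports.fulfillment = functions.https.onRequest(app);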

dialogflow webhook not considering payload using fulfillmentText, fulfillmentMessages

FULFILLMENT REQUEST
{
"responseId": "4955f972-058c-44c2-a9c6-fe2c1d846fcd",
"queryResult": {
"queryText": "dsnaf",
"action": "intentNotMatched",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "I think I may have misunderstood your last statement.",
"fulfillmentMessages": [
{
"text": {
"text": [
"I'm afraid I don't understand."
]
}
}
],
"outputContexts": [
{
"name": "****",
"lifespanCount": 1
}
],
"intent": {
"name": "****",
"displayName": "Default Fallback Intent",
"isFallback": true
},
"intentDetectionConfidence": 1,
"languageCode": "en"
},
"originalDetectIntentRequest": {
"payload": {}
},
"session": "****"
}
FULFILLMENT RESPONSE
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "I'm sorry. I didn't quite grasp what you just said."
}
}
]
},
"userStorage": "{\"data\":{}}"
}
},
"outputContexts": [
{
"name": "***",
"lifespanCount": 99,
"parameters": {
"data": "{}"
}
}
]
}
RAW API RESPONSE
{
"responseId": "4955f972-058c-44c2-a9c6-fe2c1d846fcd",
"queryResult": {
"queryText": "dsnaf",
"action": "intentNotMatched",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentMessages": [
{
"text": {
"text": [
"I'm afraid I don't understand."
]
}
}
],
"webhookPayload": {
"google": {
"userStorage": "{\"data\":{}}",
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "I'm sorry. I didn't quite grasp what you just said."
}
}
]
},
"expectUserResponse": true
}
},
"outputContexts": [
{
"name": "*****",
"lifespanCount": 99,
"parameters": {
"data": "{}"
}
},
{
"name": "******",
"lifespanCount": 1
}
],
"intent": {
"name": "****",
"displayName": "Default Fallback Intent",
"isFallback": true
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {
"webhook_latency_ms": 286
},
"languageCode": "en"
},
"webhookStatus": {
"message": "Webhook execution successful"
}
}
Google Assistant response
USER SAYS dsnaf
DEFAULT RESPONSE I'm afraid I don't understand.
CONTEXTS
_actions_on_google,initial_chat
INTENT Default Fallback Intent
In the Google Assistant response I get the default fulfillmentText instead of the payload google richResponse.
Where are you testing this? If you're in the test console, it's always going to show you the simple text response. You'll need to specifically select Google Assistant in the test console to see the rich response for that platform.
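In practice, a webhook response that carries both a plain fulfillmentText (what the default test console shows) and the Google-specific payload (what the Assistant integration uses) keeps every surface covered. A minimal sketch of that shape, reusing the speech from the question:

// Sketch: webhook response carrying both the generic text and the Google payload.
// Dialogflow's default test console displays fulfillmentText; selecting the
// Google Assistant platform shows payload.google.richResponse instead.
const webhookResponse = {
  fulfillmentText: "I'm sorry. I didn't quite grasp what you just said.",
  payload: {
    google: {
      expectUserResponse: true,
      richResponse: {
        items: [
          {
            simpleResponse: {
              textToSpeech: "I'm sorry. I didn't quite grasp what you just said.",
            },
          },
        ],
      },
    },
  },
};

// e.g. in an Express handler: res.json(webhookResponse);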
