I am working on a Dialogflow chatbot. I am on the Enterprise Essentials plan and therefore have access to sentiment analysis, which works fine from the simulator panel of the Dialogflow console. However, when I switch to testing my bot on Google Assistant, my fulfillment code stops extracting the sentiment score from the request sent by Dialogflow.
Looking into the request sent in both cases (Google Assistant vs. plain Dialogflow), I can see that the sentiment analysis results are in fact missing from the request object once you switch to testing on Google Assistant.
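For now I read the score defensively so the webhook doesn't crash when the block is missing; a sketch of what I'm doing (the helper name is mine):

```javascript
// Hypothetical helper: safely read the sentiment score from a Dialogflow
// webhook request body, returning null when sentimentAnalysisResult is
// absent (as it is for Google Assistant traffic).
function getSentimentScore(body) {
  const result = body.queryResult && body.queryResult.sentimentAnalysisResult;
  return result && result.queryTextSentiment
    ? result.queryTextSentiment.score
    : null;
}
```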
The two request objects are as follows.
Here is the first Dialogflow request, not using Google Assistant (note the queryTextSentiment node near the bottom):
{
"responseId": "b76b18c6-7640-4322-b8e5-2db74cc22656-b55300fa",
"queryResult": {
"queryText": "Very difficult",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "how do you describe the assignments of this course?",
"fulfillmentMessages": [
{
"text": {
"text": [
"how do you describe the assignments of this course?"
]
}
}
],
"outputContexts": [
{
"name": "projects/labeeb-nlddnb/agent/sessions/d66952d9-05c9-bf8a-e083-fa7ccc0bebdd/contexts/assignments_ctx",
"lifespanCount": 5
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/d66952d9-05c9-bf8a-e083-fa7ccc0bebdd/contexts/difficulty_ctx",
"lifespanCount": 4,
"parameters": {
"course_name": "Machine Learning",
"course_name.original": "Machine Learning"
}
}
],
"intent": {
"name": "projects/labeeb-nlddnb/agent/intents/d2d12691-624a-43cd-8b88-b3c7116831dd",
"displayName": "difficulty"
},
"intentDetectionConfidence": 1,
"languageCode": "en",
"sentimentAnalysisResult": {
"queryTextSentiment": {
"score": -0.5,
"magnitude": 0.5
}
}
},
"originalDetectIntentRequest": {
"payload": {}
},
"session": "projects/labeeb-nlddnb/agent/sessions/d66952d9-05c9-bf8a-e083-fa7ccc0bebdd"
}
And here is the request Dialogflow sends when I test from Google Assistant; Dialogflow does not set any sentiment analysis results in this case:
{
"responseId": "ce8600dc-4364-48c2-a85b-3acb4cab589e-b55300fa",
"queryResult": {
"queryText": "Very difficult",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "how do you describe the assignments of this course?",
"fulfillmentMessages": [
{
"text": {
"text": [
"how do you describe the assignments of this course?"
]
}
}
],
"outputContexts": [
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/assignments_ctx",
"lifespanCount": 5
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/actions_capability_account_linking"
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/actions_capability_media_response_audio"
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/actions_capability_audio_output"
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/actions_capability_web_browser"
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/actions_capability_screen_output"
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/google_assistant_input_type_keyboard"
},
{
"name": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ/contexts/difficulty_ctx",
"lifespanCount": 4,
"parameters": {
"course_name": "Machine Learning",
"course_name.original": "Machine Learning"
}
}
],
"intent": {
"name": "projects/labeeb-nlddnb/agent/intents/d2d12691-624a-43cd-8b88-b3c7116831dd",
"displayName": "difficulty"
},
"intentDetectionConfidence": 1,
"languageCode": "en"
},
"originalDetectIntentRequest": {
"source": "google",
"version": "2",
"payload": {
"user": {
"locale": "en-US",
"lastSeen": "2019-07-24T16:47:07Z",
"userVerificationStatus": "VERIFIED"
},
"conversation": {
"conversationId": "ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ",
"type": "ACTIVE",
"conversationToken": "[\"difficulty_ctx\"]"
},
"inputs": [
{
"intent": "actions.intent.TEXT",
"rawInputs": [
{
"inputType": "KEYBOARD",
"query": "Very difficult"
}
],
"arguments": [
{
"name": "text",
"rawText": "Very difficult",
"textValue": "Very difficult"
}
]
}
],
"surface": {
"capabilities": [
{
"name": "actions.capability.ACCOUNT_LINKING"
},
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
}
]
},
"isInSandbox": true,
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
}
]
}
],
"requestType": "SIMULATOR"
}
},
"session": "projects/labeeb-nlddnb/agent/sessions/ABwppHHtMXYQddZeGRTOy0mDfanYvokXr8s72lTD9omqiMy73G3B0JaA0DwvErTRc0HkvhPTmA-CIcAXuQ"
}
Does anyone have an explanation for this?
Sentiment Analysis is not supported for queries originating from Google Assistant.
You can find this statement in the settings for your Dialogflow agent on dialogflow.com, in the Advanced tab (just below the toggle that enables sentiment analysis).
In fact, sentiment is not logged either, even outside of Google Assistant. Here is the log from Operations -> Logging in GCP, which is where Dialogflow interactions are logged when you turn on logging. The log there is not the same as the "Raw Interaction Log" you get from the Dialogflow UI, unfortunately; it is less detailed.
Note that the interesting stuff is all packed into textPayload, in a very hard-to-parse text string. All that's there is a score variable, which is the intent-detection confidence score. Thank goodness that's in there at least!
The only other way to grab the sentiment score would be to create a custom fulfillment that runs on every intent detection (which is a bummer because it will slow everything down a bit) and logs the score separately.
:(
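A minimal sketch of that fulfillment-side workaround, assuming the Cloud Natural Language client library (@google-cloud/language) is installed and its LanguageServiceClient is passed in as `client` (the function name is mine):

```javascript
// Score the query text yourself inside the fulfillment, since Dialogflow
// omits sentimentAnalysisResult for Assistant traffic.
// `client` is assumed to be a LanguageServiceClient from
// @google-cloud/language; analyzeSentiment resolves to [result], where
// result.documentSentiment holds { score, magnitude }.
async function analyzeQuerySentiment(queryText, client) {
  const [result] = await client.analyzeSentiment({
    document: { content: queryText, type: 'PLAIN_TEXT' },
  });
  return result.documentSentiment;
}
```

You could then log `score` and `magnitude` wherever suits you, at the cost of one extra API round-trip per intent detection.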
{
"textPayload": "Dialogflow Response : id: \"100aea48-26f8-46b5-ab3a-e48febb0dd1f-e13762d2\"\nlang: \"en\"\nsession_id: \"2e63bd78-79f3-7b02-6efd-5d9f96e1c382\"\ntimestamp: \"2020-06-28T14:08:49.563Z\"\nresult {\n source: \"agent\"\n resolved_query: \"thank you bye\"\n score: 0.68635064\n parameters {\n }\n contexts {\n name: \"__system_counters__\"\n lifespan: 1\n parameters {\n fields {\n key: \"no-input\"\n value {\n number_value: 0.0\n }\n }\n fields {\n key: \"no-match\"\n value {\n number_value: 0.0\n }\n }\n }\n }\n metadata {\n intent_id: \"e5b0d316-d0a0-40c4-b0c3-b0565712aee4\"\n intent_name: \"Closing\"\n webhook_used: \"false\"\n webhook_for_slot_filling_used: \"false\"\n is_fallback_intent: \"false\"\n }\n fulfillment {\n speech: \"closing intent matched\"\n messages {\n lang: \"en\"\n type {\n number_value: 0.0\n }\n speech {\n string_value: \"closing intent matched\"\n }\n }\n }\n}\nstatus {\n code: 200\n error_type: \"success\"\n}\n",
"insertId": "1k1ucxsfhckbar",
"resource": {
"type": "global",
"labels": {
"project_id": "qualitymanagement-eooojp"
}
},
"timestamp": "2020-06-28T14:08:49.574Z",
"severity": "INFO",
"labels": {
"protocol": "V2",
"request_id": "100aea48-26f8-46b5-ab3a-e48febb0dd1f-e13762d2",
"type": "dialogflow_response"
},
"logName": "projects/qualitymanagement-eooojp/logs/dialogflow_agent",
"trace": "2e63bd78-79f3-7b02-6efd-5d9f96e1c382",
"receiveTimestamp": "2020-06-28T14:08:49.589705622Z"
}
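If you do need the confidence score out of these log entries, one hedged option is a small parser over textPayload (the function name is mine; the field name matches the log above):

```javascript
// Pull the intent-detection score out of the textPayload string from a
// Cloud Logging entry, since that payload is not structured JSON.
function extractScore(textPayload) {
  const match = textPayload.match(/\bscore:\s*([0-9.]+)/);
  return match ? parseFloat(match[1]) : null;
}
```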
Related
Trying to get specific intent using events in Dialogflow Essentials
Below is the request:
{
"queryInput": {
"event": {
"name": "start",
"languageCode": "en"
}
}
}
response:
{
"responseId": "4f2e-8de0-e5ae7ef17a60-32d6a6f2",
"queryResult": {
"action": "input.unknown",
"parameters": {},
"allRequiredParamsPresent": true
}
}
The same works when using text. I would like to make it work using an event as well:
{
"queryInput": {
"text": {
"text": "start",
"languageCode": "en"
}
}
}
response:
{
"responseId": "4e9b-b131-f5598b8d7f11-32d6a6f2",
"queryResult": {
"queryText": "start",
"parameters": {},
"allRequiredParamsPresent": true
}
}
Make sure that the intent has an event attached to it so you can detect intents based on events.
Intent configuration:
Request body:
{
"queryInput": {
"event": {
"name": "test",
"languageCode": "en"
}
}
}
Response when using the detectIntent endpoint:
{
"responseId": "7330fd68-82d1-4fa5-b5a1-555a9d4f649b-32d6a6f2",
"queryResult": {
"queryText": "test",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "From API",
"fulfillmentMessages": [
{
"text": {
"text": [
"From API"
]
}
}
],
"outputContexts": [
{
"name": "projects/xxxxxx/agent/sessions/1234/contexts/__system_counters__",
"lifespanCount": 1,
"parameters": {
"no-input": 0,
"no-match": 0
}
}
],
"intent": {
"name": "projects/xxxxxx/agent/intents/05356807-86f4-4a13-8079-6745b110a4d5",
"displayName": "test intent"
},
"intentDetectionConfidence": 1,
"languageCode": "en"
}
}
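For completeness, the event request above can be produced with a small helper before calling detectIntent; a sketch (the helper name is mine):

```javascript
// Build a detectIntent request body that triggers an intent by event name
// instead of by text (Dialogflow ES v2 shape, as in the examples above).
function buildEventRequest(eventName, languageCode = 'en') {
  return {
    queryInput: {
      event: { name: eventName, languageCode },
    },
  };
}
```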
I am using a Firebase function for webhook fulfillment in Dialogflow. I get "webhook successful" as the fulfillment status, but it is not working. I am using version 1. When I test it on the Google Assistant simulator, it says "App is not responding".
Firebase function:
const functions = require('firebase-functions');

exports.webhook = functions.https.onRequest((request, response) => {
  response.send({
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Hey! Good to see you."
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "Exercises",
                  "description": "ex",
                  "largeImage": {
                    "url": "http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg",
                    "accessibilityText": "..."
                  },
                  "contentUrl": "https://theislam360.me:8080/hbd.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "chips"
          }
        ]
      }
    }
  });
});
When I copy-paste the response, from {"google"... to the end, into a custom payload manually via the GUI, it works. With the webhook, it does not.
RAW API RESPONSE
{
"id": "eaf627ed-26b5-4965-b0b0-bc77144e144b",
"timestamp": "2019-04-15T11:54:18.948Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "play hbd",
"action": "",
"actionIncomplete": false,
"parameters": {
"any": "hbd"
},
"contexts": [],
"metadata": {
"isFallbackIntent": "false",
"webhookResponseTime": 34,
"intentName": "play",
"intentId": "e60071cd-ce31-4ef9-ae9b-cc370c3362b3",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false"
},
"fulfillment": {
"messages": []
},
"score": 1
},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "e91bd62f-766b-b19d-d37b-2917ac20caa6"
}
FULFILLMENT REQUEST
{
"id": "eaf627ed-26b5-4965-b0b0-bc77144e144b",
"timestamp": "2019-04-15T11:54:18.948Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "play hbd",
"speech": "",
"action": "",
"actionIncomplete": false,
"parameters": {
"any": "hbd"
},
"contexts": [],
"metadata": {
"intentId": "e60071cd-ce31-4ef9-ae9b-cc370c3362b3",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false",
"isFallbackIntent": "false",
"intentName": "play"
},
"fulfillment": {
"speech": "",
"messages": []
},
"score": 1
},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "e91bd62f-766b-b19d-d37b-2917ac20caa6"
}
FULFILLMENT RESPONSE
{
"google": {
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Hey! Good to see you."
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "Exercises",
"description": "ex",
"largeImage": {
"url": "http://res.freestockphotos.biz/pictures/17/17903-balloons-pv.jpg",
"accessibilityText": "..."
},
"contentUrl": "https://theislam360.me:8080/hbd.mp3"
}
]
}
}
],
"suggestions": [
{
"title": "chips"
}
]
}
}
}
FULFILLMENT STATUS
Webhook execution successful
Firebase Logs
Google Assistant Simulator Logs
You're not using the correct JSON in the response. When you put it in the "custom payload" section of the GUI, Dialogflow creates the larger JSON response for you. The google object needs to be under the data object for Dialogflow v1, or under payload for Dialogflow v2. (And if you haven't switched to v2, you should do so immediately, since v1 will be switched off in about a month.)
So what you're returning should look more like:
{
"payload": {
"google": {
...
}
}
}
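As a sketch of that fix for v2 (the helper name is mine), wrap whatever Assistant payload you build before sending it:

```javascript
// Wrap the Assistant-specific "google" object under "payload", as the
// Dialogflow v2 webhook response format requires.
function wrapForDialogflowV2(googlePayload) {
  return { payload: { google: googlePayload } };
}
```

In the Firebase function above, that means `response.send(wrapForDialogflowV2({ richResponse: ... }))` rather than sending the google object at the top level.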
I am providing users with a response on an audio-only device (e.g. Google Home). When I respond with a textToSpeech field within a simpleResponse, the speech is not read out in the simulator.
Has anyone experienced this and knows how to fix it?
I've tried different response types, but none of them read out the textToSpeech field.
I've also tried ticking/unticking the end-conversation toggle in Dialogflow and setting expectUserInput to true/false when responding with JSON, to no avail.
The response is currently fulfilled by a webhook that responds with a v2 fulfillment JSON blob; the simulator receives the response with no errors but does not read it out.
RESPONSE -
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Here are the 3 closest restaurants that match your criteria,"
}
}
]
}
}
}
}
REQUEST -
{
"responseId": "404f3b65-73a5-47db-9c17-0fc8b31560a5",
"queryResult": {
"queryText": "actions_intent_NEW_SURFACE",
"parameters": {},
"allRequiredParamsPresent": true,
"outputContexts": [
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/findrestaurantswithcuisineandlocation-followup",
"lifespanCount": 98,
"parameters": {
"location.original": "Shoreditch",
"cuisine.original": "international",
"cuisine": "International",
"location": {
"subadmin-area": "Shoreditch",
"subadmin-area.original": "Shoreditch",
"subadmin-area.object": {}
}
}
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_capability_account_linking"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_capability_audio_output"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/google_assistant_input_type_voice"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_capability_media_response_audio"
},
{
"name": "projects/my-project/agent/sessions/sessionId/contexts/actions_intent_new_surface",
"parameters": {
"text": "no",
"NEW_SURFACE": {
"#type": "type.googleapis.com/google.actions.v2.NewSurfaceValue",
"status": "CANCELLED"
}
}
}
],
"intent": {
"name": "projects/my-project/agent/intents/0baefc9d-689c-4c33-b2b8-4e130f626de1",
"displayName": "Send restaurants to mobile"
},
"intentDetectionConfidence": 1,
"languageCode": "en-us"
},
"originalDetectIntentRequest": {
"source": "google",
"version": "2",
"payload": {
"isInSandbox": true,
"surface": {
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.ACCOUNT_LINKING"
}
]
},
"requestType": "SIMULATOR",
"inputs": [
{
"rawInputs": [
{
"query": "no",
"inputType": "VOICE"
}
],
"arguments": [
{
"extension": {
"#type": "type.googleapis.com/google.actions.v2.NewSurfaceValue",
"status": "CANCELLED"
},
"name": "NEW_SURFACE"
},
{
"rawText": "no",
"textValue": "no",
"name": "text"
}
],
"intent": "actions.intent.NEW_SURFACE"
}
],
"user": {
"userStorage": "{\"data\":{}}",
"lastSeen": "2019-04-12T14:31:23Z",
"locale": "en-US",
"userId": "userID"
},
"conversation": {
"conversationId": "sessionId",
"type": "ACTIVE",
"conversationToken": "[\"defaultwelcomeintent-followup\",\"findrestaurantswithcuisineandlocation-followup\",\"findrestaurantswithcuisineandlocation-followup-2\"]"
},
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
}
]
}
]
}
},
"session": "projects/my-project/agent/sessions/sessionId"
}
I expect the simulator to read out the result of textToSpeech, but currently it does not.
FULFILLMENT REQUEST
{
"responseId": "4955f972-058c-44c2-a9c6-fe2c1d846fcd",
"queryResult": {
"queryText": "dsnaf",
"action": "intentNotMatched",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "I think I may have misunderstood your last statement.",
"fulfillmentMessages": [
{
"text": {
"text": [
"I'm afraid I don't understand."
]
}
}
],
"outputContexts": [
{
"name": "****",
"lifespanCount": 1
}
],
"intent": {
"name": "****",
"displayName": "Default Fallback Intent",
"isFallback": true
},
"intentDetectionConfidence": 1,
"languageCode": "en"
},
"originalDetectIntentRequest": {
"payload": {}
},
"session": "****"
}
FULFILLMENT RESPONSE
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "I'm sorry. I didn't quite grasp what you just said."
}
}
]
},
"userStorage": "{\"data\":{}}"
}
},
"outputContexts": [
{
"name": "***",
"lifespanCount": 99,
"parameters": {
"data": "{}"
}
}
]
}
RAW API RESPONSE
{
"responseId": "4955f972-058c-44c2-a9c6-fe2c1d846fcd",
"queryResult": {
"queryText": "dsnaf",
"action": "intentNotMatched",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentMessages": [
{
"text": {
"text": [
"I'm afraid I don't understand."
]
}
}
],
"webhookPayload": {
"google": {
"userStorage": "{\"data\":{}}",
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "I'm sorry. I didn't quite grasp what you just said."
}
}
]
},
"expectUserResponse": true
}
},
"outputContexts": [
{
"name": "*****",
"lifespanCount": 99,
"parameters": {
"data": "{}"
}
},
{
"name": "******",
"lifespanCount": 1
}
],
"intent": {
"name": "****",
"displayName": "Default Fallback Intent",
"isFallback": true
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {
"webhook_latency_ms": 286
},
"languageCode": "en"
},
"webhookStatus": {
"message": "Webhook execution successful"
}
}
Google Assistant response
USER SAYS dsnaf
DEFAULT RESPONSE I'm afraid I don't understand.
CONTEXTS
_actions_on_google,initial_chat
INTENT Default Fallback Intent
In the Google Assistant response I get the default fulfillmentText instead of the payload's google richResponse.
Where are you testing this? If you're in the test console, it will always show you the simple text response. You need to specifically select Google Assistant in the test console to see the rich response for that platform:
How do I identify which platform the message came from?
I want to support different platforms like Telegram and Facebook Messenger. When my webhook receives a message, I want to reply according to the platform the message came from.
For example, if the message came from Telegram I want to return a text message, but if it came from Messenger I want to return a card.
There is a source property in the originalRequest object; see the fulfillment docs here.
{
"lang": "en",
"status": {
"errorType": "success",
"code": 200
},
"timestamp": "2017-02-09T16:06:01.908Z",
"sessionId": "1486656220806",
"result": {
"parameters": {
"city": "Rome",
"name": "Ana"
},
"contexts": [],
"resolvedQuery": "my name is Ana and I live in Rome",
"source": "agent",
"score": 1.0,
"speech": "",
"fulfillment": {
"messages": [
{
"speech": "Hi Ana! Nice to meet you!",
"type": 0
}
],
"speech": "Hi Ana! Nice to meet you!"
},
"actionIncomplete": false,
"action": "greetings",
"metadata": {
"intentId": "9f41ef7c-82fa-42a7-9a30-49a93e2c14d0",
"webhookForSlotFillingUsed": "false",
"intentName": "greetings",
"webhookUsed": "true"
}
},
"id": "ab30d214-f4bb-4cdd-ae36-31caac7a6693",
"originalRequest": {
"source": "google",
"data": {
"inputs": [
{
"raw_inputs": [
{
"query": "my name is Ana and I live in Rome",
"input_type": 2
}
],
"intent": "assistant.intent.action.TEXT",
"arguments": [
{
"text_value": "my name is Ana and I live in Rome",
"raw_text": "my name is Ana and I live in Rome",
"name": "text"
}
]
}
],
"user": {
"user_id": "PuQndWs1OMjUYwVJMYqwJv0/KT8satJHAUQGiGPDQ7A="
},
"conversation": {
"conversation_id": "1486656220806",
"type": 2,
"conversation_token": "[]"
}
}
} }
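Building on that source field, the webhook can branch per platform; a minimal sketch (the platform names match Dialogflow's integration sources, but the reply shapes here are illustrative, not the exact per-platform message format):

```javascript
// Pick a reply format based on where the message came from, reading the
// "source" field (originalRequest.source in v1,
// originalDetectIntentRequest.source in v2).
function replyForPlatform(source) {
  switch (source) {
    case 'telegram':
      // Plain text for Telegram.
      return { text: 'Plain text reply for Telegram' };
    case 'facebook':
      // A card-style reply for Messenger.
      return { card: { title: 'Card reply for Messenger' } };
    default:
      return { text: 'Default reply' };
  }
}
```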