AWS Lex Lambda return multiple lines with Python - python-3.x

I've been reading the AWS Lex / Lambda docs and looking at the examples.
I don't see a way to return multiple lines.
I want to create an intent so that when a user types 'Help' it gives me output like the one below:
Options:
Deploy new instance.
Undeploy instance.
List instances.
I've tried this:
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def lambda_handler(event, context):
    logger.debug('event.bot.name={}'.format(event['bot']['name']))
    a = {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Options: \nDeploy instance.\nUndeploy instance."
            }
        }
    }
    return a

How a message is displayed to the user completely depends on the output Channel you are using.
Of the channels I know, \n works well in Facebook and Slack.
The Lex Console Test Chat has its own unique formatting for displaying the Lex output, so it's not very reliable for testing the formatting of a message. It's really only good for quick tests to make sure your bot responds without errors, and for a glimpse at the Lex JSON response.
Each output Channel will receive the Lex JSON response and display it in its own way, so the only reliable way to test message formatting, links, images, and response cards is to test it in the actual Channel.
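For what it's worth, here is a minimal sketch in Python of one way to keep such a menu maintainable, assuming the same Lex V1 response shape as in the question; build_help_response is a hypothetical helper, not part of any Lex API:

def build_help_response(options):
    # Join the options with \n; how the newlines render depends on the channel.
    content = "Options:\n" + "\n".join(options)
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": content
            }
        }
    }

def lambda_handler(event, context):
    return build_help_response(
        ["Deploy new instance.", "Undeploy instance.", "List instances."]
    )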

Related

AWS Lex V1 send a message bubble the first time an intent is triggered

I currently have a Lex V1 bot with two intents. I want the bot to send a message before the first slot prompt is sent. I thought this could be done either by sending a message the first time the intent is triggered, or sending multiple prompts (if that is even possible).
For instance the user would say something like:
I want to change major
and the bot would respond with 2 messages:
Warning: Changing this could impact you, advising is recommended
Would you like to schedule advising?
I essentially just want to split this into two distinct message bubbles.
I saw this post for sending multiple response messages, but how do I send multiple prompt ones in a Lex V1 bot?
If you use a Lambda code hook for initialization and validation, you can send multiple messages as a list, which will display as separate messages. You want to set the type to ElicitSlot. The response syntax from the Lambda in Lex V2 would look like this:
def lambda_handler(event, context):
    return {
        'sessionState': {
            'dialogAction': {
                'type': 'ElicitSlot',
                'slotToElicit': 'NAME_OF_YOUR_SLOT'
            },
            # in Lex V2, the intent belongs inside sessionState
            'intent': event['sessionState']['intent']
        },
        'messages': [
            {'contentType': 'PlainText',
             'content': 'PUT FIRST MESSAGE HERE'},
            {'contentType': 'PlainText',
             'content': 'PUT SECOND MESSAGE HERE'}
        ]
    }
It will be slightly different in Lex V1; a sketch of the V1 shape follows.
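For reference, a hedged sketch of what the equivalent might look like in Lex V1, based on the documented V1 Lambda response format: dialogAction takes a single message object rather than a messages list, and ElicitSlot also requires intentName and slots (NAME_OF_YOUR_SLOT and the prompt text are placeholders):

def lambda_handler(event, context):
    # Lex V1 event shape: the current intent and its slots live in currentIntent.
    return {
        'dialogAction': {
            'type': 'ElicitSlot',
            'intentName': event['currentIntent']['name'],
            'slots': event['currentIntent']['slots'],
            'slotToElicit': 'NAME_OF_YOUR_SLOT',
            # V1 takes one message object, so multiple bubbles are not
            # available here the way they are with the V2 messages list.
            'message': {
                'contentType': 'PlainText',
                'content': 'PUT YOUR PROMPT HERE'
            }
        }
    }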

Mocking LUIS response with LuisRecognizer not working

I am trying to mock calls to LUIS via nock; the bot uses the LuisRecognizer from botbuilder-ai. Here is the relevant information.
The bot itself is calling LUIS and getting the result via const recognizerResult = await this.dispatchRecognizer.recognize(context);. I grabbed the actual result as below:
{"text":"I want to look up my order","intents":{"viewOrder":{"score":0.996454835},"srStatus":{"score":0.0172454268},"expediteOrder":{"score":0.0108480565},"escalate":{"score":0.007967358},"qna":{"score":0.00694736559},"Utilities_Cancel":{"score":0.005627355},"manageProfile":{"score":0.004953466},"getPricing":{"score":0.001781322},"Utilities_Help":{"score":0.0007197641},"getAvailability":{"score":0.0005667514},"None":{"score":0.000321137835}},"entities":{"$instance":{}},"sentiment":{"label":"negative","score":0.171873689},"luisResult":{"query":"I want to look up my order","topScoringIntent":{"intent":"viewOrder","score":0.996454835},"intents":[{"intent":"viewOrder","score":0.996454835},{"intent":"srStatus","score":0.0172454268},{"intent":"expediteOrder","score":0.0108480565},{"intent":"escalate","score":0.007967358},{"intent":"qna","score":0.00694736559},{"intent":"Utilities.Cancel","score":0.005627355},{"intent":"manageProfile","score":0.004953466},{"intent":"getPricing","score":0.001781322},{"intent":"Utilities.Help","score":0.0007197641},{"intent":"getAvailability","score":0.0005667514},{"intent":"None","score":0.000321137835}],"entities":[],"sentimentAnalysis":{"label":"negative","score":0.171873689}}}
For the sake of brevity, I'll just call this "recognizerResult" below. I'm successfully intercepting the API call in my test file with nock, configured as follows:
nock('https://westus.api.cognitive.microsoft.com')
    .post(/.*/)
    .reply(200, { recognizerResult });
I've tried returning both as a JSON object and a string, though I'm almost certain this needs to be JSON object as shown (I'm mocking a call to QnA maker with the same approach that is working). When I run this test via mocha, I get the following error:
TypeError: Cannot read property 'replace' of undefined
at LuisRecognizerV2.normalizeName (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:96:21)
at luisResult.intents.reduce (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:104:31)
at Array.reduce (<anonymous>)
at LuisRecognizerV2.getIntents (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:102:32)
at LuisRecognizerV2.<anonymous> (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:81:27)
at Generator.next (<anonymous>)
at fulfilled (node_modules\botbuilder-ai\lib\luisRecognizerOptionsV2.js:11:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
I've looked at the code in question within the luisRecognizerOptionsV2.ts file, but can't see where there's an issue. The replace is part of normalizing the intent name, which is there to replace unsupported characters with an "_". The bot runs normally when deployed to Azure (and locally), and the tests work without mocking the call. However, I really want to be able to test this without making actual LUIS calls. Any ideas why I am getting this error and how to fix it?
For reference, here is the mock to QnA Maker that is working, though note that I'm using a simple REST call for that instead of the recognizer.
nock('https://myqnaservicename.azurewebsites.net')
    .post(/.*/)
    .reply(200, {"answers": [{"questions": ["I need an unrecognized utterance for testing"], "answer": "I can hear you now!", "score": 28.48, "id": 1234}]});
The issue is that your {recognizerResult} is what gets saved to const recognizerResult, but is not what gets returned by that API call.
It takes a lot of digging to find it all, but a V2 LUIS client gets the API response, then converts it into recognizerResult.
You've got a few options for "fixing" this:
Set a breakpoint in that node_modules\botbuilder-ai\src\luisRecognizerOptionsV2 file on that const result = line and grab luisResult.
Use something like Fiddler to record the actual API response and use that
Write it manually
For reference, you can see how we do this in our tests:
nock()
Recorded response
You can see that our nock() returns response.v2. Your mock does not contain .topScoringIntent, which is what the recognizer is looking for, and that is why the error is thrown.
Specifically, the mock response needs to be just the v2/luisResults attributes. In other words, when using the luisRecognizer, the response set in nock needs to be:
.reply(200, { "query": "Sample query", "topScoringIntent": { "intent": "desiredIntent", "score": 1 }, "entities": [] });
If you look at the test data linked above, there are other attributes in the actual response. But this is the minimum required response if you are just trying to get topIntent to test routing. If you needed other attributes you could add them, e.g. you could add everything within v2 as in this file or some of the more involved files with things like multiple intents.

How to extract postback data from payload to parameters in Dialogflow V2

I'm stuck trying to figure this out and I hope someone out there can help me. I am using the Dialogflow console to create a bot that asks a user to report "something" by providing his/her location and describing the incident. The bot is integrated with Facebook Messenger. One of my intents has a follow-up intent, which also has a follow-up intent, like:
intent 1
|
intent 2
|
intent 3
Intent 1 requests the user's location, intent 2 retrieves the user's location and asks the user to describe the incident. Intent 3 SHOULD have all the data in context, as it's fulfilled by a webhook. All the data SHOULD be posted to my server. The problem is that I have failed to get the location data (maybe lat and long). I notice that the data comes back in the following format after the FACEBOOK_LOCATION event is fired:
{
    "originalDetectIntentRequest": {
        "source": "facebook",
        "payload": {
            "postback": {
                "data": {
                    "lat": 14.556761479425,
                    "long": 121.05444780425
                },
                "payload": "FACEBOOK_LOCATION"
            },
            "sender": {
                "id": "1588949991188331"
            }
        }
    }
}
My question is: how do I carry that payload data into my Dialogflow intent parameters so that they are carried in context until my webhook is fired? I hope I've explained it well. Thanks for the help, guys.
You can use the output contexts to save the parameters.
{
    "fulfillmentText": "This is a text response",
    "fulfillmentMessages": [],
    "source": "example.com",
    "payload": {
        "google": {},
        "facebook": {},
        "slack": {}
    },
    "outputContexts": [
        {
            "name": "context name",
            "lifespanCount": 5,
            "parameters": {
                "param": "param value"
            }
        }
    ],
    "followupEventInput": {}
}
Once you save the parameters, you can access them in subsequent requests by reading the saved context. The lifespanCount decides for how many subsequent calls the context stays valid. So in the example above, parameters saved in intent 1 will still be available through intent 5 (if you had that many follow-up intents).
You can find more details here.
I personally like to use the client libraries to develop webhooks, as they are easy to use, feature-rich, and reduce JSON-manipulation errors. If you'd like to use the NodeJS-based client, you can follow this link.
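As an illustration, here is a minimal Python sketch of a V2 webhook handler that copies the Facebook postback coordinates into an output context; the context name location-data and the fallback session path are made up for the example:

def handle_webhook(request_json):
    # Dig the lat/long out of the Facebook postback payload shown in the question.
    payload = request_json.get('originalDetectIntentRequest', {}).get('payload', {})
    data = payload.get('postback', {}).get('data', {})
    # V2 context names are full paths under the session.
    session = request_json.get('session', 'projects/my-project/agent/sessions/123')
    return {
        'fulfillmentText': 'Got your location. Please describe the incident.',
        'outputContexts': [{
            'name': session + '/contexts/location-data',
            'lifespanCount': 5,
            'parameters': {
                'lat': data.get('lat'),
                'long': data.get('long')
            }
        }]
    }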
To expand on Abhinav's answer (and point out what caught me up on this issue): you need to make sure that the entities you extracted have the lifespan to make it to your webhook fulfillment call.
You can adjust the count by editing the number and saving.
The lifespanCount will decide how many subsequent calls this context is valid. - Abhinav
If your parameters are not showing up in your output context, they probably don't have the appropriate lifespan.
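On a later turn, the saved values arrive back on the request's active contexts, so a sketch of reading them out (matching the hypothetical location-data context above) might look like:

def get_saved_location(request_json):
    # Active contexts arrive in queryResult.outputContexts on each request.
    for ctx in request_json.get('queryResult', {}).get('outputContexts', []):
        if ctx.get('name', '').endswith('/contexts/location-data'):
            params = ctx.get('parameters', {})
            return params.get('lat'), params.get('long')
    return None, None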

Is it possible to trigger an intent based on the response(from webhook) of another intent?

I have an intent named "intent.address" with the action name "address_action" and the training phrase "My address". When this intent is triggered, a response comes from my webhook saying "Alright! your address is USER_ADDRESS".
I used app.ask() here.
What I want is that when this response comes from the webhook, another intent named "intent.conversation" (event name "actions_intent_CONFIRMATION", also webhook-enabled) gets triggered, which will ask the user to confirm whether to continue or not.
For example :
Alright your address is USER_ADDRESS
then next
Do you want ask address/directions again?
Intents do not reflect what your webhook says; they reflect what the user says. Intents are what the user intends to do - what they want.
So no, you can't just trigger another Intent this way. There are a few ways to do what you're asking, however.
Using the Confirmation helper with the actions-on-google v1 node.js library
If you really want to use the Confirmation helper, you need to send back JSON, since the node.js v1 library doesn't support sending this helper directly. Your response will need to look something like:
{
    "data": {
        "google": {
            "expectUserResponse": true,
            "systemIntent": {
                "intent": "actions.intent.CONFIRMATION",
                "data": {
                    "@type": "type.googleapis.com/google.actions.v2.ConfirmationValueSpec",
                    "dialogSpec": {
                        "requestConfirmationText": "Please confirm your order."
                    }
                }
            }
        }
    }
}
If you're not already doing JSON in your responses, then you probably don't want to go this route.
Using the Confirmation helper with the actions-on-google v2 node.js library
If you've already switched to v2 of this library, then you can send a Confirmation with something like this:
const { Confirmation } = require('actions-on-google');

app.intent('ask_for_confirmation_detail', (conv) => {
    conv.ask("Here is some information.");
    conv.ask(new Confirmation('Can you confirm?'));
});
Or you can just use the Dialogflow way of doing this
In this scenario - you don't use the Confirmation helper at all, since it is pretty messy.
Instead, you include your question as part of the response, and you add two Followup Intents - one to handle what to do if they say "yes", and one if they say "no".
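As a rough sketch of that last approach, assuming a plain Dialogflow V2 webhook response (the names and wording here are illustrative, not the answer author's code), the fulfillment can return the statement and the question as two text messages and leave the yes/no handling to the follow-up intents:

def handle_address_intent(user_address):
    return {
        'fulfillmentMessages': [
            {'text': {'text': ['Alright! Your address is ' + user_address]}},
            {'text': {'text': ['Do you want to ask for the address/directions again?']}}
        ]
    }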

How to solve MalformedResponse 'final_response' must be set. error in action simulator

Hi, when I try to test my Test app, it stops and displays "My test app isn't responding right now. Try again soon." When I check the validation error tab, I notice this error: MalformedResponse
'final_response' must be set.
Here is the debug info:
{
"audioResponse": "//NExAAQMQ...",
"conversationToken": "GidzaW11bG...",
"debugInfo": {
"agentToAssistantDebug": {
"agentToAssistantJson": "{}"
},
"assistantToAgentDebug": {
"assistantToAgentJson": "{\"user\":{\"userId\":\"ABwppHG7Kyq6lQuC4UQhVkNFxGJ3HlCPVLe03G5Jo9UUsXcg41z8LL0ppX3pIv36nDLcvJD8YNxQexCrqoywTg\",\"locale\":\"en-US\",\"lastSeen\":\"2018-02-09T08:05:38Z\",\"userStorage\":\"{\\\"data\\\":{}}\"},\"conversation\":{\"conversationId\":\"1518164534381\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to my test app\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}",
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=e4092e2db85b4744be7d736861988a51' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6ImJhNGRlZDdmNWE5MjQyOWYyMzM1NjFhMzZmZjYxM2VkMzg3NjJjM2QifQ.eyJhdWQiOiJyZXN0YXVyYW50LTRhYzMzIiwiYXpwIjoiMzk3NjQzMDYwNTkyLWlydW9ubHFzZ2cyZm81cnM1OXIwcGpkYTBxMjVsZjZsLmFwcHMuZ29vZ2xldXNlcmNvbnRlbnQuY29tIiwiZXhwIjoxNTE4MTY0NjU0LCJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJqdGkiOiI0NzVhMDU5OTllMzc4ODA0MmE5YTlhYjFkZmQ0YWU0MzA2Y2MzNTA3IiwiaWF0IjoxNTE4MTY0NTM0LCJuYmYiOjE1MTgxNjQyMzR9.GZ3NrlfYPAx5egtOYDktY9W-6P776_eLsth7tvyK-q7vytHdbMOcL4Pkq27g4pcWL8VRJkPv_3VL-QA2uAPaVm1m0F2H3qfYHqQtZmBgxgICSiwKCpyUnV1KkQWlD5O6MRW1VVZFXMqk2n2_w1U_8MCXH3z1nIB_G9MHLUD3mTomvM1W_SoyIx6xhvDJKVHN42pu28Ahj_BJEilazK6q91OhtY3hbcGjB5xAYnVP6Soh_N4qSvlrPV3J5-L8pKu0sArlspukGLKb_ijNKZiEgxsire2WCs85-5GbB-mKPXGnOuPY7mE168b2Xw37us-5V0sZ1y7Qtod7nH85A1kHaA' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"ABwppHG7Kyq6lQuC4UQhVkNFxGJ3HlCPVLe03G5Jo9UUsXcg41z8LL0ppX3pIv36nDLcvJD8YNxQexCrqoywTg\",\"locale\":\"en-US\",\"lastSeen\":\"2018-02-09T08:05:38Z\",\"userStorage\":\"{\\\"data\\\":{}}\"},\"conversation\":{\"conversationId\":\"1518164534381\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to my test app\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}'"
},
"sharedDebugInfo": [
{
"name": "ResponseValidation",
"subDebugEntry": [
{
"debugInfo": "'final_response' must be set.",
"name": "MalformedResponse"
}
]
}
]
},
"response": "My test app isn't responding right now. Try again soon.",
"visualResponse": {
"visualElements": []
}
}
There could be a silly mistake like the one I made.
In your index.js file, check for a line similar to this:
app.intent('favourite color', (conv, {color}) => {
    const luckyNumber = color.length;
Here, be sure that the intent name is exactly the same as the intent you created in the Dialogflow console.
NOTE: The intent name is CASE-SENSITIVE, so make sure you type exactly the same name in this line of the .js file.
I had the same issue and resolved it the following way.
Go to the logs of the Firebase function; in my case the URL was
https://console.firebase.google.com/project/project-1/functions/logs?search=&severity=DEBUG
Refresh the Actions simulator window and check the error log while executing.
In my case, the error was that the intent handler was missing, so I added the intent handler in the Firebase function as below:
app.intent('Default Welcome Intent', conv => {
    conv.ask('Hi, I am in welcome intent.')
})
After refreshing the simulator window, I got the proper response.
Similarly you can also find the cause of the error and resolve it.
Is this intent webhook-enabled?
If yes, did you catch this intent in your code?
If no, did you add a response to your intent?
Those are usually the first two things to check.
What I find is that you need something in Speech. In the Response I find:
'Failed to parse Dialogflow response into AppResponse because of empty speech response'
I find a Media Object works, but not a List or Carousel, as they don't have a Speech option. I am also trying to solve this problem.
If there is a way for a List or Carousel to return a Speech response, that might solve it.
If anyone else has this issue, try turning on "Enable webhook call for this intent" under Fulfillment in the intent.
I had a similar issue, and changing the version (at the Agent level) resolved it:
consider using the legacy APIs.
Also, you may need to provide a default response in the intent.
Check if you have multiple languages set in Dialogflow. If there is more than one language and you have not added translations, it will come up with that message.
