Alexa Lambda: get underlying resolved slot value in Node.js

I have an intent called "Get Status" which has a SHOULDCHANGESTATUS slot of type YES_NO.
I have the following logic in my Lambda, which worked fine using the text Test facility in the Alexa developer tools:
let changeStatusSlot = handlerInput.requestEnvelope.request.intent.slots.SHOULDCHANGESTATUS.value;
if (changeStatusSlot === 'no') {
    return statusFunctions.closureMessage(handlerInput);
}
When I test this using an actual device, the word "No" comes through as "Naw".
The YES_NO slot type has "naw" as an acceptable synonym for "No", so I should be able to handle this.
I need to change the selector on the SHOULDCHANGESTATUS slot to get the underlying resolved value, which should be NO, but I can't get it to work.
I have tried:
handlerInput.requestEnvelope.request.intent.slots.SHOULDCHANGESTATUS.Resolution.Authorities[0].Values[0].Value.Name
but I get an undefined error.

I was close; the solution was:
handlerInput.requestEnvelope.request.intent.slots['SHOULDCHANGESTATUS'].resolutions.resolutionsPerAuthority[0].values[0].value.name;
Inspecting the object structure of the request envelope helped me find what I needed.
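If you want to guard against that long chain blowing up when entity resolution didn't run or didn't match, a minimal sketch along these lines may help (the getResolvedSlotValue helper name is mine, not part of the ASK SDK):
// Returns the canonical resolved value for a slot, falling back to the raw spoken value.
function getResolvedSlotValue(handlerInput, slotName) {
    const slot = handlerInput.requestEnvelope.request.intent.slots[slotName];
    if (!slot) {
        return undefined;
    }
    const authorities = slot.resolutions && slot.resolutions.resolutionsPerAuthority;
    if (authorities && authorities.length > 0 && authorities[0].status.code === 'ER_SUCCESS_MATCH') {
        return authorities[0].values[0].value.name;
    }
    // No resolution match; use the literal value the user said ("naw", "no", ...).
    return slot.value;
}
Used in the handler, this treats "naw" and "no" the same:
const answer = getResolvedSlotValue(handlerInput, 'SHOULDCHANGESTATUS');
if (answer && answer.toLowerCase() === 'no') {
    return statusFunctions.closureMessage(handlerInput);
}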

How to forward messages to Sentry with a clean scope (no runtime information)

I'm forwarding alert messages from an AWS Lambda function to Sentry using the sentry_sdk in Python.
The problem is that even if I use scope.clear() before capture_message(), the events I receive in Sentry are enriched with information about the runtime environment where the message is captured (the AWS Lambda Python environment), which in this scenario is completely unrelated to the actual alert I'm forwarding.
My Code:
sentry_sdk.init(dsn, environment="name-of-stage")
with sentry_sdk.push_scope() as scope:
    # Unfortunately this does not get rid of Lambda-specific context information.
    scope.clear()
    # Here I set relevant information, which works just fine.
    scope.set_tag("priority", "high")
    result = sentry_sdk.capture_message("mymessage")
The behaviour does not change if I pass scope as an argument to capture_message().
The tag I set manually is being transmitted just fine, but I also receive information about the Python runtime. So either scope.clear() does not behave like I expect it to, or capture_message() gathers additional information itself.
Can someone explain how to capture only the information I'm actively assigning to the scope with set_tag() and similar functions, and suppress everything else?
Thank you very much.
While I didn't find an explanation for the behaviour, I was able to solve my problem (even though it's a little bit hacky).
The solution was to use the sentry before_send hook in the init step like so:
sentry_sdk.init(dsn, environment="test", before_send=cleanup_event)
with sentry_sdk.push_scope() as scope:
    sentry_sdk.capture_message(message, state, scope)
# When using Sentry from Lambda, don't forget to flush, otherwise messages can get lost.
sentry_sdk.flush()
Then, in the cleanup_event function, it gets a little bit ugly. I basically iterate over the keys of the event and remove the ones I do not want to show up. Since some keys hold objects and others (like "tags") hold a list of [key, value] entries, this was quite a hassle.
from contextlib import suppress

KEYS_TO_REMOVE = {
    "platform": [],
    "modules": [],
    "extra": ["sys.argv"],
    "contexts": ["runtime"],
}
TAGS_TO_REMOVE = ["runtime", "runtime.name"]

def cleanup_event(event, hint):
    for (k, v) in KEYS_TO_REMOVE.items():
        with suppress(KeyError):
            if v:
                # Remove only the listed sub-keys of this event key.
                for i in v:
                    del event[k][i]
            else:
                # Remove the whole event key.
                del event[k]
    # Filter instead of removing while iterating, which would skip entries.
    event["tags"] = [t for t in event["tags"] if t[0] not in TAGS_TO_REMOVE]
    return event

Mocking LUIS response with LuisRecognizer not working

I am trying to mock calls to LUIS made through the LuisRecognizer from botbuilder-ai, using nock. Here is the relevant information.
The bot itself is calling LUIS and getting the result via const recognizerResult = await this.dispatchRecognizer.recognize(context);. I grabbed the actual result as below:
{"text":"I want to look up my order","intents":{"viewOrder":{"score":0.996454835},"srStatus":{"score":0.0172454268},"expediteOrder":{"score":0.0108480565},"escalate":{"score":0.007967358},"qna":{"score":0.00694736559},"Utilities_Cancel":{"score":0.005627355},"manageProfile":{"score":0.004953466},"getPricing":{"score":0.001781322},"Utilities_Help":{"score":0.0007197641},"getAvailability":{"score":0.0005667514},"None":{"score":0.000321137835}},"entities":{"$instance":{}},"sentiment":{"label":"negative","score":0.171873689},"luisResult":{"query":"I want to look up my order","topScoringIntent":{"intent":"viewOrder","score":0.996454835},"intents":[{"intent":"viewOrder","score":0.996454835},{"intent":"srStatus","score":0.0172454268},{"intent":"expediteOrder","score":0.0108480565},{"intent":"escalate","score":0.007967358},{"intent":"qna","score":0.00694736559},{"intent":"Utilities.Cancel","score":0.005627355},{"intent":"manageProfile","score":0.004953466},{"intent":"getPricing","score":0.001781322},{"intent":"Utilities.Help","score":0.0007197641},{"intent":"getAvailability","score":0.0005667514},{"intent":"None","score":0.000321137835}],"entities":[],"sentimentAnalysis":{"label":"negative","score":0.171873689}}}
For the sake of brevity, I'll just call this "recognizerResult" in the below. I'm successfully intercepting the API call in my test file with nock, with the configuration as follows:
nock('https://westus.api.cognitive.microsoft.com')
    .post(/.*/)
    .reply(200, { recognizerResult });
I've tried returning it both as a JSON object and as a string, though I'm almost certain it needs to be a JSON object as shown (I'm mocking a call to QnA Maker with the same approach, and that works). When I run this test via mocha, I get the following error:
TypeError: Cannot read property 'replace' of undefined
at LuisRecognizerV2.normalizeName (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:96:21)
at luisResult.intents.reduce (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:104:31)
at Array.reduce (<anonymous>)
at LuisRecognizerV2.getIntents (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:102:32)
at LuisRecognizerV2.<anonymous> (node_modules\botbuilder-ai\src\luisRecognizerOptionsV2.ts:81:27)
at Generator.next (<anonymous>)
at fulfilled (node_modules\botbuilder-ai\lib\luisRecognizerOptionsV2.js:11:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
I've looked at the code in question within the luisRecognizerOptionsV2.ts file, but can't see where the issue is. The replace is part of normalizing the intent name; it replaces unsupported characters with an "_". The bot runs normally when deployed to Azure (and locally), and the tests work without mocking the call. However, I really want to be able to test this without making actual LUIS calls. Any ideas why I am getting this error and how to fix it?
For reference, here is the mock to QnA Maker that is working, though note that I'm using a simple REST call for that instead of the recognizer.
nock('https://myqnaservicename.azurewebsites.net')
    .post(/.*/)
    .reply(200, { "answers": [{ "questions": ["I need an unrecognized utterance for testing"], "answer": "I can hear you now!", "score": 28.48, "id": 1234 }] });
The issue is that your {recognizerResult} is what gets saved to const recognizerResult after the SDK processes the response; it is not what gets returned by the API call itself.
It takes a lot of digging to find it all, but a V2 LUIS client gets the API response, then converts it into recognizerResult.
You've got a few options for "fixing" this:
Set a breakpoint on the const result = line in node_modules\botbuilder-ai\src\luisRecognizerOptionsV2 and grab luisResult.
Use something like Fiddler to record the actual API response and use that.
Write it manually.
For reference, you can see how we do this in our tests:
nock()
Recorded response
You can see that our nock() returns response.v2. Your mock does not contain .topScoringIntent, which is what the recognizer is looking for, and that is why the error is thrown.
Specifically, the mock response needs to be just the v2/luisResults attributes. In other words, when using the luisRecognizer, the response set in nock needs to be
.reply(200, { "query": "Sample query", "topScoringIntent": { "intent": "desiredIntent", "score": 1 }, "entities": [] });
If you look at the test data linked above, there are other attributes in the actual response, but this is the minimum required response if you are just trying to get topIntent to test routing. If you need other attributes you can add them, e.g. everything within v2 as in this file, or some of the more involved files with things like multiple intents.
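Putting that together with the data from the question, a minimal nock mock for the routing test might look like the sketch below (only the raw v2 payload shape is returned, not the processed recognizerResult):
const nock = require('nock');

// Intercept the LUIS call and reply with just the v2/luisResult attributes.
nock('https://westus.api.cognitive.microsoft.com')
    .post(/.*/)
    .reply(200, {
        query: 'I want to look up my order',
        topScoringIntent: { intent: 'viewOrder', score: 0.996454835 },
        intents: [{ intent: 'viewOrder', score: 0.996454835 }],
        entities: []
    });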

DialogFlow follow up triggers empty response

I have a DialogFlow intent follow-up that I'm having a hard time with. It's the only follow-up to my main intent. The issue is that when the incidents.data array is empty, the conv.ask statement in the else case is never triggered, and DialogFlow throws an empty speech response error. The code looks something like this:
app.intent('metro_timetable - yes', async (conv: any) => {
    const incidents = await serviceIncidents.getIncidents();
    if (incidents.data.length > 0) {
        conv.ask('I have incidents');
    } else {
        conv.ask(
            `I wasn't able to understand your request, could you please say that again?`
        );
    }
});
incidents.data is stored in the global scope and is set deep within the metro_timetable intent; it stores an incident for the follow-up. Because all yes responses trigger the follow-up, I set up an else case to catch the situation where someone says yes after metro_timetable didn't understand their original request, and to ask them to repeat it. If incidents.data actually has information to share, the dialog triggers correctly and "I have incidents" is read to the user.
Where am I going wrong here?
Your description of how incidents.data actually gets set is a little convoluted, but it sounds possible that instead of being set to an empty array, it isn't set at all. In that case, I suspect the following happened:
incidents.data would be undefined
Trying to evaluate incidents.data.length would cause an error
Since the program crashes, your webhook doesn't return a result. Since you probably didn't set a response in the UI for the intent, an empty response was returned.
You can probably solve this by doing a test such as (for example)
incidents && incidents.data && incidents.data.length > 0
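Applied to the handler from the question, that guard might look like this sketch (serviceIncidents is the question's own helper):
app.intent('metro_timetable - yes', async (conv: any) => {
    const incidents = await serviceIncidents.getIncidents();
    // Guard each step so an unset incidents or incidents.data can't crash the webhook.
    if (incidents && incidents.data && incidents.data.length > 0) {
        conv.ask('I have incidents');
    } else {
        conv.ask(
            `I wasn't able to understand your request, could you please say that again?`
        );
    }
});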
Your other issue, however, seems to be that you have a Followup Intent set for a scenario where you don't actually want it as the follow-up. This is one of the reasons you probably shouldn't use Followup Intents. Instead, only set a context when you send a response where that context would make sense, and look for the "Yes" response in the context you define. Then, when metro_timetable doesn't understand the request, you don't set the context and you give an error.
To do this, you would remove the automatically generated metro_timetable-followup context from the two Intents. You'll create your own context, which I'll name timetable for purposes of this example.
In the fulfillment for the metro_timetable Intent, if you respond with something that needs confirmation (i.e., when "yes" will be something the user says), you would set the timetable context with something like:
conv.contexts.set('timetable', 2);
conv.ask('Are you sure?');
You can then create an Intent that checks for timetable as the Incoming Context and has training phrases that are equivalent to "yes". In that Intent, you'd do what you need to and respond.
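For illustration, the handler for that confirming Intent could be as simple as the sketch below (the intent name 'metro_timetable - confirm' is hypothetical; use whatever you named the Intent that requires the timetable context):
app.intent('metro_timetable - confirm', (conv: any) => {
    // Only reachable while the 'timetable' context is active and the user said yes.
    conv.ask('I have incidents');
});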

Actions SDK conv.hasScreen not working as expected

I am building an action using DialogFlow and Firebase cloud functions. I have a simple check to either ask a question or close the conversation, depending on the user's device type.
if (conv.hasScreen) {
    response += `Do you want to see a picture?`;
    conv.ask(response);
    return;
} else {
    conv.close(response);
    return;
}
I tested using a Google Home Mini and, as expected, the conversation gracefully closed. But when I tested on a phone, the if check failed and the conversation was closed again. I was expecting the conversation to continue, with the Assistant asking whether to show a picture, but it did not happen. What am I doing wrong?
It looks like the syntax is simply conv.screen. As the property hasScreen does not exist, the conditional always evaluates undefined, which is a falsey value.
Take a look at the following to understand Surface Capabilities.
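For example, the check from the question could be rewritten like this (a sketch using the conv.screen property described above):
if (conv.screen) {
    // A screen is available, so offer the picture and keep the conversation open.
    response += `Do you want to see a picture?`;
    conv.ask(response);
} else {
    conv.close(response);
}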
Are you using the following statement or not?
const hasScreen =
    conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');

What could cause L20N to not process the Entity variables and inclusions?

L20N is set up in my ReactJS project and I am calling getSync on the context after its ready event has fired (so things should be good to go). However, rather than the string I expect, with other Entity values included and variables expanded, I get the raw Entity string.
The string I get looks like this:
{{$user.name}} - {{appName}}
But of course, I'm expecting something like this:
Ben Taylor - My Cool App
I have tried to recreate the problem in this plunker. Unfortunately, it works fine! When you run it, the alert box shows the expected L20N expanded string.
What could cause the Entity value to be returned raw? I have a valid context and there are no errors in inspector, so it appears all is configured fine. I'm wondering if there is some interaction with something else I'm doing that is breaking L20N. Any ideas appreciated!
I am unable to include the app I'm working on, but needless to say it has more moving parts. It is a React app based on this template.
If there is some sort of error in your .l20n file (the extension formerly known as .lol), the getSync call will return the raw string value. In my case the error was quoting the keys in an L20n dictionary.
If you have context data like { user: { type: "Awesome" } } then the following does not work and calling getSync for useTheShout will return the unprocessed string value (including the text {{shout}}):
<shout[$user.type] {
  "Awesome": "HEY AWESOME USER!",
  "Loser": "i can't be bothered to shout at you loser..."
}>
<useTheShout "I'm gonna shout the following: {{shout}}">
Removing the quote marks from the dictionary key names will make this work, as follows:
<shout[$user.type] {
  Awesome: "HEY AWESOME USER!",
  Loser: "i can't be bothered to shout at you loser..."
}>
<useTheShout "I'm gonna shout the following: {{shout}}">
Update: you can avoid the pain by logging problems via the error and warning event emitters.
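That logging might look like the following sketch, assuming the same Context addEventListener API used for the ready event above:
// Log L20n processing problems instead of silently getting raw strings back.
ctx.addEventListener('error', function (e) {
    console.error('L20n error:', e);
});
ctx.addEventListener('warning', function (w) {
    console.warn('L20n warning:', w);
});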
