I want to access the confidence level from LUIS in the middleware so I can route low-confidence responses to a human instead of the bot.
The value I am looking for is this one (it gets logged by the emulator):
Library("*")recognize() recognized: Hallo(0.8215488)
Is this even possible in the middleware, or does that happen afterwards?
I tried finding it in the "session" but haven't found it yet.
When using an IntentDialog from the botbuilder library, you can specify the intentThreshold property, which sets the minimum score needed to trigger recognition of an intent. Check the following link for reference: https://docs.botframework.com/en-us/node/builder/chat-reference/interfaces/_botbuilder_d_.iintentdialogoptions.html#intentthreshold
If the user's input is not recognised by your LUIS models, or the score is below that intentThreshold value, the onDefault handler of the IntentDialog handles it. So it is here that you can add your logic to hand the conversation over from the bot to a human:
// Require a minimum LUIS score before any intent is triggered.
let recognizer = new builder.LuisRecognizer(models);
let intentArgs = {
    recognizers: [recognizer],
    intentThreshold: 0.3 // minimum confidence needed to match an intent
};

let intents = new builder.IntentDialog(intentArgs)
    .onDefault(function (session) {
        // Scores below intentThreshold end up here:
        // add your logic to hand the conversation over to a human.
        session.send('Let me connect you with a human.'); // placeholder reply
    });

library.dialog('options', intents);
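If you do want to read the raw score yourself before any dialog runs, you can also call the recognizer from middleware. This is only a minimal sketch, assuming botbuilder v3, that your UniversalBot instance is bot, that modelUrl points at your LUIS endpoint, and that 0.3 is your hand-over threshold:

// Sketch: inspect the LUIS confidence score inside middleware (botbuilder v3).
bot.use({
    botbuilder: function (session, next) {
        builder.LuisRecognizer.recognize(session.message.text, modelUrl,
            function (err, intents) {
                if (!err && intents && intents.length && intents[0].score < 0.3) {
                    // Low confidence: route to a human and skip the dialogs.
                    session.send('Let me connect you with a human.');
                    return; // not calling next() stops normal routing
                }
                next();
            });
    }
});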
I am creating a simple Amazon Alexa game.
I have all the intents I need for the game, but my question is: how do I make sure the user can only choose valid options?
Example: if Alexa asks me a yes or no question, how do I prompt for this?
Currently, if you answer with "10", for example, it will say "You just triggered NumberIntent". I want it to say "Please choose a valid option" and then reprompt until it gets a yes or no.
I am currently using canHandle on the intents, but it doesn't stop the program from crashing.
Would be nice if anyone can help me!
You shouldn't try to force the user to respond to a specific question.
Vocal design is different from visual design. On a website, I can click wherever I want.
Alexa's interaction, and vocal design in general, is like a deck of cards. Each card is an intent.
And as a user, I can choose to pick any card whenever I want.
As a user, I can say:
repeat the question
ask for help
launch another specific intent
or maybe I just want to quit the skill.
The card picked won't be the one you expect. It will go either to an intent you've defined or to AMAZON.FallbackIntent.
So be careful with that.
Maybe if I say something different, it's because I want to do something different.
If you still want to implement it on a specific intent, you need to adapt the logic based on the user's progression.
After you ask the user a question, save the progression in the session attributes:
const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
sessionAttributes.progression = "EXPECT_YES_OR_NO";
handlerInput.attributesManager.setSessionAttributes(sessionAttributes); // persist for the next turn
Within your intent, add logic to detect whether the user should first respond with yes or no:
canHandle(handlerInput) {
    return (
        Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
        Alexa.getIntentName(handlerInput.requestEnvelope) === "MySpecificIntent"
    );
},
handle(handlerInput) {
    const { responseBuilder } = handlerInput;
    const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
    // If we are waiting for a yes/no answer, reprompt instead of running this intent.
    if (sessionAttributes.progression === "EXPECT_YES_OR_NO") {
        const utt = "I didn't catch that, do you like StackOverflow? You can say yes or no.";
        return responseBuilder.speak(utt).reprompt(utt).getResponse();
    }
    // Otherwise handle the intent normally (placeholder reply).
    return responseBuilder.speak("Okay, handling your request.").getResponse();
},
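For completeness, a minimal sketch (assuming the built-in AMAZON.YesIntent and a placeholder reply) of the handler that consumes the answer and clears the flag:

const YesIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
            Alexa.getIntentName(handlerInput.requestEnvelope) === "AMAZON.YesIntent";
    },
    handle(handlerInput) {
        const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
        sessionAttributes.progression = null; // question answered, stop reprompting
        handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
        return handlerInput.responseBuilder
            .speak("Great, glad you like it!") // placeholder reply
            .getResponse();
    },
};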
I recommend you follow the Alexa skill sample tutorial Zero to Hero; it summarises everything you need to know about developing a skill on Alexa, with examples and videos.
I am trying to develop an Alexa Skill with the ASK node.js SDK. I am trying to build a game where Alexa and the user take turns counting (not a great game, but useful educationally for me). Alexa starts with one, then the user says two, then Alexa says three, and so on until the user says an incorrect number. In that case, I hope to implement logic to end the game.
I am struggling to figure out how to get Alexa to respond differently each time the user says a number. Is this a situation where I need multiple intent handlers? It seems like that would be silly, as the general logic does not change. I'm struggling to find up-to-date example code of game logic generally, so any resources that I can learn from would be greatly appreciated. The code I have as of yet is as follows:
const MyGameIntentHandler = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
&& Alexa.getIntentName(handlerInput.requestEnvelope) === 'MyGameIntent';
},
handle(handlerInput) {
const speechText = 'One';
return handlerInput.responseBuilder
.speak(speechText).listen()
.getResponse();
}
};
Obviously, I have not gotten very far. I have successfully created an intent and tested that Alexa will respond with 'One' when I ask to start a game. Where I am stuck is how to get Alexa to say 'One', then wait for the user to say 'Two', and depending on whether they said the correct number, have Alexa say 'Three' or 'Game over' and end the game. The Codecademy course for ASK uses a different, outdated syntax, but it is the closest I have come yet to an answer. It suggests chaining a .listen() after speak, but does not say whether this .listen() will re-prompt the same intent handler.
To make it work as you wish, you need to keep the state of the game in session attributes between utterances. Please read up on them to better understand how they work.
As for your game, I would suggest following these steps:
1. You say "start the game"; the skill responds with "One" (you have already implemented this part) AND stores the game state (i.e. by saving the next expected answer).
2. When it's your turn, the skill should check whether the received answer equals the expected one and react: either continue and store the next expected answer, or finish the game. For this step you'll need another handler, for an intent that expects just a number (see the sketch below).
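A minimal sketch of step 2, assuming ask-sdk-core v2 and a hypothetical intent named CountIntent with a numeric slot named number:

// Sketch: compare the user's number against the expected one kept in session attributes.
const CountIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CountIntent';
    },
    handle(handlerInput) {
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        const said = parseInt(Alexa.getSlotValue(handlerInput.requestEnvelope, 'number'), 10);
        if (said !== attributes.expected) {
            // Wrong number: end the game (the session closes because there is no reprompt).
            return handlerInput.responseBuilder.speak('Game over!').getResponse();
        }
        // Correct: Alexa says the next number and stores the following expectation.
        const alexaSays = said + 1;
        attributes.expected = alexaSays + 1;
        handlerInput.attributesManager.setSessionAttributes(attributes);
        return handlerInput.responseBuilder
            .speak(String(alexaSays))
            .reprompt('Your turn.')
            .getResponse();
    },
};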
There is an example from the Alexa team that shows how to create a game and store the state between utterances: Trivia Game.
I am working on a chatbot with BotMan. I want to integrate Dialogflow's NLP, so I'm calling the middleware and one of its actions. The problem is that BotMan is not hearing it; I just keep getting this error:
This is my intent's action name
This is the way I'm calling the middleware
I'm using my Client access token. I tried calling the action by different names, like 'input.automovil', 'automovil', and (.*), but it still fails, and I haven't found enough examples.
The documentation is not up to date; ApiAi has been renamed to Dialogflow.
Replace
use BotMan\BotMan\Middleware\ApiAi;
with
use BotMan\BotMan\Middleware\Dialogflow;
and
$dialogflow = ApiAi::create('your-key')->listenForAction();
with
$dialogflow = Dialogflow::create('your-key')->listenForAction();
Try changing your lines 27 to 33 to the below:
$botman->hears('automovil', function (BotMan $bot) {
// The incoming message matched the "automovil" action on Dialogflow
// Retrieve Dialogflow information:
$extras = $bot->getMessage()->getExtras();
$apiReply = $extras['apiReply'];
$apiAction = $extras['apiAction'];
$apiIntent = $extras['apiIntent'];
$bot->reply($apiReply);
})->middleware($dialogflow);
I have been exploring Dialogflow for the last 6-7 days and have created a bot which has a menu in the form of a List.
After going through a lot of articles, I learned that we need the event actions_intent_OPTION in one of the intents for a List to work properly. I also learned that, on top of that, we need a handler/intent for actions_intent_OPTION. This intent is triggered once the user taps one of the options of the List.
Now I am struggling to define the handler for the event actions_intent_OPTION. I have defined an intent named "actions_intent_OPTION-handler", but I am not able to find the code for the fulfillment section of Dialogflow which will identify the option selected by the user and call the intent associated with that option.
I am not from a coding background, and I tried one piece of code (index.js); when deployed it doesn't give any error, but when executed in the simulator it throws the error "Failed to parse Dialogflow response into AppResponse because of empty speech response."
Reiterating my requirement: I am looking for sample code which can capture the option selected by the user (from the list) and trigger the already-defined intent.
Details about the bot, list, and intents are attached herewith.
This is the list I defined; currently I am trying to capture the Payment Due Date option (which has the text "Payment Due Date Electricity" defined in the list).
Code in fulfillment section
Intents defined
Note - the intent which needs to be called is "1.1 - ElectricityDetails - DueDate".
Here is the code. Please don't ask me why I have used certain pieces of code, as I am a newbie :).
'use strict';
const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');
const {dialogflow} = require('actions-on-google');
const app = dialogflow({debug: true});
//const agent = new WebhookClient({ request, response });
let intentMap = new Map();
app.intent('actions_intent_OPTION-handler', (conv, params, option) => {
if (!option) {
conv.ask('You did not select any item from the list or carousel');
} else if (option === 'Payment Due Date Electricity') {
//conv.ask('You are great');
//intentMap.set('Default Welcome Intent', welcome);
intentMap.set('1.1 - ElectricityDetails - DueDate',option);
} else {
conv.ask('You selected ' + option);
}
});
//agent.handleRequest(intentMap);
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
You have a few issues here, so it is difficult to tell exactly which one is the real problem, but they all boil down to this statement in your question:
Please don't ask me why i have used certain peice of code
We understand that you're new - we've all been there! But copying code without understanding what it is supposed to do is a risky path.
That said, there are a few things about your design and code that jump out at me as issues:
Mixing libraries
You seem to be loading both the actions-on-google library and the dialogflow-fulfillment library. While most of what you're doing uses the actions-on-google library, the intentMap is used by the dialogflow-fulfillment library.
You can't mix the two. Pick one and learn how to register handlers with it and how those handlers are chosen.
Register handlers with actions-on-google
If you're using the a-o-g library, you'll typically create the app object with something like
const app = dialogflow();
and then register each handler with something like
app.intent( 'intent name', conv => {
// handler code here
});
You'll register the app to handle the request and response with something like
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
Register handler with dialogflow-fulfillment
The dialogflow-fulfillment approach is similar, but it suggests creating a Map that maps from Intent Name to handler function. Something like this:
let intentMap = new Map();
intentMap.set( 'intent name', handlerFunction );
Where handlerFunction is the name of a function you want to use as the handler. It might look something like
function handlerFunction( agent ){
// Handler stuff here
}
You can then create an agent, set the request and response objects it should use, and tell it to use the map to figure out which Intent Handler to call with something like
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });
    agent.handleRequest(intentMap);
});
Intents represent what the user does, not what you do with it
Remember that Intents represent a user's action.
What you do based on that action depends on a lot of things. In your case, once they have selected an option, you want to reply the same way as if they had triggered a particular Intent, right?
You don't do that by trying to trigger that Intent.
What you do is have both Handlers call a function that does what you want. There is nothing fancy about this: both are just calling the same function, just like lots of other code that calls common functions.
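For instance, a minimal sketch with the actions-on-google library, where the reply text is just a placeholder:

// Sketch: both handlers reuse one shared function instead of "triggering" an Intent.
function dueDateResponse(conv) {
    conv.ask('Your electricity payment is due on the 15th.'); // placeholder reply
}

// Called when Dialogflow matches the Intent directly.
app.intent('1.1 - ElectricityDetails - DueDate', dueDateResponse);

// Called when the user taps a list item.
app.intent('actions_intent_OPTION-handler', (conv, params, option) => {
    if (option === 'Payment Due Date Electricity') {
        return dueDateResponse(conv); // same reply as the Intent above
    }
    conv.ask('You selected ' + option);
});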
Don't try to dynamically register handlers
Related to the previous issue, trying to register a new Handler inside an existing Handler won't do what you want. By that time, it is too late, and the handlers are already called.
There may be situations where this makes sense - but they are few and far between, and a very advanced concept. In general, register all your handlers in a central place, as I outlined above.
For demonstration purposes I need to develop an Alexa Skill on a dialogue basis.
All the Alexa responses are hardcoded.
The template of the skill is like:
Part 1:
User: Alexa, ask MySkill {Question1}.
Alexa: Hardcoded answer.
Part 2:
User: Alexa, ask MySkill {Question2.1}
Alexa: Hardcoded answer for Question2.1.
User: Alexa, ask MySkill {Question2.2}
Alexa: Hardcoded answer.
I was able to create Part 1, but in Part 2 I have some problems.
Do I need separate intents for questions 2.1 and 2.2? Or is there a possibility to keep the skill alive?
I'm going to first assume that you're using the alexa-sdk during your development. If you don't know what that is, please check out this link:
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs
There are multiple ways you can break up questions to your skill in your intent schema. They can either be individual intents, such as "QuestionOneIntent" and "QuestionTwoIntent", or a single intent "QuestionIntent" where the slot values in those intents correspond to individual questions. As the original post hasn't given much information, I can't say which structure would be the best setup.
There are two general types of responses in the alexa-sdk. ":tell" will make Alexa say a response and immediately go back to her idle state (not listening to you). ":ask" will say a response, wait 8 seconds, and follow up with a reprompt message all while waiting for you to give another command.
As for keeping the session alive in a conversation, you could simply emit your response by using
var speechOutput = "This is the answer to the question";
var speechOutputReprompt = "Do you have any more questions?";
this.emit(":ask", speechOutput, speechOutputReprompt);
This will allow your session to stay open, and the user can continue to ask more questions. You will have to make another intent that closes the session when the user answers "No" to the reprompt, thus setting the shouldEndSession variable to true. Here is an example of how I might structure the code:
"QuestionIntent": function(){
var responseName = ""
var slots = this.event.request.intent.slots
for (var slot in slots){
if(slots[slot].value != undefined){
responseName = slots[slot].name;
switch(responseName){
case "QuestionOneIntent":
var QuestionOneAnswer = "Answer to question one";
this.emit(":tell", QuestionOneAnswer);
break;
case "QuestionTwoIntent":
var QuestionTwoAnswer = "Answer to question two";
this.emit(":ask", QuestionTwoAnswer, QuestionTwoAnswerReprompt);
break;
default:
console.log("error");
break;
}
}
}
}
Looks like you are using single-turn interactions (i.e. you don't keep the session open; check shouldEndSession, https://www.npmjs.com/package/alexa-app). Regardless, you need to either save the current state in the session object or store it somewhere else (keyed by the unique request.userId).
Using different intents may be another solution, but it's prone to failure if you have similar utterances that may be incorrectly mapped to one another.
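For example, with the alexa-sdk used in the answer above, state saved in this.attributes lives in the session between turns; a minimal sketch with a hypothetical step counter:

// Sketch (alexa-sdk): keep dialogue state in session attributes between turns.
"QuestionIntent": function () {
    var step = this.attributes.step || 1; // which question we are on
    this.attributes.step = step + 1;      // saved into the session automatically
    this.emit(":ask", "Answer to question " + step, "Anything else?");
},
"AMAZON.NoIntent": function () {
    // The user is done: ":tell" closes the session (shouldEndSession = true).
    this.emit(":tell", "Goodbye!");
}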