How can I check my input after each iteration with LUIS? - node.js

if (!meeting.location) {
    builder.Prompts.text(session, 'where is the location');
} else {
    next();
}
This is part of my node.js code where my bot tries to identify the location in the first message. If it doesn't find one, it accepts whatever value the user gives as the location, which might be as random as "83748yhgsdh".
My question is: how can I make my system check the user input at each step and only accept reasonable values?

You can probably manually call the LUIS recognizer yourself,
e.g.:
builder.LuisRecognizer.recognize(
    "hello",
    model,
    function (err, intents, entities) {
        console.log(intents);
    }
);
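Building on that idea, here is a minimal sketch (using the hypothetical dialog id '/askLocation' and the same model URL, and assuming the Bot Builder v3 Node APIs shown elsewhere on this page) of how the reply to the location prompt could be re-run through LUIS before it is accepted, so random strings like "83748yhgsdh" get re-prompted instead of stored:
bot.dialog('/askLocation', [
    function (session) {
        builder.Prompts.text(session, 'where is the location');
    },
    function (session, results) {
        // Re-run the raw reply through the LUIS model before accepting it
        builder.LuisRecognizer.recognize(results.response, model, function (err, intents, entities) {
            // 'Location' is whatever entity name your LUIS app defines
            var location = (!err && entities) ? builder.EntityRecognizer.findEntity(entities, 'Location') : null;
            if (location) {
                // A reasonable value was found - hand it back to the calling waterfall
                session.endDialogWithResult({ response: location.entity });
            } else {
                // Nothing reasonable found - ask again
                session.replaceDialog('/askLocation');
            }
        });
    }
]);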

Related

What is the best practice to avoid utterance conflicts in an Alexa Skill

In the screenshot below, I have got an utterance conflict, which is obvious because I am using similar patterns of samples in both the utterances.
My question is: the skill I am developing requires similar patterns in multiple utterances, and I cannot force users to say something like “Yes, I want to continue” or “I want to store…”.
In such a scenario, what is the best practice to avoid utterance conflicts while still supporting multiple similar patterns?
I can use a single utterance and based on what a user says, I can decide what to do.
Here is an example of what I have in my mind:
User says something against {note}
In the skill I check this:
if(this$inputs.note.value === "no") {
// auto route to stop intent
} else if(this$inputs.note.value === "yes") {
// stays inside the same intent
} else {
// does the database stuff and saves the value.
// then asks the user whether he wants to continue
}
The above loop continues until the user says “no”.
But is this the right way to do it? If not, what is the best practice?
Please suggest.
The issue is really that for those two intents you have slots with no context around them. I'm also assuming you're using these slots as catch-all slots, meaning you want to capture everything the person says.
From experience: this is very difficult/annoying to implement and will not result in a good user experience.
For the HaveMoreNotesIntent, what you want to do is have separate YesIntent and NoIntent handlers and then route the user to the correct function/intent based on the intent history (a.k.a. context). You just have to enable this in your config file.
YesIntent() {
    console.log(this.$user.$context.prev[0].request.intent);
    // Check if last intent was either of the following
    if (
        ['TutorialState.TutorialStartIntent', 'TutorialLearnIntent'].includes(
            this.$user.$context.prev[0].request.intent
        )
    ) {
        return this.toStateIntent('TutorialState', 'TutorialTrainIntent');
    } else {
        return this.toStateIntent('TutorialState', 'TutorialLearnIntent');
    }
}
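For reference, enabling the intent history in your Jovo config.js might look roughly like this (a sketch based on the Jovo v2 user-context docs; the exact keys are an assumption and may differ between versions, so check the framework documentation):
// config.js (sketch - option names assumed from Jovo v2 docs)
module.exports = {
    // ...
    user: {
        context: {
            prev: {
                size: 1,
                request: {
                    intent: true,
                },
            },
        },
    },
};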
OR if you are inside a state you can have yes and no intents inside that state that will only work in that state.
ISPBuyState: {
    async _buySpecificPack() {
        console.log('_buySpecificPack');
        this.$speech.addText(
            'Right now I have a "sports expansion pack". Would you like to hear more about it?'
        );
        return this.ask(this.$speech);
    },
    async YesIntent() {
        console.log('ISPBuyState.YesIntent');
        this.$session.$data.productReferenceName = 'sports';
        return this.toStatelessIntent('buy_intent');
    },
    async NoIntent() {
        console.log('ISPBuyState.NoIntent');
        return this.toStatelessIntent('LAUNCH');
    },
    async CancelIntent() {
        console.log('ISPBuyState.CancelIntent()');
        return this.toStatelessIntent('LAUNCH');
    }
}
I hope this helps!

Microsoft Bot Framework LUIS in waterfall conversation

I have an existing waterfall conversation. I want to adapt it so that it can extract data from more complex user responses to the bot's questions.
In my LUIS app I have created an intent called GetLocation which is trained to find an entity called Location. An example of this is the user typing "I am looking in Bristol" which would match the entity "Bristol". This is what I currently have:
function(session) {
    builder.Prompts.text(session, "Hello... Which city are you looking in?");
},
function(session, results) {
    session.privateConversationData.city = results.response;
    builder.Prompts.number(session, "Ok, you are looking in " + results.response + ", How many bedrooms are you looking for?");
},
etc...
Instead of simply storing the response string, I want to send the response string off to LUIS and extract the city location from it. All of the LUIS examples I've found are for matching and going to new Intents however I simply want to keep the waterfall conversation going. How would I utilise LUIS to do this?
I think you can do this by having two different dialogs set up:
Dialog 1:
This is the dialog you have above, your normal Waterfall dialog that drives the conversation.
Dialog 2:
This dialog will be created with a LUIS Intent recognizer using your LUIS model. Dialog 1 will issue the prompt, then pass the user to this dialog and parse the text entered by the user. Since your Model is already trained to recognize location, all you need to do now is extract the entity.
After dialog 2 has parsed the location information using LUIS, and extracted the entity, you will end the dialog and return the entity (location) back to dialog 1, which will still be on the Dialog Stack.
Code
//create intent recognizer based on LUIS model
var luisModel = "<Your LUIS Model URL>";
var recognizer = new botbuilder.LuisRecognizer(luisModel);
//create dialog handler for info to be parsed by LUIS
var dialog = new botbuilder.IntentDialog({ recognizers: [recognizer] });
//register the LUIS intent dialog under the path used by beginDialog below
bot.dialog("/begin_loc_parse", dialog);
//root dialog
bot.dialog("/", [
    function(session){
        //prompt user and pop LUIS intent dialog onto dialog stack
        session.send("Hello, which city are you looking in?");
        session.beginDialog("/begin_loc_parse");
    },
    //this will be resumed after our location has been extracted
    function(session, results){
        //check for extracted location
        if(results.entity){
            //got location successfully
            session.send("Got city from user: " + results.entity);
            //resume normal waterfall with location.....
        } else {
            //start over
            session.beginDialog("/");
        }
    }
]);
//LUIS intent dialog
dialog.matches("input_location", function(session, args){
    //grab location entity
    var city = botbuilder.EntityRecognizer.findEntity(args.entities, "builtin.geography.city");
    if(city){
        //pop the LUIS dialog off of the dialog stack
        //and return the extracted location back to waterfall
        session.endDialogWithResult(city);
    } else session.endDialog("Couldn't extract city entity.");
});
//called if user doesn't enter something like "I am looking in [city]"
dialog.onDefault(function(session, args){
    session.send("I'm sorry, I didn't quite catch that. In which city are you looking?");
});
So basically, in the root dialog, you prompt the user for the location and then call session.beginDialog("/begin_loc_parse"), which passes the conversation to your LUIS intent dialog.
Any text entered by the user after this point will be interpreted by your LUIS model. This allows you to use your model to recognize and extract the location information from the user.
Then, the key is to use session.endDialogWithResult() to pop the LUIS dialog off the stack and go back to your original waterfall with your newly extracted location.
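For example, the resumed step in the root waterfall could simply pick up where the original code left off (a sketch that combines the snippets above; the bedrooms prompt is taken from the question):
//this step runs after "/begin_loc_parse" calls session.endDialogWithResult(city)
function (session, results) {
    if (results.entity) {
        session.privateConversationData.city = results.entity;
        botbuilder.Prompts.number(session, "Ok, you are looking in " + results.entity + ", How many bedrooms are you looking for?");
    } else {
        //no city extracted - start over
        session.beginDialog("/");
    }
}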

Bot Framework responds with the intent with the lower score

I have a bot which uses two LUIS apps as LuisRecognizers to guess the client intent. My question is: why does the bot respond with the intent that has the lower score? I double-checked this, and if I manually check the score through the LUIS dashboard I receive something like: IntentA with score 0.92 and IntentB with score 1. But if I pass the same input through the Bot Framework it responds with IntentA, which has the lower score. Am I missing something?
I tried to play with intentThreshold, recognizeMode, and recognizeOrder, all mentioned in the docs, but did not get better results.
If you look at the C# code of the Bot Framework, you can see the "best intent from" function is implemented like the following:
protected virtual IntentRecommendation BestIntentFrom(LuisResult result)
{
    return result.Intents.MaxBy(i => i.Score ?? 0d);
}
If you want to test this, you can override it in your LuisDialog to see the details of its mechanism (by logging the scores of the intents).
As you can see, the intent with the max score is chosen at the decision point.
Also, you can find the LUIS recognizer in the Node.js SDK:
LuisRecognizer.recognize(utterance, model, (err, intents, entities) => {
    if (!err) {
        result.intents = intents;
        result.entities = entities;
        // Return top intent
        var top: IIntent;
        intents.forEach((intent) => {
            if (top) {
                if (intent.score > top.score) {
                    top = intent;
                }
            } else {
                top = intent;
            }
        });
        if (top) {
            result.score = top.score;
            result.intent = top.intent;
            // Correct score for 'none' intent
            // - The 'none' intent often has a score of 1.0 which
            //   causes issues when trying to recognize over multiple
            //   models. Setting to 0.1 lets the intent still be
            //   triggered but keeps it from trampling other models.
            switch (top.intent.toLowerCase()) {
                case 'builtin.intent.none':
                case 'none':
                    result.score = 0.1;
                    break;
            }
        }
        cb(null, result);
    } else {
        cb(err, null);
    }
});
Again, the same as the C# code: the recognizer chooses the intent with the max score returned by the LUIS application model.
Therefore, this problem does not come from the client.
Hence, a suggestion would be to inspect the JSON response that LUIS returns to your client.
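For example, a quick way to see exactly what the client receives is to push the same utterance through both models with the static recognizer and log every intent score (a sketch; the model URLs and the test utterance are placeholders for your own):
// Hypothetical placeholders - use the same model URLs your LuisRecognizers are configured with
var modelA = "<LUIS app A endpoint URL>";
var modelB = "<LUIS app B endpoint URL>";
[modelA, modelB].forEach(function (modelUrl) {
    builder.LuisRecognizer.recognize("the same input you tested in the dashboard", modelUrl, function (err, intents, entities) {
        if (err) {
            console.error(err);
            return;
        }
        // Log every intent and its score exactly as the bot sees them
        intents.forEach(function (intent) {
            console.log(modelUrl, intent.intent, intent.score);
        });
    });
});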
Have you tried your published model from the LUIS dashboard? I had the same problem because LUIS wasn't publishing my model correctly at the time and didn't pick up the changes I made, so the trained model worked perfectly in the dashboard but the published one didn't.
I tried again the next day and it published everything correctly, both in the dashboard and in the Bot Framework.

BotFramework: Start a Form from a Dialog using Intents

Regarding the Microsoft Bot Framework, we all know the samples given by Microsoft. Those samples, however, normally have "one single purpose", that is, the Pizzabot is only for ordering Pizzas, and so on.
Thing is, I was hoping to create a more complex bot that actually answers a series of things. For this I am creating a "lobby" dialog where all the messages go, using this MessagesController:
return await Conversation.SendAsync(message, () => new LobbyDialog());
On that "Lobby" dialog I have a series of LUIS intents for different things, and since it picks the Task based on the intent, it works nicely.
However, for more complex operations, I was hoping to use the FormFlow mechanism so I can have forms like in the PizzaBot sample. The problem is that all of the "form bots" that are sampled always use this message controller style:
return Chain.From(() => new PizzaOrderDialog(BuildForm));
And the same MessagesController establishes the builder flow, like this:
var builder = new FormBuilder<PizzaOrder>();
ActiveDelegate<PizzaOrder> isBYO = (pizza) => pizza.Kind == PizzaOptions.BYOPizza;
ActiveDelegate<PizzaOrder> isSignature = (pizza) => pizza.Kind == PizzaOptions.SignaturePizza;
ActiveDelegate<PizzaOrder> isGourmet = (pizza) => pizza.Kind == PizzaOptions.GourmetDelitePizza;
ActiveDelegate<PizzaOrder> isStuffed = (pizza) => pizza.Kind == PizzaOptions.StuffedPizza;
return builder
    // .Field(nameof(PizzaOrder.Choice))
    .Field(nameof(PizzaOrder.Size))
    .Field(nameof(PizzaOrder.Kind))
    .Field("BYO.Crust", isBYO)
    .Field("BYO.Sauce", isBYO)
    .Field("BYO.Toppings", isBYO)
    .Field(nameof(PizzaOrder.GourmetDelite), isGourmet)
    .Field(nameof(PizzaOrder.Signature), isSignature)
    .Field(nameof(PizzaOrder.Stuffed), isStuffed)
    .AddRemainingFields()
    .Confirm("Would you like a {Size}, {BYO.Crust} crust, {BYO.Sauce}, {BYO.Toppings} pizza?", isBYO)
    .Confirm("Would you like a {Size}, {&Signature} {Signature} pizza?", isSignature, dependencies: new string[] { "Size", "Kind", "Signature" })
    .Confirm("Would you like a {Size}, {&GourmetDelite} {GourmetDelite} pizza?", isGourmet)
    .Confirm("Would you like a {Size}, {&Stuffed} {Stuffed} pizza?", isStuffed)
    .Build();
My big question here is: is it possible to start the conversation with the MessagesController that I used, and then in the LobbyDialog use an intent that fires a form and returns it? That is, start a form from a dialog? Or is it better to use dialog chains for that?
Because, from what I tried, it appears that I can ONLY do forms if they are called from the MessagesController class with the methods I described, that is, how Microsoft sampled it in the PizzaBot.
I appreciate any help or input on the matter. Thanks for your time.
Sure you can! Instantiating a form from a dialog is a pretty common scenario. To accomplish that, you can do the following inside the LUIS intent method:
var form = new FormDialog<YourFormModel>(
    <ExistingModel>,
    <TheMethodThatBuildsTheForm>,
    FormOptions.PromptInStart,
    result.Entities);
context.Call(form, <ResumeAfterCallback>);
Using the PizzaBot sample, it should look like:
var form = new FormDialog<PizzaOrder>(
    null,
    BuildForm,
    FormOptions.PromptInStart,
    result.Entities);
context.Call(form, <ResumeAfterCallback>);
In the ResumeAfterCallback you will usually get the result of the form, catch exceptions, and perform a context.Wait so the dialog can keep receiving messages. Below is a quick example:
private async Task ResumeAfterCallback(IDialogContext context,
    IAwaitable<PizzaOrder> result)
{
    try
    {
        var pizzaOrder = await result;
        // do something with the pizzaOrder
        context.Wait(this.MessageReceived);
    }
    catch (FormCanceledException<PizzaOrder> e)
    {
        string reply;
        if (e.InnerException == null)
        {
            reply = "You have canceled the operation. What would you like to do next?";
        }
        else
        {
            reply = $"Oops! Something went wrong :(. Technical Details: {e.InnerException.Message}";
        }
        await context.PostAsync(reply);
        context.Wait(this.MessageReceived);
    }
}

How to add additional dialog in bot framework

How can I have 2 conversations going concurrently? I'm currently using TextBot and LuisDialog to build a bot. I start off by having a conversation with the user to obtain data. Then while doing some processing in a different method, I discover that I need additional information from the user. How can I create a new conversation with the user just to get that additional information? I have some code below that attempts to show what I want to do. Thanks for your suggestions.
File 1: foo.js
var dialog = new builder.LuisDialog(model);
var sonnyBot = new builder.TextBot();
sonnyBot.add('/', dialog);
dialog.on('intent_1', [
    function(session, args, next) {
        builder.Prompts.text(session, "What is your name?");
    },
    function(session, results) {
        session.dialogData.name = results.response;
        getFamilyTree(session.dialogData.name);
    }
]);
File 2: getFamilyTree.js
function getFamilyTree(name) {
    // find family tree for name
    if (/* need place of birth */) {
        // begin new dialog
        // prompt user for place of birth
        // get place of birth from user
        // end dialog
    }
    // finish getting the family tree
}
I guess you could pass the session object and then use that object to start a new dialog.
Edit 1
Can't you use something like
session.beginDialog('/getFamilyTree', { name: results.response });
and then access the name via
args.name
inside the '/getFamilyTree' dialog?
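To flesh that out, a minimal sketch of such a '/getFamilyTree' dialog (the dialog id and the place-of-birth prompt are illustrative, assuming the same Bot Builder Node APIs used in the question) could prompt for the missing detail and hand the result back with endDialogWithResult:
sonnyBot.add('/getFamilyTree', [
    function (session, args) {
        session.dialogData.name = args.name;
        // Suppose the lookup found we still need the place of birth
        builder.Prompts.text(session, 'Where was ' + args.name + ' born?');
    },
    function (session, results) {
        var placeOfBirth = results.response;
        // ...finish building the family tree with name + placeOfBirth...
        session.endDialogWithResult({
            response: { name: session.dialogData.name, placeOfBirth: placeOfBirth }
        });
    }
]);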
I posted the same question on GitHub and received the answer from Steven Ickman, who is involved in the development of the node.js SDK. The link to the answer is https://github.com/Microsoft/BotBuilder/issues/394#issuecomment-223127365
