I have a bot that uses two LUIS apps as LuisRecognizers to guess the client's intent. My question is: why does the bot respond with the intent that has the lower score? I double-checked this, and if I manually check the score through the LUIS dashboard I receive something like IntentA with score 0.92 and IntentB with score 1. Yet if I pass the same input through the Bot Framework, it responds with IntentA, which has the lower score. Am I missing something?
I tried playing with intentThreshold, recognizeMode, and recognizeOrder, all mentioned in the docs, but did not get better results.
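For reference, the two recognizers and those options are wired up on an IntentDialog roughly like this (a sketch; the model URLs and values below are placeholders, not my actual setup):

var builder = require('botbuilder');

// Placeholder environment variables for the two published LUIS model URLs
var recognizerA = new builder.LuisRecognizer(process.env.LUIS_MODEL_A);
var recognizerB = new builder.LuisRecognizer(process.env.LUIS_MODEL_B);

var intents = new builder.IntentDialog({
    recognizers: [recognizerA, recognizerB],
    intentThreshold: 0.5,                           // ignore intents scoring below this
    recognizeOrder: builder.RecognizeOrder.parallel // run both recognizers and keep the best score
});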
If you look at the C# code of the Bot Framework, you can see that the "best intent from" function is implemented as follows:
protected virtual IntentRecommendation BestIntentFrom(LuisResult result)
{
    return result.Intents.MaxBy(i => i.Score ?? 0d);
}
If you want to test this, you can override it in your LuisDialog to see the details of its mechanism (by logging the scores of the intents).
As you can see, the intent with the maximum score is chosen at the decision point.
You can also find the LUIS recognizer in the Node.js SDK:
LuisRecognizer.recognize(utterance, model, (err, intents, entities) => {
    if (!err) {
        result.intents = intents;
        result.entities = entities;

        // Return top intent
        var top: IIntent;
        intents.forEach((intent) => {
            if (top) {
                if (intent.score > top.score) {
                    top = intent;
                }
            } else {
                top = intent;
            }
        });
        if (top) {
            result.score = top.score;
            result.intent = top.intent;

            // Correct score for 'none' intent
            // - The 'none' intent often has a score of 1.0 which
            //   causes issues when trying to recognize over multiple
            //   model. Setting to 0.1 lets the intent still be
            //   triggered but keeps it from trompling other models.
            switch (top.intent.toLowerCase()) {
                case 'builtin.intent.none':
                case 'none':
                    result.score = 0.1;
                    break;
            }
        }
        cb(null, result);
    } else {
        cb(err, null);
    }
});
Again, the same as the C# code: the recognizer chooses the maximum score, as long as an application model exists in LUIS.
Therefore, this problem does not come from the client.
A suggestion, then, is to look at the raw JSON response that LUIS returns to your client.
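For example, here is a minimal sketch (the region, endpoint format, and key names are assumptions, not taken from your setup) that hits the published LUIS endpoint directly and logs the scores, so you can compare them with what the bot reports:

var request = require('request');

var appId = process.env.LUIS_APP_ID;                     // assumed
var subscriptionKey = process.env.LUIS_SUBSCRIPTION_KEY; // assumed
var utterance = 'the same text you typed to the bot';

var url = 'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/' + appId +
    '?subscription-key=' + subscriptionKey +
    '&verbose=true&q=' + encodeURIComponent(utterance);

request(url, function (err, res, body) {
    if (err) { return console.error(err); }
    var result = JSON.parse(body);
    console.log(result.topScoringIntent); // the intent LUIS itself considers best
    console.log(result.intents);          // all intents with scores (verbose=true)
});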
Have you tried your published model from the LUIS dashboard? I had the same problem because LUIS wasn't publishing my model correctly at that moment and didn't pick up the changes I made, so the trained model worked perfectly in the dashboard but the published one did not.
I tried the next day and everything published correctly in both the dashboard and the Bot Framework.
In the screenshot below, I have an utterance conflict, which is obvious because I am using similar sample patterns in both utterances.
My question is: the skill I am developing requires similar patterns in multiple utterances, and I cannot force users to say something like "Yes I want to continue" or "I want to store…".
In such a scenario, what is the best practice to avoid utterance conflicts while still having multiple similar patterns?
I can use a single utterance and, based on what the user says, decide what to do.
Here is an example of what I have in mind:
The user says something that fills the {note} slot.
In the skill I check this:
if (this.$inputs.note.value === "no") {
    // auto route to stop intent
} else if (this.$inputs.note.value === "yes") {
    // stay inside the same intent
} else {
    // do the database work and save the value,
    // then ask the user whether they want to continue
}
The above loop continues until the user says “no”.
But is this the right way to do it? If not, what is the best practice?
Please suggest.
The issue is really that for those two intents you have slots with no context around them. I'm also assuming you're using these slots as catch-all slots, meaning you want to capture everything the person says.
From experience: this is very difficult/annoying to implement and will not result in a good user experience.
For the HaveMoreNotesIntent, what you want to do is have a separate YesIntent and NoIntent and then route the user to the correct function/intent based on the intent history (aka context). You just have to enable this in your config file.
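As a rough sketch (option names may differ slightly between Jovo versions, so double-check the Jovo docs), enabling the context history in config.js looks something like this:

// config.js (sketch) - stores previous requests/responses per user,
// which makes this.$user.$context.prev[0] available in handlers
module.exports = {
    logging: true,

    user: {
        context: true, // enable the user context history
    },

    db: {
        FileDb: {
            pathToFile: './../db/db.json', // the context needs a database to persist
        },
    },
};

With that enabled, the previous intent can be read inside the YesIntent handler: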
YesIntent() {
    console.log(this.$user.$context.prev[0].request.intent);
    // Check if last intent was either of the following
    if (
        ['TutorialState.TutorialStartIntent', 'TutorialLearnIntent'].includes(
            this.$user.$context.prev[0].request.intent
        )
    ) {
        return this.toStateIntent('TutorialState', 'TutorialTrainIntent');
    } else {
        return this.toStateIntent('TutorialState', 'TutorialLearnIntent');
    }
}
Or, if you are inside a state, you can have YesIntent and NoIntent handlers inside that state that will only trigger while the user is in that state.
ISPBuyState: {
    async _buySpecificPack() {
        console.log('_buySpecificPack');
        this.$speech.addText(
            'Right now I have a "sports expansion pack". Would you like to hear more about it?'
        );
        return this.ask(this.$speech);
    },

    async YesIntent() {
        console.log('ISPBuyState.YesIntent');
        this.$session.$data.productReferenceName = 'sports';
        return this.toStatelessIntent('buy_intent');
    },

    async NoIntent() {
        console.log('ISPBuyState.NoIntent');
        return this.toStatelessIntent('LAUNCH');
    },

    async CancelIntent() {
        console.log('ISPBuyState.CancelIntent()');
        return this.toStatelessIntent('LAUNCH');
    }
}
I hope this helps!
I have built an Alexa skill with the following flow:
LAUNCH -> AccountLinkingIntent -> CampaignIntent
In AccountLinkingIntent, I am presently routing to CampaignIntent if the account is already linked.
Up to this point everything is working fine. Now I have to add another intent, ActiveContactIntent, so that the flow becomes:
LAUNCH -> AccountLinkingIntent -> CampaignIntent / ActiveContactIntent
i.e., from AccountLinkingIntent I need to decide which intent to route to.
The invocation goes like this (CampaignIntent):
Alexa, ask <invocation_name> to get my latest campaign result
OR (ActiveContactIntent)
Alexa, ask <invocation_name> who is my most active contact
Based on the utterance, I need to tell Alexa where to go. So far I have the following in AccountLinkingIntent
...
return this.toIntent("CampaignIntent");
...
But now I need to decide the same as this:
...
if ( ... ) {
    return this.toIntent("CampaignIntent");
} else {
    return this.toIntent("ActiveContactIntent");
}
...
Is there any way to get the intent name from the utterance so that I can check it, like this:
...
if (intent_name_by_utterance === "CampaignIntent") {
    return this.toIntent("CampaignIntent");
} else {
    return this.toIntent("ActiveContactIntent");
}
...
Or, if it is possible to get intent_name_by_utterance, I could also pass the value as the argument of the toIntent method, if passing a variable is allowed:
return this.toIntent(intent_name_by_utterance);
UPDATE:
I have tried the following to see whether the intent name is being returned:
LAUNCH() {
    return this.toIntent("LinkAccountIntent");
},

async LinkAccountIntent() {
    const intent_name = this.$request.getIntentName();
    this.tell(`Current intent is: ${intent_name}`);
},
I invoked the skill in the following two ways:
Alexa, ask <invocation-name> to give me my latest campaign results
Alexa, ask <invocation-name> who is my most active contact
give me my latest campaign results and who is my most active contact are the utterances for the respective intents.
I am using the Alexa test console for testing. In both cases I was expecting the name of the intent (this.$request.getIntentName()), but ended up with
Hmm, I don't know that one.
My intention is to call an intent by its utterance directly by waking up the skill using its invocation name.
Any suggestion?
I think you should handle both intents separately and not through the launch, because in a case like
"Alexa, ask <invocation_name> who is my most active contact"
Alexa skips the launch and jumps directly to resolving the intent ActiveContactIntent.
So just as you have the LAUNCH(){} function, you must also have CampaignIntent(){} and ActiveContactIntent(){}.
That way you will avoid Alexa answering
Hmm, I don't know that one.
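As a rough sketch (the handler bodies here are placeholders, not your actual logic), the handler layout would look something like this:

app.setHandler({
    LAUNCH() {
        // reached only when the skill is opened without a specific utterance
        return this.toIntent('AccountLinkingIntent');
    },

    AccountLinkingIntent() {
        // ... your existing account linking check
    },

    async CampaignIntent() {
        // ... fetch and speak the latest campaign results
        this.tell('Here are your latest campaign results.');
    },

    async ActiveContactIntent() {
        // ... look up and speak the most active contact
        this.tell('Your most active contact is ...');
    },
});

That way, a one-shot invocation like "ask <invocation_name> who is my most active contact" is routed by Alexa straight to ActiveContactIntent.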
To verify that the user has already linked their account, you would add code like the one below:
if (!this.$request.getAccessToken()) {
    this.$alexaSkill.showAccountLinkingCard();
    this.tell('Please link your Account');
} else {
    // code for your respective action
}
I recommend you check the documentation about routing with Jovo to get a little more clarity on this topic. You can review it at the following link:
https://www.jovo.tech/docs/routing
My apologies in advance for the long post.
I am quite new to AWS Rekognition and Lambda, but I took on a project to build a facial recognition system using AWS S3, Rekognition, and Lambda. I managed to get a working solution using a few of the Rekognition APIs provided in the AWS JavaScript SDK documentation, but it only works when there is one face in the input image. I started playing around with images that have multiple faces, but it doesn't give the response I'm looking for. After doing research, I narrowed my problem down to the following:
I need to be able to specify which faces I want to index in an image with multiple faces, using the indexFaces API.
NOTE: I'm using JavaScript.
My logic for a single face in an image is that I use the SearchFacesByImage API and first check whether I have indexed Person 1's face to 'allFaces' in the past. If I have, then I don't need to index Person 1's face again; if I have not, then I need to do that first.
Up until this point, everything works fine when I'm using an image with a single face as input. (See code example down below)
Here comes the problem: when I have an image with multiple faces, including the face of Person 1, it indexes all the faces in that image, including Person 1's face again, and adds them to the 'allFaces' collection. What I want to achieve is for the system to pick up that Person 1 has been indexed in the past, skip Person 1, and index only the other people in the image.
That is how I came to refine my problem to being able to specify which faces I want to index in an image that contains multiple faces, because if I can achieve that, then I can say that Person 1 has already been indexed and continue with Person 2.
In the indexFaces API you can specify the "MaxFaces" and "QualityFilter" parameters. I have looked at those, but I don't believe they hold the answer to my problem, so I'm steering away from them unless they definitely do.
I'm also not sure whether there is an issue with my logic, or whether my logic is okay and my lack of JavaScript knowledge is what's holding me back.
Here is what I've done thus far for a single face in an image:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({apiVersion: "2006-03-01"});
const rekognition = new AWS.Rekognition();
//-----------------------------Exports Function-----------------------
exports.handler = function(event, context) {
    const bucket = event.Records[0].s3.bucket.name;
    const key = event.Records[0].s3.object.key;
    console.log(bucket);
    console.log(key);
    searchingFacesByImage(bucket, key);
};
//--------------------------------------------------------------------
// Search for a face in an input image
function searchingFacesByImage(bucket, key) {
    let params = {
        CollectionId: "allFaces",
        FaceMatchThreshold: 95,
        Image: {
            S3Object: {
                Bucket: bucket,
                Name: key
            }
        },
        MaxFaces: 5
    };
    const searchingFace = rekognition.searchFacesByImage(params, function(err, searchdata) {
        if (err) {
            console.log(err, err.stack); // an error occurred
        } else {
            // console.log(JSON.stringify(searchdata, null, '\t'));
            // If searchdata.FaceMatches.length > 0, the face in the image already exists in the collection
            if (searchdata.FaceMatches.length > 0) {
                console.log("Face is a match");
                // Continue
            } else {
                console.log("Face is not a match");
                console.log("Start indexing face to 'allFaces'");
                indexToAllFaces(bucket, key);
            }
        }
    });
    return searchingFace;
}
//--------------------------------------------------------------------
// If face is not a match in 'allFaces', index face to 'allFaces' collection
function indexToAllFaces(bucket, key) {
    let params = {
        CollectionId: "allFaces",
        DetectionAttributes: ['ALL'],
        Image: {
            S3Object: {
                Bucket: bucket,
                Name: key
            }
        }
    };
    const indexFace = rekognition.indexFaces(params, function(err, data) {
        if (err) {
            console.log(err, err.stack); // an error occurred
        } else {
            console.log("INDEXING TO 'allFaces'");
            console.log(JSON.stringify(data, null, '\t'));
        }
    });
    return indexFace;
}
//--------------------------------------------------------------------
Like I said, this works fine when using images with a single face. That is why I'm hoping to add some logic that filters through the faces in an image with multiple faces, so that anyone whose face has been indexed in the past is not indexed again.
Thanks in advance for any feedback.
You will need to use DetectFaces() to obtain a list of all faces detected in the image.
Then, for each face returned:
Use the BoundingBox to copy and crop the image so that it only shows the given face
Use SearchFacesByImage() to determine whether it is already in the face collection
If it is not in the face collection, use IndexFaces() to add it. It will add the single face to the collection.
From experience, it is also a good idea to associate an ExternalImageId with each face added. This can contain your own reference to the face and can be used in a database to store additional information about the face (e.g. id, name, or which picture it came from). The ExternalImageId will be returned with certain calls when the face is detected in images.
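A rough sketch of that loop follows (not production code; it assumes the 'sharp' npm package for cropping, a collection named 'allFaces', and the aws-sdk v2 clients already shown in your question):

const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3({ apiVersion: '2006-03-01' });
const rekognition = new AWS.Rekognition();

async function indexNewFaces(bucket, key) {
    // Load the original image once so each detected face can be cropped from it
    const original = (await s3.getObject({ Bucket: bucket, Key: key }).promise()).Body;
    const { width, height } = await sharp(original).metadata();

    // 1. Detect every face in the image
    const detection = await rekognition.detectFaces({
        Image: { S3Object: { Bucket: bucket, Name: key } }
    }).promise();

    for (const face of detection.FaceDetails) {
        // 2. Crop the image down to this face using its relative BoundingBox
        const box = face.BoundingBox;
        const left = Math.max(0, Math.round(box.Left * width));
        const top = Math.max(0, Math.round(box.Top * height));
        const crop = await sharp(original)
            .extract({
                left: left,
                top: top,
                width: Math.min(width - left, Math.round(box.Width * width)),
                height: Math.min(height - top, Math.round(box.Height * height))
            })
            .toBuffer();

        // 3. Check whether this face is already in the collection
        const search = await rekognition.searchFacesByImage({
            CollectionId: 'allFaces',
            FaceMatchThreshold: 95,
            Image: { Bytes: crop },
            MaxFaces: 1
        }).promise();

        // 4. Only index faces that were not found in the collection
        if (search.FaceMatches.length === 0) {
            await rekognition.indexFaces({
                CollectionId: 'allFaces',
                MaxFaces: 1, // the crop contains a single face
                ExternalImageId: key.replace(/[^a-zA-Z0-9_.\-:]/g, '_'), // your own reference
                Image: { Bytes: crop }
            }).promise();
        }
    }
}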
if (!meeting.location) {
    builder.Prompts.text(session, 'where is the location');
} else {
    next();
}
This is part of my Node.js code where my bot tries to identify the location in the first message. If it doesn't find one, it takes as the location any value the user gives, which might be as random as "83748yhgsdh".
My question is: how can I have my system check the user input at each step and only accept reasonable values?
You can probably call the LUIS recognizer manually, e.g.:
builder.LuisRecognizer.recognize(
    "hello",
    model,
    function (err, intents, entities) {
        console.log(intents);
    }
);
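For the location prompt itself, here is a rough sketch (the dialog name, bot instance, model URL, and geography entity type are assumptions, not taken from your code) of validating the answer with LUIS before accepting it:

var builder = require('botbuilder');
var model = process.env.LUIS_MODEL_URL; // assumed endpoint URL of your LUIS app

bot.dialog('/location', [
    function (session) {
        builder.Prompts.text(session, 'Where is the location?');
    },
    function (session, results) {
        // Re-run the user's answer through LUIS and only accept it when a
        // location-like entity is actually recognized
        builder.LuisRecognizer.recognize(results.response, model, function (err, intents, entities) {
            var location = entities &&
                builder.EntityRecognizer.findEntity(entities, 'builtin.geography.city');
            if (location) {
                session.endDialogWithResult({ response: location.entity });
            } else {
                session.send("That doesn't look like a location to me.");
                session.replaceDialog('/location'); // ask again
            }
        });
    }
]);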
Regarding the Microsoft Bot Framework, we all know the samples given by Microsoft. Those samples, however, normally have one single purpose; that is, the PizzaBot is only for ordering pizzas, and so on.
The thing is, I was hoping to create a more complex bot that actually answers a series of things. For this I am creating a "lobby" dialog where all the messages go, using this MessagesController:
return await Conversation.SendAsync(message, () => new LobbyDialog());
On that "Lobby" dialog I have a series of LUIS intents for different things, and since it picks the Task based on the intent, it works nicely.
However, for more complex operations, I was hoping on using the FormFlow mechanism so I can have forms like in the PizzaBot sample. The problem is that all of the "form bots" that are sampled always use this message controller type:
return Chain.From(() => new PizzaOrderDialog(BuildForm));
And the same MessagesController establishes the builder flow, like this:
var builder = new FormBuilder<PizzaOrder>();
ActiveDelegate<PizzaOrder> isBYO = (pizza) => pizza.Kind == PizzaOptions.BYOPizza;
ActiveDelegate<PizzaOrder> isSignature = (pizza) => pizza.Kind == PizzaOptions.SignaturePizza;
ActiveDelegate<PizzaOrder> isGourmet = (pizza) => pizza.Kind == PizzaOptions.GourmetDelitePizza;
ActiveDelegate<PizzaOrder> isStuffed = (pizza) => pizza.Kind == PizzaOptions.StuffedPizza;
return builder
    // .Field(nameof(PizzaOrder.Choice))
    .Field(nameof(PizzaOrder.Size))
    .Field(nameof(PizzaOrder.Kind))
    .Field("BYO.Crust", isBYO)
    .Field("BYO.Sauce", isBYO)
    .Field("BYO.Toppings", isBYO)
    .Field(nameof(PizzaOrder.GourmetDelite), isGourmet)
    .Field(nameof(PizzaOrder.Signature), isSignature)
    .Field(nameof(PizzaOrder.Stuffed), isStuffed)
    .AddRemainingFields()
    .Confirm("Would you like a {Size}, {BYO.Crust} crust, {BYO.Sauce}, {BYO.Toppings} pizza?", isBYO)
    .Confirm("Would you like a {Size}, {&Signature} {Signature} pizza?", isSignature, dependencies: new string[] { "Size", "Kind", "Signature" })
    .Confirm("Would you like a {Size}, {&GourmetDelite} {GourmetDelite} pizza?", isGourmet)
    .Confirm("Would you like a {Size}, {&Stuffed} {Stuffed} pizza?", isStuffed)
    .Build();
My big question here is: is it possible to start the conversation with the MessagesController that I used, and then, in the LobbyDialog, use an intent that fires a form and returns it? That is, start a form from a dialog? Or is it better to use dialog chains for that?
Because, from what I tried, it appears that I can ONLY do forms if they are called from the MessagesController class with the methods I described, that is, the way Microsoft sampled it in the PizzaBot.
I appreciate any help or input on the matter. Thanks for your time.
Sure you can! Instantiating a form from a dialog is a pretty common scenario. To accomplish that you can do the following inside the LUIS intent method:
var form = new FormDialog<YourFormModel>(
    <ExistingModel>,
    <TheMethodThatBuildsTheForm>,
    FormOptions.PromptInStart,
    result.Entities);
context.Call(form, <ResumeAfterCallback>);
Using the PizzaBot sample, it should look like:
var form = new FormDialog<PizzaOrder>(
    null,
    BuildForm,
    FormOptions.PromptInStart,
    result.Entities);
context.Call(form, <ResumeAfterCallback>);
In the ResumeAfterCallback you will usually get the result of the form, catch exceptions, and perform a context.Wait so the dialog can keep receiving messages. Below is a quick example:
private async Task ResumeAfterCallback(IDialogContext context,
    IAwaitable<PizzaOrder> result)
{
    try
    {
        var pizzaOrder = await result;
        // do something with the pizzaOrder
        context.Wait(this.MessageReceived);
    }
    catch (FormCanceledException<PizzaOrder> e)
    {
        string reply;
        if (e.InnerException == null)
        {
            reply = "You have canceled the operation. What would you like to do next?";
        }
        else
        {
            reply = $"Oops! Something went wrong :(. Technical Details: {e.InnerException.Message}";
        }
        await context.PostAsync(reply);
        context.Wait(this.MessageReceived);
    }
}