My LUIS model used to return all intent scores when queried. However, now it only returns the top intent, entities and sentiment analysis.
I have "Include all predicted intent scores" checked, and after publishing I am still only getting back topScoringIntent, entities, and sentimentAnalysis.
We're working on a new feature that is going to take advantage of the score breakdown of all intents and this is blocking us.
"Include all predicted intent scores" is merely a UI setting; you can see its effect on the URL value in the "Keys and Endpoints" section, in the "Endpoint" column, as shown below.
If you would like to get all intents and scores, you need to use the following URL when executing an HTTP GET against LUIS:
https://yourlocation.api.cognitive.microsoft.com/luis/v2.0/apps/yourAppId?subscription-key=yourSubscriptionKey&verbose=true&timezoneOffset=-360&q=Test
If you omit verbose=true from the query string, or set its value to false, you get only topScoringIntent (plus sentiment, if sentiment detection is turned on).
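As a sketch, the endpoint URL can be assembled like this in Node.js (the region, app ID, and key below are placeholders):

```javascript
// Build a LUIS v2.0 endpoint URL that returns all intent scores.
// The region, app ID, and subscription key here are placeholders.
function buildLuisUrl({ region, appId, subscriptionKey, query, verbose = true }) {
  const params = new URLSearchParams({
    'subscription-key': subscriptionKey,
    verbose: String(verbose), // verbose=true => all intents, not just the top one
    q: query,
  });
  return `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}?${params}`;
}

const url = buildLuisUrl({
  region: 'westus',
  appId: 'your-app-id',
  subscriptionKey: 'your-key',
  query: 'Test',
});
console.log(url);
```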
I had my agent working fine in the Dialogflow console, hitting on my intent training phrases where one of the entity/parameter matchings was a staff name entity. I would run it through the console and it would display the action parameter and value correctly. It's been around a week, and I went back to the console to add additional training phrases. I simply copied one of the training phrases and pasted it into the "Try it now" field, and now the action no longer hits on the staff parameter.
Any ideas on why my training phrases can hit on the parameter, but in "Try it now" the exact same phrase cannot?
I tried making changes to the entity and re-running training.
I added a new training phrase and it did automatically pick up the staff entity. I saved it and allowed training to complete. I then pasted the exact training phrase into "Try it now" and again got no hit on the action parameter.
Dialogflow console screenshot
The Diagnostic info basically just showed what the console "Try it now" window showed.
I'm trying to build a basic "question/answer" app in Actions using DialogFlow. Right now I have two intents:
Intent 1: User says "Ask me a question" and the intent responds "Tell me about yourself"
Intent 2: I'd like to capture the user response to "tell me about yourself", but frankly there's no way to write enough training phrases to cover it.
I tried following this suggestion, and having Intent 1 send an output context called save_response and Intent 2 has an input context of save_response. Then for the training phrase I used #sys.any:save_response
When I try this action, it just invokes the default fallback intent every time. Thoughts on where I might be going wrong?
You need to create two intents. In the first intent, your training phrase would be "Ask me a question", the output context will be save_response, and the response will be the question you want to ask the user.
Then, in intent 2, you need to do the following:
Set the input context to save_response, so that it will only be triggered when this context is present.
Go to the actions and parameters section and create a parameter named answer, with entity type #sys.any.
Then go to the training phrases section, add any training phrase, highlight all of it, and select the parameter you just created.
After that, your training phrases and entity section will look something like the image below.
Save the intent and you are done.
Hope it helps.
In general, having an Intent with a training phrase that consists only of #sys.any may not always work as you expect.
Better would be to have a Fallback Intent that has the Input Context set to make sure you only capture things in that state (save_response in your case) and then to use the full text captured in your fulfillment.
When doing it this way, you do not need the "Intent 2" you described - or rather, this would be a Fallback Intent that you create in the Dialogflow UI. The Fallback Intent is triggered if no other Intent would match what the user has said.
To create a Fallback Intent, select the three dots in the upper right of the Dialogflow UI, then select "Create Fallback Intent".
The Fallback Intent editor is very similar to the normal Intent editor. The biggest difference is that the phrases you enter (and you don't need to enter any) will explicitly not match this Intent, and there are no parameters. Other aspects (the name, the Incoming Context, turning on fulfillment) are the same.
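On the fulfillment side, the full user text is available in the webhook request. A minimal sketch, assuming a Dialogflow v2 webhook request body (the helper name is mine):

```javascript
// Hypothetical helper: pull the raw user utterance out of a Dialogflow v2
// webhook request when our context-scoped fallback intent fires.
function extractAnswer(webhookRequest) {
  const qr = webhookRequest.queryResult || {};
  const contexts = qr.outputContexts || [];
  const inSaveResponse = contexts.some((c) => c.name.endsWith('/contexts/save_response'));
  if (!inSaveResponse) return null; // not in the "waiting for an answer" state
  return qr.queryText || null;      // the full text the user said
}

// Example request body (trimmed to the fields used above).
const body = {
  queryResult: {
    queryText: 'I grew up in a small town and love hiking',
    outputContexts: [{ name: 'projects/p/agent/sessions/s/contexts/save_response' }],
  },
};
console.log(extractAnswer(body));
```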
I have been working lately with Dialogflow to make chatbots to do some simple tasks. For instance with webhooks and youtube api where the user ask to show him a video and then the bot just answers with the youtube video url.
E.g.:
USER SAYS
Show me Neil young harvest moon
AGENT SAYS
Here you go : https://www.youtube.com/watch?v=n2MtEsrcTTs
I do this by using a custom entity I called "YoutubeQuery". I checked "Allow automated expansion" and unchecked "Define synonyms", then I just added two values: "Kavinsky Night Call" and "Indigo Night Tamino".
In my Intent I just made a couple of training phrases like these:
And everything works.
Now my issue is with a new Agent which I called Orders
I just want to get order IDs from the Firestore database, but before getting there I'm running into kind of a huge problem.
I defined the order ID entity just like the one for the YoutubeQuery, and added some example order IDs. I want them all to start with DX and have four digits after, e.g. DX0001, DX0009, DX9999.
Afterwards I made the intent
Now, unless I give the EXACT order IDs from the training phrases or the ID examples I defined in the entity, it always gives me a response with an empty parameter OrderID.
I start my intent by saying "my order", then I get prompted with "What is your ID?"
So when I give an ID that has not been used in the training phrases of the intent, I get an empty value in the parameters, like this:
But when I give an ID that has been used in the training phrases, for instance the first one, DX0808, it does work...
How can I make this work without adding all the possible order IDs, ranging from DX0001 to DX9999, to the training phrases or the entity?
I mean, it does work for my YouTube query; I can put anything there and it does "catch" the value. Any help please?
It looks like the required parameter is the problem here. My suggestion would be to:
Create an intent that gets the order ID in one sentence without a reprompt (turn off "required" on the order ID), where the ID is always present, e.g. "my id is DX0402". Include training phrases where only the ID is provided, like "DX3932", as in the example below:
Set up another intent for the scenario where the customer wants to provide the ID but it is missing, e.g. the customer says "my id", and make your bot ask for the ID in response, e.g. "OK, give me your ID".
If you do this, then when the user doesn't provide the ID, intent 2 will be triggered, and after the ID is provided you'll trigger intent 1.
Hope this makes sense.
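You can also validate the captured value in your fulfillment code, so you never have to enumerate IDs. A minimal sketch (the pattern and function name are my assumptions, based on the DX-plus-four-digits format described above):

```javascript
// Validate that a captured parameter value looks like an order ID:
// the letters "DX" followed by exactly four digits.
const ORDER_ID_PATTERN = /^DX\d{4}$/i;

function isValidOrderId(value) {
  return typeof value === 'string' && ORDER_ID_PATTERN.test(value.trim());
}

console.log(isValidOrderId('DX0402'));   // → true (well-formed ID)
console.log(isValidOrderId('my order')); // → false (not an ID)
```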
I created a chatbot which informs the user about the names of the members of my (extended) family and about where they are living. I created a small database with MySQL which stores these data, and I fetch them with a PHP script whenever appropriate, depending on the user's interaction with the chatbot.
For this reason, I have created two intents additionally to the Default Fallback Intent and to the Default Welcome Intent:
Names
Location_context
The first intent ('Names') is trained by phrases such as 'What is the name of your uncle?' and has an output context. The second intent ('Location_context') is trained by phrases such as 'Where is he living?', 'Where is he based?', 'Where is he located?' 'Which city does he live in?' etc and has an input context (from 'Names').
In general, this basic chatbot works well for what it is made for. However, my problem is that (after the 'Names' intent is triggered) if you ask something nonsensical such as 'Where is he snowing?', the chatbot will trigger the 'Location_context' intent and respond (as defined) that 'Your uncle is living in New York'. Let me also mention that, as I have structured the chatbot so far, these kinds of responses get a score higher than 0.75, which is pretty high.
How can I make my chatbot trigger the Default Fallback Intent for these nonsensical questions (or even for more reasonable questions, such as 'Where is he eating?', which are nonetheless not exactly related to the 'Location_context' intent), and not trigger intents such as 'Location_context' which simply contain some similar keywords, such as the word 'Where'?
Try playing around with the ML Classification Threshold in your agent settings (Settings > ML Settings). By default it comes with a very low score (0.2), which lets through fairly weak matches.
Define the threshold value for the confidence score. If the returned value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered.
You can see the score for your query in the JSON response:
{
  "source": "agent",
  "resolvedQuery": "Which city does he live at?",
  "metadata": {
    "intentId": "...",
    "intentName": "Location_context"
  },
  "fulfillment": {
    "speech": "Your uncle is living in New York",
    "messages": [{
      "type": 0,
      "speech": "Your uncle is living in New York"
    }]
  },
  "score": 0.9
}
Compare the scores between right and wrong matches and you will have a good idea of which confidence score is the right threshold for your agent.
After changing this setting, let the agent train, try again, and adjust until it meets your needs.
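The decision the threshold controls can be sketched as follows (a simplified illustration of the behavior, not Dialogflow's actual code):

```javascript
// Decide whether a match should stand or fall through to the fallback
// intent, given the score from the JSON response and the agent's
// ML classification threshold.
function resolveIntent(response, threshold) {
  return response.score >= threshold
    ? response.metadata.intentName
    : 'Default Fallback Intent';
}

const sample = { metadata: { intentName: 'Location_context' }, score: 0.9 };
console.log(resolveIntent(sample, 0.2)); // matches with the default threshold
console.log(resolveIntent({ metadata: { intentName: 'Location_context' }, score: 0.7 }, 0.8)); // falls back
```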
Update
For queries that still get a high score, like 'Where is he cooking?', you could add another intent, a custom fallback, to handle those false positives, perhaps with a custom entity NonLocationActions, and use template mode (#) in the user expressions:
where is he #NonLocationActions:NonLocationActions
which city does he #NonLocationActions:NonLocationActions
These queries will then get a score of 1 in the new custom fallback, instead of 0.7 in the location intent.
I am working on a chatbot using dialogflow and am getting similar problems.
Our test manager invented the 'Sausage Test', where she replaces certain words in the question with the word 'sausage', and our bot fell apart! Even with a threshold of 0.8 we still regularly hit issues where intents fire for nonsensical sentences, and with an enterprise-level chatbot that gives out product installation advice we could not afford to get this wrong.
We found that in some cases we were getting max confidence levels (1) for clearly dodgy 'sausaged' input.
The way we got around this issue is to back all the answers onto an API and use the confidence score in conjunction with other tests. For example, we introduced regular-expression tests to check for keywords in the question, together with parameter matching (making sure that key entity parameters were also being passed through in the data from Dialogflow).
More recently we have also started to include a low-confidence sentence at the start of the reply, i.e. 'I think you are asking about XYZ, but if not please rephrase your question. Here is your answer.' We do this when all our extra tests fail and the score falls between 0.8 and 0.98.
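Such a belt-and-braces check might look like the sketch below (the thresholds and keyword test are illustrative, and the function name is mine; it is not the actual production code described above):

```javascript
// Combine the NLU confidence score with an independent keyword check:
// answer outright, answer with a low-confidence preamble, or re-prompt.
function vetAnswer({ score, query, requiredKeywords }) {
  const hasKeyword = requiredKeywords.some((kw) =>
    new RegExp(`\\b${kw}\\b`, 'i').test(query));
  if (hasKeyword && score >= 0.8) return 'answer';
  if (score >= 0.8 && score < 0.98) return 'answer-with-low-confidence-preamble';
  return 'ask-to-rephrase'; // extra tests failed, or score too low
}

console.log(vetAnswer({
  score: 0.99,
  query: 'How do I install the product?',
  requiredKeywords: ['install'],
}));
```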
I'm currently taking my first steps into chatbots with the Microsoft Botframework for NodeJS.
I've so far seen 'normal' intents and LUIS.ai intents
Is it possible to combine the two?
I've had an .onDefault intent that wasn't a LUIS one and a LUIS intent but no matter what the input was it always returned the output of the LUIS intent.
Could someone give me a quick example or point me to one?
Thanks in advance
It is possible to combine LUIS intents and normal intents. To do this we'll use two IntentRecognizers; LuisRecognizer and RegExpRecognizer.
let pizzaRecognizer = new builder.LuisRecognizer('YOUR-LUIS-MODEL');
let mathRecognizer = new builder.RegExpRecognizer('MathHelp', /(^mathhelp$|^\/mathhelp$)/i);
Now let's create our IntentDialog and configure its options...
let intents = new builder.IntentDialog({ recognizers: [mathRecognizer, pizzaRecognizer], recognizeOrder: 'series' })
By combining our pizzaRecognizer and mathRecognizer into a list, we can pass this list to the 'recognizers' property so the IntentDialog uses both recognizers. The last property we're going to fiddle with is 'recognizeOrder'; its default is 'parallel'. By changing the value to 'series', the IntentDialog will trigger our RegExpRecognizer 'mathRecognizer' first. If a match with a score of 1.0 exists, the LuisRecognizer will not be used, saving a wasted LUIS endpoint hit.
I would like to reiterate: if you are trying to use RegExpRecognizers to speed up a chatbot's response and reduce the number of LUIS calls your chatbot makes, you need to pass those recognizers first in your 'recognizers' list, then set 'recognizeOrder' to 'series'. Without setting the order to 'series', your chatbot will continue to perform LUIS calls. Also note that a matched intent must have a score of 1.0 to prevent the remaining recognizers from being employed. To encourage perfect matches, use the RegExp anchors ^ and $ to define clear start and end points for your patterns to match against (see mathRecognizer for an example).
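The effect of the anchors can be checked directly, with plain regular expressions and no Bot Framework involved:

```javascript
// The anchored pattern from mathRecognizer only yields a perfect match
// on the exact utterance "mathhelp" (or "/mathhelp"), case-insensitively.
const pattern = /(^mathhelp$|^\/mathhelp$)/i;

console.log(pattern.test('mathhelp'));             // → true: exact utterance
console.log(pattern.test('/MathHelp'));            // → true: slash-command form
console.log(pattern.test('I need some mathhelp')); // → false: extra words, LUIS gets a turn
```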
If accuracy is your primary priority, then you should not change 'recognizeOrder' from its default, which will employ all the recognizers at once.
I've built an example here for you to examine. I included the Luis model as well, named LuisModel.json.