Is it possible for users to choose between a few intents for Bot Framework Composer (LUIS)?

I'm trying to get something like this Stack Overflow question but within Bot Framework Composer. In Power Virtual Agents, if the bot is not sure about which 'Topic' aka 'Intent' is the right one, it gives the user a few options. How can I achieve that in Bot Framework Composer, or at least extend the bot with code?

In your dialog, create a trigger for "Duplicated intents recognized".
The "Duplicated intents recognized" trigger automatically sets some turn-scoped variables that you can use to customize how duplicate intent recognition is handled.
From here I'll refer to the Enterprise Assistant template:
conversation.lastAmbiguousUtterance = turn.activity.text
You don't strictly need this, but the Enterprise Assistant template sets it in case you want to use the user's input in the bot's response.
dialog.candidates =
=take(
    sortByDescending(
        where(
            flatten(select(turn.recognized.candidates, x,
                if (x.intent == "ChooseIntent", x.result.candidates, x))),
            c,
            not(startsWith(c.intent, "DeferToRecognizer_QnA")) && c.score > turn.minThreshold),
        'score'),
    turn.maxChoices)
This organizes the duplicate intents into a list, dialog.candidates, by:
sorting the recognized intents by score in descending order (so the first element is the intent with the highest recognition score)
filtering out intents starting with "DeferToRecognizer_QnA", which are automatically generated by cross-training
filtering out intents whose scores don't meet the minimum threshold you set (turn.minThreshold)
keeping only up to the maximum number of choices you set (turn.maxChoices)
From here you can set your logic so that if count(dialog.candidates) = 0 you emit an UnknownIntent event, or you emit a recognizedIntent event for =first(dialog.candidates).result if dialog.candidates has at least one result.
Or you can customize the logic however you want, which in your case means putting dialog.candidates into an adaptive card, or a set of suggested actions, so the user can choose which intent they meant (a sketch of one way to do this follows).
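As a sketch of that last step, one option in Composer is a "Send a response" action with a structured language generation (LG) template that surfaces the candidates as suggested actions. The template below is my own illustration, not part of the Enterprise Assistant template, and the prompt text is made up:

[Activity
    Text = I'm not sure which of these you meant. Can you pick one?
    SuggestedActions = ${foreach(dialog.candidates, c, c.intent)}
]

When the user taps a suggestion, you can look up the matching entry in dialog.candidates and emit a recognizedIntent event for its result, as described above.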

Related

Using #sys.any Entity for my Chatbot / Assistant Service (Design Issue)

I am trying to develop a chatbot / Google Assistant service for a food ordering service, and I currently have it designed this way:
There is a dynamic menu list that is fetched through an API every time the user asks for the menu (new order intent)
Then the menu category name list is displayed
Then the user sends the category name
A follow-up intent (selected category intent) catches it and fetches the food items in that category
Then the user sends the food item name
Then the next follow-up intent (selected item intent) catches it and asks for the quantity.
The problem is that since it is a dynamic list, I cannot define a custom entity, do slot filling, and train on it, so I am currently using the #sys.any entity: I get the category name from the user and check from the webhook whether it is present in the menu list. If it is present, I display the item list; if not, I prompt the user to check the spelling or enter a correct menu category. But by then the "selected category intent" has already been consumed, so whatever I type next is taken as an "item name" instead of a "category".
I am preventing this by matching the output context from the "selected category intent" fulfillment with the input context of the "selected item intent". But there are problems with this approach: once a category is selected I cannot go back and change it, and it only works 5 times (the lifespan of the parent intent's context) before falling through to the fallback intent.
I know this is really bad design, but is there any way to make this better?
Is there any way to say: if the user enters a wrong category name, don't consume this intent yet; go back and get the right category name?
Or if the user selects a category or item by mistake, is there any way to go back to that previous intent and do it again?
A few observations that may help:
Use Session Entities
Since you are loading the categories and menu dynamically, you can also set Entities for these dynamically. To do this, you'll use Dialogflow's API to create Session Entities that modify the Entity you have defined. Then you can train your Intent with phrases that use this Entity, but you'll dynamically modify the Entity when they start the conversation.
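As a rough sketch with the @google-cloud/dialogflow Node.js client (the project ID, session ID, and the 'category' entity type name are placeholders, and 'category' must already be defined as an entity in the agent):

const dialogflow = require('@google-cloud/dialogflow');

async function setMenuCategories(projectId, sessionId, categories) {
  const client = new dialogflow.SessionEntityTypesClient();
  await client.createSessionEntityType({
    parent: client.projectAgentSessionPath(projectId, sessionId),
    sessionEntityType: {
      // Overrides the agent-level 'category' entity for this session only.
      name: client.projectAgentSessionEntityTypePath(projectId, sessionId, 'category'),
      entityOverrideMode: 'ENTITY_OVERRIDE_MODE_OVERRIDE',
      entities: categories.map((c) => ({ value: c, synonyms: [c] })),
    },
  });
}

Call this with the category list you fetched from your menu API when the conversation starts, and the user's free-form category replies will be matched against the current menu.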
Don't use Followup Intents
Followup Intents are useful in very limited circumstances. Once you start chaining Followup Intents, it is usually a sign that you're trying to force the conversation to go a particular way, and you'll run into problems when the conversation needs to take a slight turn.
Instead, go ahead and use top-level Intents for everything you're trying to do.
"But," I hear you asking, "How do I then make sure I handle the category selection before the menu selection?"
Well, to do that you can...
Use Contexts
You were on the right track when you said you were matching the Output Context. You can not only match it, but also control which Contexts are set from your webhook. So you can use Input Contexts to narrow which Intent is matched at any stage of your conversation, and set the Output Contexts only in your webhook fulfillment to determine which Contexts are valid at each stage of the conversation. You can clear Contexts that are no longer valid by setting their lifespan to 0.
So under this scheme:
When you tell them the categories, set the "expectCategory" context.
The "selected category" Intent is set to require the "expectCategory" Input Context.
In the handler for this Intent, you'll tell them the menu, set the "expectMenu" context, and clear the "expectCategory" context. (A sketch of this handler follows.)
Most of all, remember...
Intents represent what the user says, and not how you react to what they say.

Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery

I've developed an Alexa skill and now I am in the process of porting it over to a Google action. At the center of my Alexa skill, I use AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for google actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
Afterwards, the selected text in the sample phrase is highlighted and mapped to a parameter of type #sys.any.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intents would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or, from the list of Intents, select the three-dot menu next to the create button and then select "Create Fallback Intent".
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
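For illustration, a minimal Actions SDK fulfillment with the actions-on-google Node.js library might look like this sketch; the reply text and the Express route are my own assumptions:

const { actionssdk } = require('actions-on-google');
const express = require('express');

const app = actionssdk();

// MAIN fires when the user invokes the Action.
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi! Say anything and I will process the raw text myself.');
});

// TEXT fires on each following turn with the raw speech-to-text result.
app.intent('actions.intent.TEXT', (conv, input) => {
  // `input` is the entire transcribed utterance; run your own NLP/NLU here.
  conv.ask(`You said: ${input}`);
});

express().use(express.json()).post('/fulfillment', app).listen(3000);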
Most platform-based NLU systems are built on 3 main parameters:
1. Intent - where all sorts of logic is written
2. Entity - what the intent works on
3. Response - what the user will hear after all the processing is done
There is another important parameter called the webhook, which is used to interact with an external API.
The basic functionality is the same across platforms. I have already used Dialogflow (Google developed this platform; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember one thing: to get precise results, providing proper training phrases is very important, as the output hugely depends on the sample input.

Dialogflow required parameters

In the Dialogflow chatbot I am creating, I have a scenario where a user can ask "what are the available vacancies you have", or they can directly ask something like "I want to join as a project manager". Both are handled by the same intent, called "jobs", and the position they want is a required parameter. If the user doesn't mention the position (e.g. "what are the available vacancies you have"), the bot lists all available vacancies with the minimum qualifications needed for each and asks the user to pick one (done with slot filling via the webhook). Since the intent is then waiting for the parameter, when the user enters the position they like, the bot provides the details for that position. But even when the user tries to ask for something else (trying to trigger another intent, or they don't have enough qualifications for the vacancy, or the job they want is not in the list), the bot asks again and again for the position, because that required parameter (the job position) has not been provided.
How do I trigger another intent while the chatbot is waiting for a required parameter?
There is a separate intent for "The job I want is not here". If I type exactly the same phrase I used to train that intent, it works, but if it is slightly different, it won't.
Try this:
Make your parameter "NOT" required by unchecking the required checkbox.
Keep the webhook for slot filling.
In the webhook, keep track of whether the parameter has been provided.
When the intent is triggered, check programmatically for the parameter and, if it is missing, ask the user to provide it by playing with the contexts (a sketch follows).
Since there is no "required" parameter as far as Dialogflow is concerned, if the user says something else the bot will not repeatedly ask them to provide it.
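Here is a rough illustration of those steps with the dialogflow-fulfillment library; the intent, parameter, and context names are made up for the example:

const { WebhookClient } = require('dialogflow-fulfillment');

function jobsHandler(agent) {
  // 'position' is no longer marked required in the Dialogflow console.
  const position = agent.parameters.position;
  if (!position) {
    // Parameter missing: prompt for it ourselves, and set a context so the
    // user's next answer can be routed back into this flow.
    agent.context.set({ name: 'awaiting_position', lifespan: 2 });
    agent.add('We currently have openings for project manager, developer, and QA. Which position are you interested in?');
    return;
  }
  agent.add(`Here are the details for the ${position} position: ...`);
}

Because the parameter is optional, an utterance like "the job I want is not here" is free to match its own intent instead of being swallowed by the slot-filling prompt.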
Let me know if this helped.

Parameter value filling with quick responses in messenger

I have created a bot using Dialogflow (api.ai) and integrated it with Facebook Messenger. I want to get parameter values from the user, like city and date (today, tomorrow), by using the quick reply feature of Messenger, where the user is presented with select-box-like options and can tap on one of them. The required parameter then receives the user-tapped value, saving the user from typing it manually.
I cannot find anywhere in the documentation any way to fill parameter values (slots) using quick replies. There is an option to give quick replies in the response section, but the response section is invoked on fulfillment, and if I take user input in the response, then I have to create another follow-up intent to process the user's response further, because the current intent gets fulfilled after the response.
If I add quick replies in the response section, then I have to create multiple levels of follow-up intents. For example: I take the city input in one intent and give the user two options (like New York, Delhi). Then I have to create two follow-up intents, one for handling each reply (New York and Delhi), and then for each follow-up intent I have to create more follow-up intents to get more parameter inputs.
This can get pretty complex when more levels are added! Amazon Lex has this feature of filling slots using quick replies. Can't I just fill up parameter values directly using quick replies like Lex does?
You don't have to go that far. There is a simpler way, using entities and prompts in dialogflow.com. The workflow can be: Weather (intent) -> quick reply (New York/Delhi) -> City (intent, using an entity for the city) -> quick reply (Today/Tomorrow) -> separate intents for today and tomorrow, since those will have different responses. You don't need to create different intents unless you have different responses: the "User says" phrases can carry different parameters, for which you can also define different prompts. This again reduces the complexity of creating follow-up intents. Let me know if you need more explanation on this.
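The reason this works is that a tapped quick reply is sent back as the user's text message, so it is matched against your entities just like typed input and can fill the parameter directly. A hedged sketch using the Payload class from dialogflow-fulfillment (the quick_replies JSON is Messenger's format; the @city entity and the prompt are examples):

const { WebhookClient, Payload } = require('dialogflow-fulfillment');

function weatherHandler(agent) {
  // Button titles should match values (or synonyms) of your @city entity,
  // so the tapped reply fills the city parameter on the next turn.
  const quickReplies = {
    text: 'Which city?',
    quick_replies: [
      { content_type: 'text', title: 'New York', payload: 'New York' },
      { content_type: 'text', title: 'Delhi', payload: 'Delhi' },
    ],
  };
  agent.add(new Payload(agent.FACEBOOK, quickReplies));
}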

BotFramework : is it possible to combine LUIS intents and normal intents?

I'm currently taking my first steps into chatbots with the Microsoft Botframework for NodeJS.
I've so far seen 'normal' intents and LUIS.ai intents.
Is it possible to combine the two?
I've had an .onDefault intent that wasn't a LUIS one and a LUIS intent, but no matter what the input was, it always returned the output of the LUIS intent.
Could someone give me a quick example or point me to one?
Thanks in advance
It is possible to combine LUIS intents and normal intents. To do this we'll use two IntentRecognizers: LuisRecognizer and RegExpRecognizer.
let pizzaRecognizer = new builder.LuisRecognizer('YOUR-LUIS-MODEL');
let mathRecognizer = new builder.RegExpRecognizer('MathHelp', /(^mathhelp$|^\/mathhelp$)/i);
Now let's create our IntentDialog and configure its options...
let intents = new builder.IntentDialog({ recognizers: [mathRecognizer, pizzaRecognizer], recognizeOrder: 'series' })
By combining our pizzaRecognizer and mathRecognizer into a list, we can pass this list to the 'recognizers' property so the IntentDialog uses both recognizers. The last property we're going to fiddle with is 'recognizeOrder'; its default is 'parallel'. By changing the value to 'series', the IntentDialog will trigger our RegExpRecognizer 'mathRecognizer' first. If a match with a score of 1.0 exists, the LuisRecognizer will not be used, saving a wasted LUIS endpoint hit.
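For completeness, here's roughly how you'd register handlers and wire the IntentDialog into a bot; the 'OrderPizza' intent name is an assumption about what the LUIS model returns, and `bot` is your UniversalBot instance:

// Handle the regex-matched intent.
intents.matches('MathHelp', (session) => {
  session.send('Here is some math help...');
});

// Handle an intent from the LUIS model (name assumed for this example).
intents.matches('OrderPizza', (session) => {
  session.send("Sure, let's order a pizza.");
});

// Fires when neither recognizer produces a match.
intents.onDefault((session) => {
  session.send("Sorry, I didn't understand that.");
});

bot.dialog('/', intents);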
I would like to reiterate: if you are trying to use RegExpRecognizers to speed up a chatbot's responses and reduce the number of LUIS calls your chatbot makes, you need to pass those recognizers first in your recognizers list, and you need to set recognizeOrder to 'series'. Without setting the order to 'series', your chatbot will continue to perform LUIS calls. Also note that a matched intent must have a score of 1.0 to prevent the other recognizers from being employed. To encourage perfect matches, you should use the RegExp anchors ^ and $ to define clear start and end points for your patterns to match against (see mathRecognizer for an example).
If accuracy is your primary priority, then you should not change the value of 'recognizeOrder'; the default 'parallel' will employ all the recognizers at once.
I've built an example here for you to examine. I included the Luis model as well, named LuisModel.json.
