When the user types random words that don't match any intent, my agent, instead of recognizing them with the fallback intent, categorizes them under some particular intent.
The astonishing part is that such random words even get matched as a particular entity, and trust me, those random garbage words are not defined in my entity.
I am unable to find a solution for this :(
Check whether the "Default Fallback Intent" is present in your list of intents and enabled in your agent.
Also check all the training examples in your intents carefully.
For more info, follow this link:
https://docs.api.ai/docs/concept-intents#fallback-intent
Hi, so I have a problem.
In Dialogflow, when I get a response to end the chat, I would like to ask the user for ratings.
So I've created two intents, "endchat" and "endchat2".
They both have the same training phrases, but it appears only endchat2 (the most recently created intent) is being used.
How do I ensure that the chatbot randomly chooses one of the intents for a given response, instead of only ever using one? They have the same training phrases.
An alternate idea is in the attachments. The problem is that I want the custom payload to appear only after one of the text responses (text response #1), but not if the chatbot decides to use text response #2. This is why I made two separate intents, but that isn't helping, because the bot only ever uses one of them.
Remember, Intents represent what the user says and does, not how you respond to it. So there is no way to "randomly choose an Intent" to use when responding.
What you can do, however, is set up a webhook for that Intent and determine how you wish to respond to what the user says. In some cases you can thank them and end the conversation, while in others you can thank them, ask them the follow-up question, and set a Context so you can expect their reply.
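For instance, here is a rough fulfillment sketch (Node.js with the dialogflow-fulfillment library; the intent name "endchat", the context name, and the wording are just placeholders, not part of the original setup) that keeps a single Intent whose webhook randomly picks between the two closing responses, asking for a rating only in the first case:

// Rough sketch only: one "endchat" Intent whose webhook picks a closing response at random.
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function endChat(agent) {
    if (Math.random() < 0.5) {
      // Variant 1: thank the user, ask for a rating, and set a Context
      // so a follow-up Intent can match their reply.
      agent.add('Thanks for chatting! How would you rate this conversation from 1 to 5?');
      agent.setContext({ name: 'awaiting_rating', lifespan: 2 });
    } else {
      // Variant 2: just thank the user and end the conversation.
      agent.add('Thanks for chatting! Goodbye.');
    }
  }

  const intentMap = new Map();
  intentMap.set('endchat', endChat); // placeholder intent name from the question
  agent.handleRequest(intentMap);
};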
Having the same or similar training phrases in multiple intents is an anti-pattern in bot design. Ultimately it confuses the bot and leads to undefined behavior.
This should also trigger a warning under "Validation", something like "Multiple intents share training phrases which are too similar: ..." on the affected intents.
I am trying to transition between intents. I have a Welcome intent and, based on the user's response, I want to redirect either to the Search intent or to the CheckInternet intent.
I have set search and interconnection as output contexts in the Welcome intent and then used them as input contexts in the relevant intents, but I'm still not able to chain them together.
Unfortunately, I don't have much knowledge of Dialogflow yet, as I'm using it for the first time at a hackathon to check its capabilities. Any help would be great.
Intents in Dialogflow aren't nodes in a state machine. You don't "transition" between them. Intents reflect what the user says or does.
So, to take your example:
When they start the agent, the welcome Intent is triggered based on the welcome event.
If, at any point, they say "search", then the training phrases in the Search Intent might match, so the webhook or responses for it would be triggered.
Or, if they said "check", then the training phrases in the CheckInternet Intent might match, so the webhook or responses for it would be triggered instead.
If you need to limit under what circumstances these phrases would be accepted by an Intent, you can add a Context and make sure that Context is valid. But you usually only want to add that once you get it responding in the more general case.
You would have to add both Search and CheckInternet as "Follow-up Intents". To do so, create two new Intents and assign the contexts search and interconnection to them respectively as Input Context.
When the user says something that should lead to Search, set search as the output context; for the next utterance, the Search Intent will be considered (if a sample utterance matches).
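As a rough sketch of one way this could be wired up (Node.js fulfillment with the dialogflow-fulfillment library; the intent and context names are just the ones from the question), the Welcome handler can activate the contexts so the follow-up Intents become matchable on the next turn:

// Rough sketch: Welcome sets the contexts; Search and CheckInternet declare them as input contexts.
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function welcome(agent) {
    agent.add('Hi! Would you like to search, or check your internet connection?');
    // Output contexts: while these are active, the follow-up Intents
    // (which use them as input contexts) can match the next utterance.
    agent.setContext({ name: 'search', lifespan: 1 });
    agent.setContext({ name: 'interconnection', lifespan: 1 });
  }

  function search(agent) {
    agent.add('Okay, what would you like to search for?');
  }

  function checkInternet(agent) {
    agent.add('Checking your internet connection...');
  }

  const intentMap = new Map();
  intentMap.set('Default Welcome Intent', welcome);
  intentMap.set('Search', search);
  intentMap.set('CheckInternet', checkInternet);
  agent.handleRequest(intentMap);
};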
I hope that's clear enough; I'm happy to explain it in more detail, too. This is how I once configured a nicely working chatbot with 20+ Intents :)
I have created an Intent which outputs a context with a given parameter, let's say $myParam. The goal of this Intent is to catch a long sequence of numbers. I know there is a #sys.number-sequence entity but, since I'm using the Italian language, this kind of entity is not available. There is only #sys.number, but the numbers I'm expecting from the user are out of its range.
Under these restrictions, I chose #sys.any as the entity for my parameter $myParam.
Problem
When the user enters the digits on a real device, the Assistant might add some white spaces between them (as the user says them).
When the Assistant gets the sequence 111 222, the Intent is triggered and everything goes OK.
But when the Assistant gets the sequence 111222 (note the missing white space), it doesn't work.
I was expecting the #sys.any entity to catch all inputs, but it doesn't look like that's the case.
Do you know how to deal with this case?
My goal is to trigger the intent even when the Assistant catches the sequence of digits without spaces between, before, or after the digits.
Image:
https://ibb.co/ngBzGtx
I faced this problem recently and it was really annoying. Suddenly, for some reason I don't know, the Assistant's #sys.any entity was no longer catching numbers.
My use case is pretty much like yours: I have a parent Intent where I ask the user to enter a code (10-15 digits), and I have created a follow-up intent to handle the user's input. I'm using a language other than English, and the only entity the system offers for catching long numbers is #sys.any.
But it stopped working! I looked for a way to somehow force the Assistant into a specific intent, because now not only is the follow-up intent not triggered, but neither is the fallback intent. The Assistant just stays in the parent intent and crashes.
After spending some hours finding nothing useful, I tried a trick that worked for me.
When you create an intent, it has Normal priority by default. Changing the priority of the follow-up intent (the one I want triggered, with a #sys.any parameter holding the user's input) to High solved my issue. Now it works correctly, as it did before.
The #sys.any entity generally shouldn't be used to cover everything in the phrase. For cases like this, you should be able to use a Fallback Intent and then process the entire input from the user.
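As a rough illustration of that approach (Node.js fulfillment with the dialogflow-fulfillment library; the context name and the 10-15 digit range are just assumptions for this sketch), a Fallback Intent handler can read the raw query and pull the digit sequence out itself:

// Rough sketch: handle the digit sequence in a Fallback Intent instead of relying on #sys.any.
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function fallback(agent) {
    // agent.query is the raw user utterance, e.g. "111 222" or "111222".
    // Strip everything that isn't a digit so both forms are treated the same.
    const digits = agent.query.replace(/\D/g, '');

    if (digits.length >= 10 && digits.length <= 15) {
      agent.add(`Got it, your code is ${digits}.`);
      // Continue the flow, e.g. set a context for the next step.
      agent.setContext({ name: 'code_received', lifespan: 2, parameters: { code: digits } });
    } else {
      agent.add('Sorry, I did not catch the code. Please repeat the digits.');
    }
  }

  const intentMap = new Map();
  intentMap.set('Default Fallback Intent', fallback);
  agent.handleRequest(intentMap);
};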
I have created a pizza bot in Dialogflow. The scenario is like this:
Bot says: Hi What do you want.
User says : I want pizza.
If the user says "I want watermelon" or "I love pizza", then Dialogflow should respond with an error message and ask the same question again. After getting a valid response from the user, the bot should prompt the second question, like:
Bot says: What kind of pizza do you want.
User says: I want mushroom(any) pizza.
If the user gives some garbage data like "I want ice cream" or "I want good pizza", then again the bot has to respond with an error and ask the same question. I have trained the bot with the intents, but the problem is validating the user input.
How can I make this possible in Dialogflow?
A glimpse of training data & output
If you have already created different training phrases, then invalid phrases will typically trigger the Fallback Intent. However, if you're just using #sys.any as a parameter type, it will be filled with anything, so you should define narrower Entity Types.
In the example Intent you provided, you have a number of training phrases, but Dialogflow uses these training phrases as guidance, not as absolute strings that must be matched. From what you've trained it, it appears that phrases such as "I want .+ pizza" should be matched, so the NLU model might read it that way.
To narrow exactly what you're looking for, you might wish to create an Entity Type to handle pizza flavors. This will help narrow how the NLU model will interpret what the user will say. It also makes it easier for you to understand what type of pizza they're asking for, since you can examine just the parameters, and not have to parse the entire string again.
How you handle this in the Fallback Intent depends on how the rest of your system works. The most straightforward is to use your Fulfillment webhook to determine what state of your questioning you're in and either repeat the question or provide additional guidance.
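For example (a minimal sketch only, using the dialogflow-fulfillment library; the context names, intent names, and wording are assumptions, not part of the original bot), the webhook can track which question was asked last via a context and reprompt from the Fallback Intent:

// Minimal sketch: reprompt from the Fallback Intent based on a state context.
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function askWhat(agent) {
    agent.add('Hi! What do you want?');
    // Remember which question we are waiting on.
    agent.setContext({ name: 'awaiting_order', lifespan: 2 });
  }

  function askPizzaType(agent) {
    agent.add('What kind of pizza do you want?');
    agent.setContext({ name: 'awaiting_pizza_type', lifespan: 2 });
  }

  function fallback(agent) {
    // Look at the current state to repeat the right question.
    if (agent.getContext('awaiting_pizza_type')) {
      agent.add("Sorry, that's not a pizza we make. What kind of pizza do you want?");
    } else if (agent.getContext('awaiting_order')) {
      agent.add('Sorry, we only serve pizza here. What do you want?');
    } else {
      agent.add("Sorry, I didn't get that.");
    }
  }

  const intentMap = new Map();
  intentMap.set('Default Welcome Intent', askWhat);
  intentMap.set('order.pizza', askPizzaType); // hypothetical intent matching "I want pizza"
  intentMap.set('Default Fallback Intent', fallback);
  agent.handleRequest(intentMap);
};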
Remember, also, that the conversation could go something like this:
Bot says: Hi What do you want.
User says : I want a mushroom pizza.
They've skipped over one of your questions (which wasn't necessary in this case). This is normal for a conversational UI, so you need to be prepared for it.
The type of pizza (e.g. mushroom, chicken, etc.) should be a custom entity (a rough sketch of creating one through the API follows the list below).
Then, in your intent, define the training phrases as you have, but make sure that the entity is marked and that you also add a template for the user's response:
There are 3 main things you need to note here:
The entities are marked
A template is used. To create a template, click on the quote symbol in the training phrases, as the image below shows. Make sure that, again, your entity is used here.
Make your pizza type a required parameter. That way the bot won't advance to the next question unless a valid answer is provided.
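If you prefer to create the entity programmatically rather than in the console, here's a rough sketch (assuming the official @google-cloud/dialogflow Node.js client; the project ID, entity name "pizza_type", and values are hypothetical examples):

// Rough sketch: create a custom "pizza_type" entity via the Dialogflow ES API.
const dialogflow = require('@google-cloud/dialogflow');

async function createPizzaTypeEntity() {
  const client = new dialogflow.EntityTypesClient();
  const parent = 'projects/my-project-id/agent'; // hypothetical project ID

  const [entityType] = await client.createEntityType({
    parent,
    entityType: {
      displayName: 'pizza_type',
      kind: 'KIND_MAP', // each entry has a reference value plus synonyms
      entities: [
        { value: 'mushroom', synonyms: ['mushroom', 'funghi'] },
        { value: 'chicken', synonyms: ['chicken'] },
        { value: 'margherita', synonyms: ['margherita', 'margarita'] },
      ],
    },
  });
  console.log(`Created entity type: ${entityType.name}`);
}

createPizzaTypeEntity().catch(console.error);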
One final piece of advice: put some more effort into designing the interaction and the responses. Greeting your users with "What do you want?" isn't the best experience. Also, with your approach you're trying to force them down one specific path, but this is not how a conversational app should work. You can find more about this here.
A better experience would be to greet the users, explain what they can do with your app and let them know about their options. Example:
- Hi, welcome to the Pizza App! I'm here to help you find the perfect pizza for you [note: here you need to add any other actions your bot can perform, like tracking an order, for instance]! Our most popular pizzas are mushroom, chicken and margherita. Do you already know what you want, or do you need help?
I am facing an issue whereby words that do not match any intent are assumed to belong to the intent with the most labeled utterances.
Example:
Intent A consists of utterances such as Animals
Intent B consists of utterances such as Fruits
Intent C consists of utterances such as Insects
Intent D consists of utterances such as People Name
Desired: If the random word(s) do not fit any of the LUIS intents, they should fall into the None intent. For example, if a word such as "emotions" or "clothes" is entered, it should match the "None" intent.
Actual: When the user types random word(s), they match the LUIS intent with the highest number of labeled utterances. If a word such as "emotions" is entered, it matches intent A, since intent A has the highest number of labeled utterances.
Please advise on the issue.
Set a score threshold below which your app won't show any response to the user (or will show a "sorry, I didn't get you" message instead). This avoids responding to users with anything LUIS is unsure about, which usually takes care of a lot of off-topic input too.
I would suggest setting your threshold between 0.3 and 0.7, depending on the seriousness of your subject matter. This is not a configuration option in LUIS; rather, in your code you just do:
if (result.score >= 0.5) {
  // show response based on intent
} else {
  // ask user to rephrase
}
On a separate note, it looks like your intents are very imbalanced. You want to try to have roughly the same number of utterances for each intent, ideally between 10 and 20.
Without more details on how you've built your language model, the most likely underlying issue is that you don't have enough utterances in each intent, with enough variation showing the different ways utterances could be phrased for that particular intent.
And by variation I mean different lengths of the utterance (word count), different word order, tenses, grammatical correctness, etc. (docs here)
And remember each intent should have at least 15 utterances.
Also, as stated in the best practices, did you make sure to include example utterances in your None intent as well? Best practices state that you should have 1 utterance in None for every 10 utterances in the other parts of your app.
Ultimately: build your app so that your intents are distinct enough, with varied example utterances built into each intent, so that when you test other utterances LUIS will be more likely to match them to your distinct intents; and if you enter an utterance that doesn't follow any pattern or context of your distinct intents, LUIS will know to route it to your fallback "None" intent.
If you want more specific help, please post the JSON of your language model.