Dialogflow - Google Assistant: #sys.any entity not catching sequence of digits - dialogflow-es

I have created an Intent which outputs a context with a given parameter name, let's say $myParam. The goal of this Intent is to catch a long sequence of digits. I know there is a #sys.number-sequence entity, but I'm using the Italian language and that entity is not available for it. There is only #sys.number, and the numbers I'm expecting from the user are out of its range.
Under these restrictions, I chose #sys.any as the entity for my parameter $myParam.
Problem
When the user enters the digits on a real device, the Assistant might add some white spaces between them (as the user says them).
When the Assistant gets the sequence 111 222, the Intent is triggered and everything goes OK.
But when the Assistant gets the sequence 111222 (note the missing white space), it doesn't work.
I was expecting the #sys.any entity to catch all inputs, but it doesn't look like it does.
Do you know how to deal with this case?
My goal is to trigger the intent even when the Assistant catches the sequence of digits without spaces between, before, or after them.
Image:
https://ibb.co/ngBzGtx

I faced this problem recently and it was really annoying. Suddenly, for some reason I don't know, the Assistant's #sys.any entity stopped catching numbers.
My use case is pretty much like yours: I have a parent Intent where I ask the user to enter a code (10-15 digits), and I have created a follow-up intent to handle the user's input. I'm using a language other than English, and the only entity the system offers for catching long numbers is #sys.any.
But it stopped working! I went looking for a way to somehow force the Assistant into a specific intent, because now not only is the follow-up intent not triggered, but neither is the fallback intent. The Assistant just stays stuck in the parent intent and crashes.
After spending some hours finding nothing useful, I tried a trick which worked for me.
When you create an intent, it has Normal priority by default. Changing the priority of the follow-up intent (the one whose #sys.any parameter should hold the user's input) to High solved my issue. Now it works correctly, as it did before.

The #sys.any entity generally shouldn't be used to cover everything in the phrase. For cases like this, you should be able to use a Fallback Intent and then process the entire input from the user.
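For what it's worth, a minimal sketch of that approach as a Node.js/Express webhook might look like this (the route, the 10-15 digit length check, and the reply texts are illustrative assumptions, not from the original question):

const express = require('express');
const app = express();
app.use(express.json());

// Fallback Intent handler: read the raw utterance and keep only the digits,
// so "111 222" and "111222" are treated the same. queryText is part of the
// standard Dialogflow ES webhook request.
app.post('/webhook', (request, response) => {
  const queryText = request.body.queryResult.queryText || '';
  const digits = queryText.replace(/\D/g, ''); // strip everything but digits
  if (digits.length >= 10 && digits.length <= 15) {
    response.json({ fulfillmentText: `Got your code: ${digits}` });
  } else {
    response.json({ fulfillmentText: 'Please repeat the code, digits only.' });
  }
});

app.listen(3000);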

Related

Two different intents with the same training phrases - DialogFlow. How to ensure both intents get used

Hi so I have a problem.
In Dialogflow, when I get a response to end the chat, I would like to ask the user for ratings.
So I've created two intents, "endchat" and "endchat2".
They both have the same training phrases, but it appears only endchat2 (the most recently created intent) is being used.
How do I ensure that the chatbot randomly chooses an intent after a given response, instead of only using one intent? They have the same training phrases.
An alternate idea is in the attachments. The problem is that I want the custom payload to appear only after one of the text responses (text response #1), but not if the chatbot decides to use text response #2. That is why I made two separate intents, but it isn't helping, because the bot only uses one of them.
Remember, Intents represent what the user says and does and not how you respond to that. So there is no way to "randomly choose an Intent" to use to respond.
What you can do, however, is set up a webhook for that Intent and determine how you wish to respond to what the user says. In some cases you can thank them and end the conversation, while in others you can thank them, ask the follow-up question, and set a Context so you can expect their reply.
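As a rough sketch (assuming a raw Dialogflow ES webhook; the context name await-rating and the reply texts are made up for illustration), the handler for a single "endchat" intent could look like this:

function handleEndChat(request, response) {
  const session = request.body.session; // e.g. 'projects/<project>/agent/sessions/<id>'
  if (Math.random() < 0.5) {
    // Variant 1: ask for a rating, and set a context so a follow-up intent
    // that collects the rating can match the user's next utterance.
    response.json({
      fulfillmentText: 'Thanks for chatting! How would you rate this conversation, 1 to 5?',
      outputContexts: [{ name: `${session}/contexts/await-rating`, lifespanCount: 2 }]
    });
  } else {
    // Variant 2: just end the chat; no context is set, so no rating follow-up.
    response.json({ fulfillmentText: 'Thanks for chatting! Goodbye.' });
  }
}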
Having the same / similar training phrase in multiple intents is an anti-pattern of bot design. Ultimately this confuses the bot and it leads to undefined behavior.
This should also trigger a warning under "Validation", something like "Multiple intents share training phrases which are too similar:..." on the intents.

Redirecting from one intent to another in dialogflow

I am trying to transition between intents. I have a welcome intent and, based on the user's response, I want to redirect either to a Search intent or to a CheckInternet intent.
I have set search and interconnection as output contexts in the Welcome intent and then used them as input contexts in the relevant intents. But I'm still not able to chain them together.
Unfortunately, I don't have much knowledge of Dialogflow yet, as I'm using it for the first time at a hackathon to check its capabilities. Any help would be great.
Intents in Dialogflow aren't nodes in a state machine. You don't "transition" between them. Intents reflect what the user says or does.
So, to give your example:
When they start the agent, the welcome Intent is triggered based on the welcome event.
If, at any point, they say "search", then the training phrases in the Search Intent might match, so the webhook or responses for it would be triggered.
Or, if they said "check", then the training phrases in the CheckInternet Intent might match, so the webhook or responses for it would be triggered instead.
If you need to limit under what circumstances these phrases would be accepted by an Intent, you can add a Context and make sure that Context is valid. But you usually only want to add that once you get it responding in the more general case.
You would have to add both Search and CheckInternet as "Follow-up Intents". To do so, create two new Intents and assign the contexts search and interconnection to them, respectively, as Input Contexts.
When the user says something that should lead to Search, set search as the output context; for the next utterance, the Search Intent will be considered (if a training phrase matches).
I hope that's clear enough; I'm happy to explain it in more detail, too. This is how I once configured a nicely working chatbot with 20+ Intents :)
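If you respond from a webhook rather than the console, the same idea looks roughly like this (a sketch against the raw ES webhook format; only the context names search and interconnection come from the question):

function handleWelcome(request, response) {
  const session = request.body.session;
  response.json({
    fulfillmentText: 'Do you want to search, or check your internet connection?',
    // Activating both contexts keeps both follow-up intents as candidates;
    // an intent with an Input Context is only considered while that context
    // is active (and only if one of its training phrases matches).
    outputContexts: [
      { name: `${session}/contexts/search`, lifespanCount: 1 },
      { name: `${session}/contexts/interconnection`, lifespanCount: 1 }
    ]
  });
}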

Getting the output context from the fallback intent in fulfillment

I've got a few intents. They all just use a single fallback intent and this fallback intent has the webhook enabled.
In the fallback function, what I was hoping to do is switch on the output context and then determine what should happen next, depending on which intent the fallback came from.
But the line
var context = request.body.queryResult.outputContexts;
when logged to the console outputs:
[ { name: 'projects/xxxxproj-xxxx/agent/sessions/xxxxxx-xxxxxx-xxxxx-xxxxx/contexts/xxxxxxx-context' } ]
For the switch statement I just want the last bit, the xxxxx-context. Am I going to have to split that string up to get the output context?
In the "Diagnostic Info" section, I am a bit surprised there is no reference to the intent the fallback came from; the only way to work it out seems to be the output context, but as shown above, that is quite a long string.
Thanks
Yes, the context name is just the last part of that path. Most libraries will take care of that for you, but if you're working with the JSON directly, you need to do this yourself.
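For instance, a small sketch of doing it by hand, assuming the raw ES webhook JSON shown in the question (the context names in the usage comment are placeholders):

function shortContextNames(request) {
  const contexts = request.body.queryResult.outputContexts || [];
  // 'projects/<project>/agent/sessions/<id>/contexts/foo-context' -> 'foo-context'
  return contexts.map(ctx => ctx.name.split('/').pop());
}

// Usage inside the fallback handler:
// switch (shortContextNames(request)[0]) {
//   case 'order-context':   /* ... */ break;
//   case 'payment-context': /* ... */ break;
// }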
There is no reference to "the Intent from which the fallback came" because this isn't quite the model of what an Intent is. Intents represent what the user has said or done, not the current state of the conversation or where you are in the conversation. That current state is represented by Contexts, should you choose to set them.
In that sense, how you use Contexts can vary. They can store parameters, so they are a good way to keep information between rounds of a conversation, and you can use them the way you are: to see what state the conversation is in, generally. But they also take on additional uses when defining Intents.
In an Intent definition, the Intent will only be triggered if all the Contexts listed in the Input Context field are set (i.e., have a lifespan greater than 0). Dialogflow uses this when it makes follow-up Intents, for example, and it is a common way to do things such as have "help" trigger different Intents based on Context. An Output Context will automatically capture all of the parameters specified in the Intent, including those filled in by the user's response, so this can be an easy way to remember what the user has said from round to round.
To answer your question in the comments: it doesn't specifically say which Intents were previously triggered, or which was triggered most recently, although if you're consistent in how you use your Output Contexts and what lifespans you give them, you can use them this way. What it does say is what state your conversation is in, which is generally much better anyway.
Remember: Intents represent what a user has said or done. They don't represent anything else about the conversation. Only the state of the system represents that, and one tool we have to control that state is Contexts.

how to validate user expression in dialogflow

I have created a pizza bot in Dialogflow. The scenario is like this:
Bot says: Hi What do you want.
User says : I want pizza.
If the user says "I want watermelon" or "I love pizza", then Dialogflow should respond with an error message and ask the same question again. After getting a valid response from the user, the bot should prompt the second question, like:
Bot says: What kind of pizza do you want.
User says: I want mushroom(any) pizza.
If the user gives some garbage data like "I want icecream" or "I want good pizza", then again the bot has to respond with an error and ask the same question. I have trained the bot with the intents, but the problem is validating the user input.
How can I make it possible in dialogflow?
A glimpse of training data & output
If you have already created different training phrases, then invalid phrases will typically trigger the Fallback Intent. If you're just using #sys.any as a parameter type, then it will be filled with anything, so you should define narrower Entity Types.
In the example Intent you provided, you have a number of training phrases, but Dialogflow uses these training phrases as guidance, not as absolute strings that must be matched. From what you've trained it, it appears that phrases such as "I want .+ pizza" should be matched, so the NLU model might read it that way.
To narrow exactly what you're looking for, you might wish to create an Entity Type to handle pizza flavors. This will help narrow how the NLU model will interpret what the user will say. It also makes it easier for you to understand what type of pizza they're asking for, since you can examine just the parameters, and not have to parse the entire string again.
How you handle this in the Fallback Intent depends on how the rest of your system works. The most straightforward is to use your Fulfillment webhook to determine what state of your questioning you're in and either repeat the question or provide additional guidance.
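A minimal sketch of that idea, assuming the raw ES webhook format and made-up context names (awaiting-order, awaiting-flavor) that the corresponding question intents would set:

function handleFallback(request, response) {
  // Reduce the full context paths to their short names.
  const contexts = (request.body.queryResult.outputContexts || [])
    .map(ctx => ctx.name.split('/').pop());
  let reply = "Sorry, I didn't get that.";
  if (contexts.includes('awaiting-flavor')) {
    reply = "Sorry, we don't have that one. What kind of pizza would you like? We have mushroom and chicken.";
  } else if (contexts.includes('awaiting-order')) {
    reply = "Sorry, I can only help with pizza. What would you like?";
  }
  response.json({ fulfillmentText: reply });
}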
Remember, also, that the conversation could go something like this:
Bot says: Hi What do you want.
User says : I want a mushroom pizza.
They've skipped over one of your questions (which wasn't necessary in this case). This is normal for a conversational UI, so you need to be prepared for it.
The types of pizza (e.g. mushroom, chicken, etc.) should be a custom entity (a sketch of creating one through the API follows the list below).
Then, in your intent, you should define the training phrases as you have, but make sure that the entity is marked and that you also add a template for the user's response.
There are 3 main things you need to note here:
The entities are marked.
A template is used. To create a template, click on the quote symbol in the training phrases, as the image below shows. Make sure that your entity is used here again.
Make your pizza type a required parameter. That way it won't advance to the next question unless a valid answer is provided.
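If you prefer to create the entity programmatically instead of in the console, a sketch with the official Node.js client might look like this (the project ID, values, and synonyms are placeholders; double-check the @google-cloud/dialogflow docs for the client version you're on):

const dialogflow = require('@google-cloud/dialogflow');

async function createPizzaTypeEntity(projectId) {
  const client = new dialogflow.EntityTypesClient();
  // KIND_MAP means each canonical value carries a list of synonyms.
  const [entityType] = await client.createEntityType({
    parent: `projects/${projectId}/agent`,
    entityType: {
      displayName: 'pizza-type',
      kind: 'KIND_MAP',
      entities: [
        { value: 'mushroom', synonyms: ['mushroom', 'funghi'] },
        { value: 'chicken', synonyms: ['chicken'] },
        { value: 'margherita', synonyms: ['margherita', 'margarita'] }
      ]
    }
  });
  return entityType;
}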
One final piece of advice: put some more effort into designing the interaction and the responses. Greeting your users with "what do you want" isn't the best experience. Also, with your approach you're trying to force them down one specific path, but that is not how a conversational app should work. You can find more about this here.
A better experience would be to greet the users, explain what they can do with your app and let them know about their options. Example:
- Hi, welcome to the Pizza App! I'm here to help you find the perfect pizza for you [note: here you need to add any other actions your bot can perform, like tracking an order, for instance]! Our most popular pizzas are mushroom, chicken, and margherita. Do you already know what you want, or do you need help?

Entity over-generalisation on Api.ai

We’ve been having a great deal of difficulty with chatbot entities over-generalising on Api.ai, i.e. returning values that have not been specified for that entity when using the “Define Synonyms” feature on custom entities, even when the “Allow automated expansion” flag is turned off.
Our key example is an entity we use for confirming a user choice called confirm_accept. We had an entry: “that’s it”, with synonyms: “thats it”, “that is it”, “that’s it thanks”, “thats it thanks”, “that is it thanks”. This entity value was being returned unexpectedly in expressions where just a stray “it” was appearing.
In general, we have seen a lot of inappropriate entity generalisation which seems to indicate there is some form of stop word removal and stemming/lemmatization going on during entity identification... and which can’t be turned off.
This produces poor entity classifications, making it difficult to create entities for which very precise values are important, e.g. where a single word or character can make a big difference in meaning. Our key use case involves a lot of address processing, so it is important we get back only values we have specified.
Types of over-generalisations we’ve seen include:
inappropriate identification of determiners (a, an, the, this, that, etc.) as part of entities: as in “it” returning “that’s it”
stemmed words: as in stray mentions of “driving”, returning “drive” (a valid street type entity)
inappropriate plural stems: a stray mention of "children" returning "child", or a stray "will" returning "wills" (in our case "child" and "wills" are street name entities, so we don't want "children" or "will" to match them)
This is currently making it difficult to create a production quality chatbot using the Api.ai service.
Anyone had more luck at either getting a response from Api.ai or solving the over-generalisation problem?
Entities are meant to extract information from conversation:
API.AI's entities are meant to be used to extract data from conversational input, not to parse different phrases and parts of speech. Your examples (that's it, thats it, that is it, that's it thanks, thats it thanks, that is it thanks) all seem to indicate that the user wants to confirm that the last message from the API.AI agent was correct. For cases like these, it would be best to use those phrases as training examples for a new or existing intent that captures the user confirming the agent's last response.
API.AI captures entity tenses and plurals automatically: To address your other concern ("driving" returning the "drive" value, "children" returning "child", or "will" returning "wills"): API.AI intentionally captures different tenses and plurals of entities to provide a better experience for users who may not know the exact entities you've entered in your database. This allows users of your conversational app to have a natural conversation and not be required to use precise wording or exact phrases.
