Dialogflow parameters and entities - dialogflow-es

I wanted to know if it is possible for Dialogflow to store the value of a parameter without setting an entity for it. For example, the bot asks the user "What is your name?" and the user responds with "Jack", so Dialogflow will store the value "Jack" in the parameter. Thank you.

Dialogflow Web UI
So when working with the Dialogflow Web UI your options are a bit more limited compared to fulfillment, and you won't really get around using entities. In the Web UI the best way to extract values from the user input is by using entities and parameters. The only way of using "raw" input via the Web UI is the @sys.any entity, but you should be careful with this.
The @sys.any entity takes the complete input from the user and gives it to you, but it doesn't provide any information on what kind of entity the input might be. For instance, if you ask the user "What is your name?" the user might respond with "John", but they could also respond by saying "Oh.. uhh. My name is John". If you use @sys.any you get the whole string and you have to detect what is a name and extract it from the input yourself. Entities and parameters do this for you.
You can use the input from the user in your response by referencing $parameterName in the response text, e.g. "Nice to meet you, $name!" if the parameter is called name.
Dialogflow Fulfillment
When working with code the issue with raw input remains the same: you will get the whole user input, but you have to do the recognition or regexing to retrieve the values yourself.
One benefit of working with fulfillment is that you always have access to the raw input: you can read agent.query to retrieve it, so you are not required to use a @sys.any entity in your parameter setup.
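For example, a minimal fulfillment sketch (Node.js with the dialogflow-fulfillment library and Firebase-style wiring; the "Get Name" intent and the "name" parameter are placeholders, not part of the original question) that reads both the raw query and the extracted parameter might look like this:

```javascript
// Minimal sketch: read the raw utterance and the entity-extracted parameter.
const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  function getName(agent) {
    const rawInput = agent.query;        // full raw utterance, e.g. "Oh.. uhh. My name is John"
    const name = agent.parameters.name;  // value extracted by the entity, e.g. "John"
    agent.add(`Nice to meet you, ${name || rawInput}!`);
  }

  const intentMap = new Map();
  intentMap.set('Get Name', getName);    // "Get Name" is a hypothetical intent name
  agent.handleRequest(intentMap);
});
```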
Conclusion
So as mentioned above, there are a couple of ways of retrieving the raw input of the user, but in both cases you lose the automatic detection provided by entities and parameters. While at first it might seem a hassle to work with entities and parameters, if you are going to use the user input for anything, like saving the name or making a decision, I really recommend sticking to the entity approach, because it automatically detects the value in the string and you don't have to worry about how the user phrases their answer, which is a big part of developing a bot.
There are very few cases where using raw input has made developing easier for me in the long run.

Related

Entity Extraction/Validation in Bot Composer

I have a Composer text input task with the following settings. I'm making a bot to test the capabilities of LUIS integration with a bot. The issue I'm facing is not the entity recognition itself, but rather the entity validation on the input task for that entity. I only notice this when using a text input task; I have the entity validation working for other tasks (specifically datetime).
Current Entity User Input Task
To illustrate my issue, here are two current behaviors of the bot:
WAI: I say something like "I want to make an appointment and my contact number is (800)-234-5678". This triggers my intent model, skips the user input question where I ask for the user's phone number (seeing as they provided one already), and the conversation.phoneNumber variable is set to (800)-234-5678.
Not WAI: I say something like "I want to make an appointment and my contact number is abc". This triggers my intent model as desired, but it also skips the user input question where I ask for the user's phone number (because it thinks they provided one already), and the conversation.phoneNumber variable is set to abc. Ideally, this is where I would expect validation to happen: the bot should ask "What is a good phone number for our office..." seeing as abc isn't a phone number.
For reference, this is the documentation I'm following:
https://learn.microsoft.com/en-us/composer/how-to-define-advanced-intents-entities
I considered a regex validator as an option, because a phone number is a trivial entity that can be easily defined, but something more complex/diverse (such as geographic location, currency, etc.) would be better handled by whatever dictionaries Microsoft has in place for its built-in entities. I can see in the raw data, when I type in an utterance, that no entity is being picked up (so SOMETHING is working in the background). I'm just curious whether I can make use of that functionality in a Bot Composer task.
As I said before, this works fine when validating whether a user entered a date/time on a separate task; I'm hoping there's a similar feature for other built-in, text-based entities in Bot Composer.
I assume that whatever built-in entity recognition Microsoft has for its prebuilt entities (https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-prebuilt-entities) is vastly superior to whatever regex I can come up with, and some type of built-in validation would be handy for more complex bots.
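To illustrate the regex check described above, here is a minimal JavaScript sketch of the validation idea only; it is not Composer configuration, and the pattern is a simplistic US-style match rather than anything production-ready:

```javascript
// Illustrative only: the kind of check the question describes, outside Composer.
const PHONE_PATTERN = /^\(?\d{3}\)?[-\s]?\d{3}-?\d{4}$/;

function looksLikePhoneNumber(value) {
  return PHONE_PATTERN.test(value.trim());
}

console.log(looksLikePhoneNumber('(800)-234-5678')); // true  -> accept the value
console.log(looksLikePhoneNumber('abc'));            // false -> re-prompt the user
```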

Response based on user input, dynamic responses

Good day,
I am currently trying to create an intent that can produce responses depending on the user input. (The chatbot will be embedded on a website later.)
Let's say we have an entity called cars with three entries: "Volkswagen" "Audi" "Ford".
Now when the user types in something with e.g. Audi in it, the response should correspond to that. Something like: if Audi, then give this response; if Ford, then that response.
I couldn't find anything helpful yet.
Thank you in advance!
Remember that Intents represent what the user says, not how you handle it or how you reply. Although Dialogflow does offer the ability to respond to Intents, these responses aren't based on specific values that may appear in parameters.
There are, however, a variety of ways you can handle what you're doing, based on the rest of what you're trying to do.
Multiple Intents
One solution is to create an Intent for each type of thing that the user may talk about. You could then put the response you want for each in the response section of that Intent.
This is probably a bad approach, but it may be useful in some cases. It requires you to duplicate training phrases across the different Intents. On the upside, it does let you vary the replies, and it truly represents the Intent of what the user is trying to say.
Using Parameters with Fulfillment
A better approach is to have a single Intent with many phrases representing what the users can ask. These phrases would have parameters of your Entity Type.
You can then enable Fulfillment for this Intent and write a Fulfillment webhook that looks at the value of the parameter and sends back an appropriate reply.
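A minimal sketch of such a webhook (using the dialogflow-fulfillment Node.js library; the intent name "Car Info", the parameter name "car" and the reply texts are assumptions) might look like this:

```javascript
// Minimal sketch: branch the reply on the value of the "car" parameter.
const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  function carInfo(agent) {
    const car = agent.parameters.car;  // e.g. "Audi", "Ford" or "Volkswagen"
    const replies = {
      Audi: 'Audi it is! Here is what we have on Audi...',
      Ford: 'Ford it is! Here is what we have on Ford...',
      Volkswagen: 'Volkswagen it is! Here is what we have on Volkswagen...',
    };
    agent.add(replies[car] || `Sorry, I don't know anything about ${car} yet.`);
  }

  const intentMap = new Map();
  intentMap.set('Car Info', carInfo);  // "Car Info" is a hypothetical intent name
  agent.handleRequest(intentMap);
});
```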
Using Parameters with DetectIntent
Since your ultimate goal is to embed this on a website, it may be more appropriate to have your website show something different based on what the user has said (for example, showing a picture of the car in another pane, or links to different pages, to use your example of cars).
In this case, your chat client (or a proxy) would be calling the DetectIntent API. You can structure your Intent similarly to the above, with parameters of the Entity Type, and the reply sent to your client will contain the matched Intent along with the value of the parameter. Your client can then check the value of the parameter and change the display accordingly.
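A rough sketch of that DetectIntent call from a Node.js backend, using the @google-cloud/dialogflow client library (the "car" parameter name is again an assumption):

```javascript
// Rough sketch: send the user's text to Dialogflow and read the matched intent
// and the "car" parameter from the result (project/session IDs are placeholders).
const dialogflow = require('@google-cloud/dialogflow');

async function detect(projectId, sessionId, text) {
  const sessionClient = new dialogflow.SessionsClient();
  const session = sessionClient.projectAgentSessionPath(projectId, sessionId);

  const [response] = await sessionClient.detectIntent({
    session,
    queryInput: { text: { text, languageCode: 'en-US' } },
  });

  const result = response.queryResult;
  // Parameters arrive as a protobuf Struct; "car" is the assumed parameter name.
  const carField = result.parameters.fields.car;
  const car = carField ? carField.stringValue : undefined;

  console.log(`Matched intent: ${result.intent.displayName}, car: ${car}`);
  return car; // the website can swap images, links or panes based on this value
}

// Example usage: detect('my-project-id', 'web-session-123', 'Tell me about the Audi');
```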

How to create a search form with dialogflow

I am trying to build a search flow with Dialogflow that can take any combination of first name, address, phone number, zip code or city as input to a search algorithm. The user does not need to provide all of them, but we will refine our search with each additional answer until we only have one result. Basically we are trying to identify which customer we are talking to.
How should this type of intent (or set of intents) be structured? We have tried one intent with multiple parameters, but we do not need all of them to be required. We have also written a JavaScript function for fulfillment, but how can we communicate back to Dialogflow whether we need more information?
Thank you very much for your help.
Slot filling is designed for this purpose.
Hope that helps.
Please post more code/details to help answers be more specific.
First, keep in mind that Intents reflect what the user is saying, and not typically what you're replying with or what other information you need. Slot filling sometimes bends this rule, but only if you have required slots.
Since you don't - you need a different approach.
This can be done with a single intent, although you may find that multiple intents make it easier in some ways. The approach is broadly the same:
When you ask the question, make sure you set an Outgoing Context with a relatively short lifespan (2-3 is good) to indicate you are collecting user info.
Create an Intent (or Intents) that have sample phrases that capture the information you need.
Some of these will have obvious entity types (phone number and zip code) while others will be more difficult (First name has a system entity type, but it doesn't include all possible first names).
You will need to create sample phrases that provide the parameters by themselves, along with longer phrases that use them naturally. You're the best judge of this, and you should probably write some sample conversations before you write the phrases.
In your fulfillment, you'll figure out if you have enough information.
If you do, you can reply and clear the Context that was set. (Clearing it is important so Dialogflow doesn't match the information-collecting Intent again.)
If you do not, you can add the information you have as parameters to the Context so you can save it for later processing, make sure you reset the Context lifespan (so it doesn't expire), and prompt the user for additional information. Again, having a conversation mocked out ahead of time will help here.
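A minimal fulfillment sketch of this flow (the context name "collecting-info", the intent name, and the lookupCustomers helper are all hypothetical) could look something like this:

```javascript
// Minimal sketch: merge newly captured parameters with anything saved in the
// "collecting-info" context, search, and either answer or ask for more details.
const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

// Hypothetical search against your customer records; replace with a real query.
function lookupCustomers(fields) {
  return []; // e.g. query your CRM/database with the fields collected so far
}

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  function collectCustomerInfo(agent) {
    const previous = agent.context.get('collecting-info');
    const collected = Object.assign({}, previous ? previous.parameters : {});
    for (const [key, value] of Object.entries(agent.parameters)) {
      if (value) collected[key] = value; // ignore parameters not filled this turn
    }

    const matches = lookupCustomers(collected);

    if (matches.length === 1) {
      agent.context.delete('collecting-info'); // clear it so this Intent stops matching
      agent.add('Thanks! I found your record.');
    } else {
      // Save what we have and keep the context alive for the next turn.
      agent.context.set({ name: 'collecting-info', lifespan: 3, parameters: collected });
      agent.add('I found more than one match. Could you also give me your zip code or phone number?');
    }
  }

  const intentMap = new Map();
  intentMap.set('Collect Customer Info', collectCustomerInfo); // hypothetical intent name
  agent.handleRequest(intentMap);
});
```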

how to validate user expression in dialogflow

I have created a pizza bot in dialogflow. The scenario is like..
Bot says: Hi What do you want.
User says : I want pizza.
If the user says "I want watermelon" or "I love pizza" then Dialogflow should respond with an error message and ask the same question again. After getting a valid response from the user, the bot should ask the second question, like:
Bot says: What kind of pizza do you want.
User says: I want mushroom(any) pizza.
If the user gives some garbage data like "I want icecream" or "I want good pizza" then again the bot has to respond with an error and ask the same question. I have trained the bot with the intents, but the problem is validating the user input.
How can I make it possible in dialogflow?
A glimpse of training data & output
If you have already created different training phrases, then invalid phrases will typically trigger the Fallback Intent. If you're just using @sys.any as a parameter type, then it will be filled with anything, so you should define narrower Entity Types.
In the example Intent you provided, you have a number of training phrases, but Dialogflow uses these training phrases as guidance, not as absolute strings that must be matched. From what you've trained it, it appears that phrases such as "I want .+ pizza" should be matched, so the NLU model might read it that way.
To narrow exactly what you're looking for, you might wish to create an Entity Type to handle pizza flavors. This will help narrow how the NLU model will interpret what the user will say. It also makes it easier for you to understand what type of pizza they're asking for, since you can examine just the parameters, and not have to parse the entire string again.
How you handle this in the Fallback Intent depends on how the rest of your system works. The most straightforward approach is to use your Fulfillment webhook to determine what state of your questioning you're in and either repeat the question or provide additional guidance (a sketch of this follows below).
Remember, also, that the conversation could go something like this:
Bot says: Hi What do you want.
User says : I want a mushroom pizza.
They've skipped over one of your questions (which wasn't necessary in this case). This is normal for a conversational UI, so you need to be prepared for it.
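For instance, a Default Fallback Intent handler might check a context set by the question that was just asked (the context name "awaiting-pizza-type" is an assumption) and re-prompt accordingly:

```javascript
// Minimal sketch: in the Default Fallback Intent handler, check which question
// was pending and re-prompt with more guidance.
function fallback(agent) {
  if (agent.context.get('awaiting-pizza-type')) {
    agent.add('Sorry, that is not a pizza we make. We have mushroom, chicken and margherita. Which one would you like?');
  } else {
    agent.add("Sorry, I didn't get that. What would you like to order?");
  }
}

// Registered in your webhook's intent map, e.g.:
// intentMap.set('Default Fallback Intent', fallback);
```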
The types of pizza (e.g. mushroom, chicken, etc.) should be a custom entity.
Then, in your intent, you should define the training phrases as you have, but make sure that the entity is marked and that you also add a template for the user's response:
There are 3 main things you need to note here:
The entities are marked
A template is used. To create a template, click on the quote symbol in the training phrases, as the image below shows. Make sure that your entity is used here as well
Make your pizza type a required parameter. That way it won't advance to the next question unless a valid answer is provided.
One final piece of advice: put some more effort into designing the interaction and the responses. Greeting your users with "what do you want" isn't the best experience. Also, with your approach you're trying to force them down one specific path, but this is not how a conversational app should behave. You can find more about this here.
A better experience would be to greet the users, explain what they can do with your app and let them know about their options. Example:
- Hi, welcome to the Pizza App! I'm here to help you find the perfect pizza for you [note: here you need to add any other actions your bot can perform, like tracking an order for instance]! Our most popular pizzas are mushroom, chicken and margherita. Do you know what you want already, or do you need help?

Entity over-generalisation on Api.ai

We’ve been having a great deal of difficulty with chatbot entities over-generalising on Api.ai, i.e. returning values that have not been specified for that entity when using the “Define Synonyms” feature on custom entities, even when the “Allow automated expansion” flag is turned off.
Our key example is an entity we use for confirming a user choice called confirm_accept. We had an entry: “that’s it”, with synonyms: “thats it”, “that is it”, “that’s it thanks”, “thats it thanks”, “that is it thanks”. This entity value was being returned unexpectedly in expressions where just a stray “it” was appearing.
In general, we have seen a lot of inappropriate entity generalisation which seems to indicate there is some form of stop word removal and stemming/lemmatization going on during entity identification... and which can’t be turned off.
This returns poor entity classifications, making it difficult to create entities for which very precise values are important, e.g. where a single word or character can make a big difference in meaning. Our key use case involves a lot of address processing, so it is important we get back only values we have specified.
Types of over-generalisations we’ve seen include:
inappropriate identification of determiners (a, an, the, this, that, etc.) as part of entities: as in “it” returning “that’s it”
stemmed words: as in stray mentions of “driving”, returning “drive” (a valid street type entity)
inappropriate plural stems: a stray mention of “children” returning “child”, or a stray “will” returning “wills” (which in our case “child” and “wills” are street name entities, so we don’t want “children” or “will” to be returned)
This is currently making it difficult to create a production quality chatbot using the Api.ai service.
Anyone had more luck at either getting a response from Api.ai or solving the over-generalisation problem?
Entities are meant to extract information from conversation:
API.AI's entities are meant to be used to extract data from conversational input, not to parse different phrases and parts of speech. Your examples (that's it, thats it, that is it, that's it thanks, thats it thanks, that is it thanks) all seem to indicate that the user's intent is to confirm that the last message from the API.AI agent was correct. For cases like these, it would be best to use those phrases as training examples for a new intent, or add them to an existing intent, that indicates the last response was correct.
API.AI captures entity tenses and plurals automatically:
To address your other concern (a stray "driving" returning the "drive" value, "children" returning "child", or "will" returning "wills"): API.AI intentionally captures different tenses and plurals of entities to provide a better experience for users who may not know the exact entities you've entered in your database. This allows your conversational app to have a natural conversation with its users and not require precise wording.
