I'm working on a bot using Jovo v4 and Google Actions (not Dialogflow) as the NLP provider.
I've created an entity called contactName which holds dozens of names and, as you may guess, is used to identify names in intents.
The problem is that the bot has its own name, and the NLU keeps identifying it as a contactName every time the user mentions it casually during the conversation.
How do I prevent the bot from identifying a specific value in an entity?
Is it possible to insert/classify "undesired" values?
Note 1: I didn't add the bot's name as a training value.
Note 2: Fuzzy matching and the ability to "accept unknown values" are on because I need them.
The best way to handle this is to intentionally create an intent to match these queries and redirect them appropriately, which may include responding to the user and stating what kinds of user queries are valid.
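For the Jovo v4 side, a minimal sketch of what the handler for such a dedicated intent could look like (BotNameIntent, the component name, and the reply text are illustrative assumptions; the intent itself would be defined in your Jovo model with training phrases that contain the bot's name):

```ts
// Hypothetical Jovo v4 component handling a dedicated "bot name" intent.
// BotNameIntent is assumed to be defined in the Jovo model with phrases
// containing the bot's own name, so the NLU matches this intent instead
// of stuffing the name into the contactName entity.
import { Component, BaseComponent, Intents } from '@jovotech/framework';

@Component()
export class BotNameComponent extends BaseComponent {
  @Intents(['BotNameIntent'])
  botNameMentioned() {
    return this.$send({
      message: "That's me! You can ask me to reach one of your contacts, e.g. 'call Maria'.",
    });
  }
}
```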
I'm trying to create a custom action through Google Assistant. I have custom data defined by the user, and I want the user to be able to ask about this data, identifying which entry they want to know about by supplying its name.
ex:
User says "Tell me about Fred"
Assistant replies with "Fred is red"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
The problem I'm having is how to add a training phrase or re-prompt for the user to use when they supply a name which doesn't exist.
ex:
User says "Tell me about Greg"
Assistant replies with "I couldn't find 'Greg'. Who would you like to know about?"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
I've tried adding a training phrase which only contains the 'name' parameter, but then if the user says "Tell me about Fred", the "name" parameter is set to "Tell me about Fred" instead of just "Fred", which means it ignores the other training phrases I have set up.
Anyone out there who can be my Obi-Wan Kenobi?
Edit:
I've used Alexa for this same project and sent Alexa an elicitSlot directive. Can something similar be implemented here?
There is no real equivalent to an elicitSlot directive in this case (at least not the way I usually see it used), but Dialogflow does provide several tools for accomplishing what you're trying to do.
The general approach is that, when sending your reply, you also set an Output Context. You can attach as parameters to that Context any information you want to retain (which value you're prompting for, and possibly other state you've already collected).
Then you can have Intents that use this context as an Input Context, so they will only be matched while the Context is active. Such an Intent can match @sys.any, or whatever other Entity type might be appropriate in this case.
One advantage of this approach is that it allows users to reply more conversationally, or to pivot their reply away from the prompting question you've just asked. They can answer within the Context, or through other Intents that you've already set up for other purposes.
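To make that concrete, here is a minimal sketch using the dialogflow-fulfillment Node.js library; the intent names (tell.me.about, provide.name), the context name awaiting-name, and the data array are assumptions for this example, not something Dialogflow prescribes:

```ts
import express from 'express';
import { WebhookClient } from 'dialogflow-fulfillment';

const data = [{ name: 'Fred', info: 'Fred is red' }];

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  // Matched by training phrases like "Tell me about Fred".
  function tellMeAbout(agent: WebhookClient) {
    lookup(agent, agent.parameters.name as string);
  }

  // Matched only while the 'awaiting-name' context is active; its single
  // parameter is typed @sys.any, so a bare "Greg" fills just the name.
  function provideName(agent: WebhookClient) {
    lookup(agent, agent.parameters.name as string);
  }

  function lookup(agent: WebhookClient, name: string) {
    const entry = data.find((d) => d.name.toLowerCase() === name.toLowerCase());
    if (entry) {
      agent.add(entry.info);
      return;
    }
    // Not found: re-prompt and set an Output Context so the follow-up
    // Intent (Input Context: awaiting-name) can catch the next reply.
    agent.setContext({ name: 'awaiting-name', lifespan: 2, parameters: {} });
    agent.add(`I couldn't find '${name}'. Who would you like to know about?`);
  }

  const intentMap = new Map();
  intentMap.set('tell.me.about', tellMeAbout);
  intentMap.set('provide.name', provideName);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```

Because the Context's lifespan is 2, the prompt expires on its own if the user pivots to a different Intent instead of answering.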
I am writing to ask a question about Dialogflow fulfillments.
I am trying to create an agent for Google Home, and my backend is basically a webhook implemented in TypeScript.
In the conversation that I designed, the user asks the agent to perform an action, providing a category as a parameter. Since the set of possible categories can vary over time, I am using the entity type @sys.any to detect the parameter.
My problem is that, when the fulfillment tries to identify the specific category the agent needs to act on, the requested parameter may match multiple categories, so I need a follow-up step to ask the user to clarify which category they actually want to select.
E.g. the conversation could be the following:
Agent: 'Welcome.'
User: 'Do action on **category**'
Agent: 'I have found **categoryA**, **categoryB** and **categoryC**. Please specify which one you want to select.'
User: 'Select the second || Select **categoryB**'
Agent: 'Great, action performed on **categoryB**'
Now, I was able to build this conversation using followup events and contexts: for example, I created two followup events, one that detects numbers and another that detects text, so the user is routed to one or the other depending on what they say (if the user says 'The first', a number is detected, and in the backend I cycle through the categories, selecting the one associated with that index; I do a similar operation, inside a different intent, if the user says 'categoryX').
What I want to understand is: what is the proper way to achieve that kind of conversation through the Node.js fulfillment API?
Thank you for any help.
From your description - you've done precisely the right thing (although you don't need followup intents).
When you reply with the options the user has, you include a Context that contains the array of possible results. You then create Intents that have this as an Input Context and match either by the index in the array (let's call this the match.index Intent) or by name (the match.name Intent).
In your webhook, the match.index Intent would determine which category was actually chosen, and then call a function that takes care of that category. Similarly, the webhook for match.name would take the parameter with the name and call the same function to take care of that category.
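A sketch of how those two Intents can share one handler (the intent names match.index and match.name from above, plus the context name awaiting-choice and the findCategories stub, are assumptions for this example):

```ts
import { WebhookClient } from 'dialogflow-fulfillment';

// Stand-in for your own lookup against the current set of categories.
function findCategories(query: string): string[] {
  const all = ['categoryA', 'categoryB', 'categoryC'];
  return all.filter((c) => c.toLowerCase().includes(query.toLowerCase()));
}

// The single function both intents end up calling.
function performAction(agent: WebhookClient, category: string) {
  // ... the real work happens here ...
  agent.add(`Great, action performed on ${category}`);
}

// First turn: "Do action on <category>" (parameter typed @sys.any).
function doAction(agent: WebhookClient) {
  const matches = findCategories(agent.parameters.category as string);
  if (matches.length === 1) {
    performAction(agent, matches[0]);
    return;
  }
  // Ambiguous: remember the candidates on a Context and ask the user.
  agent.setContext({
    name: 'awaiting-choice',
    lifespan: 2,
    parameters: { candidates: matches },
  });
  agent.add(`I have found ${matches.join(', ')}. Please specify which one you want to select.`);
}

// "Select the second" (Input Context: awaiting-choice, parameter @sys.ordinal).
function matchIndex(agent: WebhookClient) {
  const ctx = agent.getContext('awaiting-choice');
  const candidates: string[] = (ctx?.parameters as any)?.candidates ?? [];
  const index = Number(agent.parameters.index); // ordinals are 1-based
  performAction(agent, candidates[index - 1]);
}

// "Select categoryB" (Input Context: awaiting-choice, parameter @sys.any).
function matchName(agent: WebhookClient) {
  const ctx = agent.getContext('awaiting-choice');
  const candidates: string[] = (ctx?.parameters as any)?.candidates ?? [];
  const name = (agent.parameters.category as string).toLowerCase();
  const chosen = candidates.find((c) => c.toLowerCase() === name);
  if (chosen) performAction(agent, chosen);
  else agent.add(`Sorry, I don't have a category called ${agent.parameters.category}.`);
}
```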
Dialogflow offers a pre-built agent called "Maps" that helps catch the location in the user's statement. This Maps agent resolves the location and returns data such as
City name (when I search for Chennai)
Business name (when I search for Google, Chicago)
subadmin-area (when I search for "where is saidapet")
admin-area (when I search for Schaumburg)
What is the logic behind this Agent?
Is there any schema defined anywhere so that I know which field to expect for a given search?
Is it possible to get lat/long as part of this response?
Appreciate any thoughts.
When you use one of these prebuilt agents, you are basically creating a new agent of your own based on a Google-provided sample/template. Click on the intent definitions and you'll see what parameter names they map things to. You can also change these names, add parameters, remove parameters, edit the list of example utterances, etc. What you don't get is example fulfillment code; you're on your own to do something useful with the intents Google has provided in these samples.
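For the fulfillment side, here is a hedged sketch of a webhook reading the composite location parameter such an agent produces; the route and the parameter name location are assumptions, the keys mirror the fields listed in the question, and note that @sys.location does not include coordinates, so lat/long would require a separate geocoding call (for example the Google Maps Geocoding API):

```ts
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // Dialogflow v2 webhook format: resolved entities arrive in
  // queryResult.parameters. The location value is a composite object
  // whose sub-fields are filled depending on what the user asked about.
  const location = req.body.queryResult?.parameters?.location ?? {};
  const place =
    location['business-name'] ||
    location['city'] ||
    location['subadmin-area'] ||
    location['admin-area'];

  res.json({
    fulfillmentText: place ? `You asked about ${place}.` : 'Which place do you mean?',
  });
});

app.listen(3000);
```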
I am trying to build a bot (a custom UI on my website) where a user enters a product name to view its details, and I provide a link to the product's full details page. If the user enters a name and there are multiple results from my database, I want to show those products as quick replies so that the user can select one of them.
How do I recognize whether the user has entered a product name or something else? I can use @sys.any, but all the small talk will also go there, which will be of no use.
The same problem occurs when I display a list of products with matching names. When the user clicks on one of the buttons, I take them to a custom follow-up intent where I have entered the template for a product entity. But Dialogflow only recognizes the products that have been defined in the entity (I listed a few products and checked automated expansion).
I have tried using @sys.any instead, but then the intent is called for any string the user types. Let's say the user does not respond and after a while types in "hi"; my intent with @sys.any is still called. How do I overcome this situation?
As far as I understand, there are two ways to solve this. First, use an entity and define your product list there for the bot to understand user responses (which you have done), but this becomes an overhead when you have a list of, say, 1000 or more products. Second, continue using @sys.any and define a parameter, then write a webhook where you validate the user's response against the product list in your database: if it is present, show the product details; otherwise, say the entered response is incorrect.
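A sketch of that second approach, with a stub standing in for your own database query and Dialogflow's Suggestion rich response used for the quick replies (the parameter name product and the sample data are assumptions):

```ts
import { WebhookClient, Suggestion } from 'dialogflow-fulfillment';

interface Product { name: string; url: string; }

// Sample data standing in for your real product database.
const db: Product[] = [
  { name: 'Red Shoes', url: 'https://example.com/p/red-shoes' },
  { name: 'Red Shirt', url: 'https://example.com/p/red-shirt' },
];

// Stand-in for your own database lookup (e.g. a LIKE or full-text query).
async function findProducts(name: string): Promise<Product[]> {
  return db.filter((p) => p.name.toLowerCase().includes(name.toLowerCase()));
}

async function productLookup(agent: WebhookClient) {
  const name = agent.parameters.product as string;
  const matches = await findProducts(name);

  if (matches.length === 0) {
    agent.add(`I couldn't find any product called '${name}'. Please try another name.`);
  } else if (matches.length === 1) {
    agent.add(`Here are the details for ${matches[0].name}: ${matches[0].url}`);
  } else {
    // Several hits: offer them as quick replies so the user can pick one.
    agent.add('I found several matching products:');
    for (const p of matches) {
      agent.add(new Suggestion(p.name));
    }
  }
}
```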
I'm trying to make an agent that can give me details about movies.
For example, the user says "Tell me about (movie-name)", which sends a POST request to my API with the (movie-name), which then returns the response.
However, I don't understand how to grab the movie name from the user's speech without creating a movieName entity with a list of all the movies out there. I just want to grab the next word the user says after "tell me about" and store it as a parameter. How do I go about achieving that?
Yes, you must create a movieName entity, but you do not need to create a list of all movies. Maybe you are experienced with Alexa, which requires a list of suggested values, but in api.ai you don't need to do that.
I find that api.ai is not very good at figuring out which words are part of a free-form entity like movieName, but hopefully adding enough user expressions will help it with that.
edit: the entity I was thinking of is @sys.any, but maybe it would be better to use a list of movie names with the 'automated expansion' feature. I haven't tried that, but it sounds like the way Alexa's custom slots work, which is actually a lot more flexible (just using the list as a guideline) than people seem to think.
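For the free-form route, a sketch of the webhook side; the OMDb URL and API key are illustrative assumptions, and the intent is assumed to have one parameter, movieName, typed @sys.any, trained on phrases like "Tell me about Inception":

```ts
import { WebhookClient } from 'dialogflow-fulfillment';

async function movieInfo(agent: WebhookClient) {
  const movieName = agent.parameters.movieName as string;

  // Hypothetical lookup against the public OMDb API (replace YOUR_KEY).
  // Node 18+ provides fetch globally.
  const res = await fetch(
    `https://www.omdbapi.com/?t=${encodeURIComponent(movieName)}&apikey=YOUR_KEY`
  );
  const movie = await res.json();

  agent.add(movie?.Plot ?? `Sorry, I couldn't find anything about '${movieName}'.`);
}
```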