I am in the process of creating an agent that will handle call requests via speech. For example, here is what the flow looks like:
1) User says: I need to call John
2) The agent grabs John as the parameter and, via fulfillment, queries a database for all the entries that contain John in a certain field. If there is more than one John, a follow-up intent is triggered and sends a response asking which John is the desired one:
Agent says: There are several Johns, who do you wish to call? John Test, John Smith, John Pleis or John Schmidt?
3) The user wants to get in touch with John Pleis.
User says: John Pleis
Here is where I'm having a problem: Dialogflow is recognizing "John Please" instead of "John Pleis". How can I handle this?
Update
Here is how the intent looks:
-- INITIAL INTENT --
-- FOLLOW UP INTENT --
You should be able to address these by using your own Entity Types for the names instead of using the System Entity Type of #sys.any. This lets you specify the possible names that would be accepted, and Dialogflow can work with the Assistant to better understand what the user might be saying. This isn't perfect, but it can improve phrase detection and gives you some tools to make detection even better.
If your directory is relatively small (a few hundred people, perhaps), you can simply create Developer Entity Types up front for all the names. (There is even an API for managing these Entity Types, so you can automate it.)
If you have too many names, you may want to just create Developer Entity Types for the possible first names (or use the System Entity Type of #sys.given-name if that is suitable enough) and then, as part of your fulfillment webhook, populate a Session Entity Type with the possible names that match.
In either of these cases, you can also use entity aliases to help improve matching. So if you see that "John Please" is still matching, you can set this up as an alias for "John Pleis", and Dialogflow will report this as "John Pleis" for that Entity.
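For the Session Entity Type approach, here is a minimal sketch of what the webhook side could look like, assuming the @google-cloud/dialogflow Node.js client; the "person" entity type name and the alias mapping are placeholders for your own data.

```typescript
import {SessionEntityTypesClient} from '@google-cloud/dialogflow';

// Override the agent's "person" entity for this session only, so that
// recognition is biased toward the names that actually matched the query.
async function setMatchingNames(projectId: string, sessionId: string, names: string[]): Promise<void> {
  const client = new SessionEntityTypesClient();
  await client.createSessionEntityType({
    parent: client.projectAgentSessionPath(projectId, sessionId),
    sessionEntityType: {
      // Must match the display name of a Developer Entity Type in the agent.
      name: client.projectAgentSessionEntityTypePath(projectId, sessionId, 'person'),
      entityOverrideMode: 'ENTITY_OVERRIDE_MODE_OVERRIDE',
      entities: names.map(name => ({
        value: name,
        // Synonyms act as aliases, e.g. mapping the misrecognized
        // "John Please" back to "John Pleis".
        synonyms: name === 'John Pleis' ? [name, 'John Please'] : [name],
      })),
    },
  });
}
```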
Related
I am new to Google Actions and I am trying to implement a Google Action for my aquarium shop app.
I need to respond to queries about where delivery is available,
so I added the available city details in Training Phrases. My problem is that if anyone asks with any country name, it responds that delivery is available in $geo-country. I need it to reply "Sorry, we don't provide delivery in $geo-country" whenever $geo-country is not India. How do I do this?
Making an else case through the Dialogflow UI isn't easy. The simplest way to show different results for certain values is to use fulfillment. With fulfillment you can handle an intent's interaction through code. For small projects, Dialogflow provides an inline code editor in which you can put an if statement that returns a different response for this intent.
An example of how to set up an intent that works with parameter input using fulfillment can be found here.
Using the inline editor you can write logic that checks whether the user mentioned India as a parameter for your intent and then changes the response to whatever you want.
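As a rough sketch, assuming the dialogflow-fulfillment library that the inline editor scaffolds, and a hypothetical intent named "delivery.check" with a "geo-country" parameter, the handler could look like this:

```typescript
import {WebhookClient} from 'dialogflow-fulfillment';

// Respond differently depending on whether the matched country is India.
function deliveryCheck(agent: WebhookClient): void {
  const country = agent.parameters['geo-country'];
  if (country === 'India') {
    agent.add(`Yes, we deliver in ${country}.`);
  } else {
    agent.add(`Sorry, we don't provide delivery in ${country}.`);
  }
}

// The inline editor's template then maps the intent name to the handler:
// const intentMap = new Map();
// intentMap.set('delivery.check', deliveryCheck);
// agent.handleRequest(intentMap);
```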
The best way I found to create an if/else type of response in Dialogflow is by using #sys.any, especially if you already have an entity defined. Just create, for example:
Phrase: "Do you deliver in Canada?" and mark Canada with your entity, so all of the values in that entity will be matched as valid delivery countries.
Then create another phrase: "Do you deliver in India?" and mark India with the #sys.any entity, indicating that any value outside of your entity's values will be a non-valid parameter.
Then create two text responses. One will say "Yes, we deliver in $parameter-name" and the other will say "No, we don't deliver to $parameter-name".
$parameter-name = the name you want to use for your variable.
Hope this helps.
I've developed an Alexa skill and now I am in the process of porting it over to a Google Action. At the center of my Alexa skill, I use the AMAZON.SearchQuery slot type to capture free-form text. Is there a similar entity/parameter type for Google Actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
Afterwards, it would look something like this.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intents would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided or, from the list of Intents, select the three-dot menu next to the Create button and then select "Create Fallback Intent".
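For instance, here is a minimal sketch of reading the raw utterance inside a Fallback Intent handler, assuming the dialogflow-fulfillment library, where agent.query carries the original text of what the user said:

```typescript
import {WebhookClient} from 'dialogflow-fulfillment';

// Fallback handler: echo back whatever the user said.
function fallback(agent: WebhookClient): void {
  const said = agent.query; // e.g. "hello everyone my name is Corey"
  agent.add(`You said: ${said}`);
}
```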
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
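As a sketch of that flow, assuming the actions-on-google Node.js library, an Action SDK webhook might look like this; the welcome prompt is a placeholder.

```typescript
import {actionssdk} from 'actions-on-google';

const app = actionssdk();

// MAIN handles the invocation; TEXT carries every later utterance.
app.intent('actions.intent.MAIN', conv => {
  conv.ask('Hi! Say anything and I will process it.');
});

app.intent('actions.intent.TEXT', (conv, input) => {
  // `input` is the raw speech-to-text result; run your own NLP/NLU on it.
  conv.ask(`You said: ${input}`);
});
```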
Most of the platform-based ASR systems are built on three main components:
1. Intent - where all of the logic is written
2. Entity - the values the intent works on
3. Response - what the user hears after all the processing is done
There is another important component called the webhook, which is used to interact with an external API.
The basic functionality is the same across all of the platforms. I have already used Dialogflow (developed by Google; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember one thing: to get precise results, providing proper training phrases is very important, as the output hugely depends on the sample input.
I am writing you to ask a question about Dialogflow fulfillments.
I am trying to create an agent for Google Home and my backend is basically a web hook implemented in TypeScript.
In the conversation that I designed, the user requests that the agent perform an action, providing a category as a parameter. Now, the set of possible categories can vary over time, so I am using the entity type #sys.any to detect the parameter.
My problem is that, when in the fulfillment I try to identify the specific category on which the agent needs to take action, the requested parameter may match multiple categories, so I need a follow-up intent to ask the user to clarify which category it actually wants to select.
E.g. the conversation could be the following:
Agent: 'Welcome.'
User: 'Do action on **category**'
Agent: 'I have found **categoryA**, **categoryB** and **categoryC**. Please specify which one you want to select.'
User: 'Select the second || Select **categoryB**'
Agent: 'Great, action performed on **categoryB**'
Now, I was able to build this conversation using follow-up events and contexts: for example, I created two follow-up events, one that detects numbers and another that detects text, so the user is routed to one or the other depending on what they say (if the user says "The first", a number is detected, and in the backend I cycle through the categories, selecting the one associated with that index; I do a similar operation in a different intent if the user says "categoryX").
What I want to understand is: what is the proper way to achieve that kind of conversation through the Node.js fulfillment API?
Thank you for any help.
From your description, you've done precisely the right thing (although you don't need follow-up intents).
When you reply with the options the user has, you include a Context that contains the array of possible results. You then create Intents that have this as an Input Context and match either the index of the array (let's call this the match.index Intent) or the name (the match.name Intent).
In your webhook, the match.index Intent would determine which category was actually chosen, and then call a function that takes care of that category. Similarly, the webhook for match.name would take the parameter with the name and call the same function to take care of that category.
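Put together, a minimal sketch of that flow using the dialogflow-fulfillment Node.js library might look like the following; the context name "awaiting_category", the parameter names, and the handler names are all placeholders.

```typescript
import {WebhookClient} from 'dialogflow-fulfillment';

// Reply with the options and stash the candidates in an outgoing context.
function askToDisambiguate(agent: WebhookClient, candidates: string[]): void {
  agent.context.set({name: 'awaiting_category', lifespan: 2, parameters: {candidates}});
  agent.add(`I have found ${candidates.join(', ')}. Please specify which one you want to select.`);
}

// match.index handler: the user answered with a position ("the second").
function matchIndex(agent: WebhookClient): void {
  const ctx = agent.context.get('awaiting_category');
  const candidates: string[] = (ctx && ctx.parameters && ctx.parameters.candidates) || [];
  const index = Number(agent.parameters['number']) - 1; // "the second" -> 1
  performAction(agent, candidates[index]);
}

// match.name handler: the user answered with the category's name.
function matchName(agent: WebhookClient): void {
  performAction(agent, String(agent.parameters['category']));
}

// Both handlers converge on the same function to act on the category.
function performAction(agent: WebhookClient, category: string): void {
  agent.add(`Great, action performed on ${category}`);
}
```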
I have been reading about Dialog Flow and there is one thing that is still unclear for me. I'll try to give an example.
I want to implement a conversation like the following:
User: Hello Google, what are some interesting cities?
Bot: Hello there! Sydney, New York and Berlin are nice.
User: Could you tell more about the second city?
Bot: Sure. New York is amazing. In New York, you can ...
As you see, I am building a data context. After the first question, we should remember that we answered Sydney, New York and Berlin, so we understand what the second city actually means in the second question.
Should we store this data in the webhook service or is this stored in a context in Dialog Flow? If we have to store such data in the webhook service, how can we distinguish between different ongoing conversations?
Storing it in a Dialogflow Context is an ideal solution - this is exactly what Contexts were made for! You phrased your question using the same term, and this is no coincidence.
Conceptually, you might do this with a setup like this:
User: What are some interesting cities?
Dialogflow sees no contexts and matches an Intent asking for cities.
Agent replies: Sydney, New York, and Berlin are nice.
Agent sets context "cities" with parameter "cities" -> "Sydney, New York, Berlin"
User: Tell me more about the second one?
Dialogflow has an Intent that expects an incoming context of "cities" with a text pattern like "Tell me more about the (number index) one?" It sends the request to that Intent along with the currently active contexts.
Agent gets a parameter with the index and the context "cities". It looks up the context's parameter, turns the string into an array, and gets the city based on the index.
Agent replies: New York is a fun place to visit!
Agent sets context "city" with parameter "current" -> "New York"
User: Tell me more!
Dialogflow matches this phrase, sees that the "city" context is still active, and sends it to an Intent that reports more.
Agent says: More awesome stuff about New York.
User: Tell me about that first city instead.
Dialogflow matches it against the same intent as before.
Agent says: Sydney is pretty cool.
Agent changes the "city" context so the parameter "current" -> "Sydney" and "previous" -> "New York".
You can now create other intents that handle phrases like "Compare these two" or "tell me more about the other one".
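As an illustration of the webhook side of this walkthrough, here is a minimal sketch assuming the dialogflow-fulfillment library; the handler names, parameter names, and context lifespans are placeholders.

```typescript
import {WebhookClient} from 'dialogflow-fulfillment';

// Handler for the "interesting cities" Intent.
function listCities(agent: WebhookClient): void {
  const cities = ['Sydney', 'New York', 'Berlin'];
  // Remember the list for follow-up questions like "the second one".
  agent.context.set({name: 'cities', lifespan: 5, parameters: {cities}});
  agent.add(`Hello there! ${cities.join(', ')} are nice.`);
}

// Handler for the Intent with input context "cities" that matches
// "Tell me more about the (number index) one?".
function cityByIndex(agent: WebhookClient): void {
  const ctx = agent.context.get('cities');
  const cities: string[] = (ctx && ctx.parameters && ctx.parameters.cities) || [];
  const index = Number(agent.parameters['index']) - 1; // "second" -> 1
  const city = cities[index];
  agent.context.set({name: 'city', lifespan: 5, parameters: {current: city}});
  agent.add(`${city} is amazing. In ${city}, you can ...`);
}
```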
Update
This setup strikes a good balance between what Dialogflow does well (parse messages and determine the current state of the conversation) and what your webhook does well (determine the best answers to those questions).
You could probably do much of that inside Dialogflow, but it would get very messy very quickly. You would need to create multiple Intents to handle the results for each value individually, which doesn't scale. You'd also need to create a Context for each city (so you'd have a "city_ny" and a "city_sydney" Context), since you can only match on the presence of a Context, not on the parameters it might have.
Using the webhook (even the built-in fulfillment system that we now have) will likely work much better.
I am building a google-assistant application with api.ai that delivers data that has been aggregated over a date-period via a webhook.
It is common for people to ask for date periods using the word "since", for instance:
"What is the data since last monday" (tuesday - now)
or the even trickier:
"What is the data since last year". (ambiguous reference to date-period)
Can api.ai parse these date-periods, or is it necessary to identify if the intent request is of a special "relative" type and then construct the date-period manually?
You will probably want to use something like the #sys.date-period pre-defined entity.
For example, if you create an Intent with a "User says" with parameters such as:
and a response:
and then enter in some queries like:
These might not be exactly what you need, so you may need to craft more of your own. If so, check out the #sys.date pre-defined entity, which may do some of the work for you, and the complete list at https://docs.api.ai/docs/concept-entities#section-date-and-time
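On the webhook side, here is a minimal sketch of reading the matched period, assuming the dialogflow-fulfillment library and a parameter named "date-period"; the two value shapes handled below (a "start/end" string in v1, an object with startDate and endDate in v2) are the formats I'd expect, so verify them against your own agent version.

```typescript
import {WebhookClient} from 'dialogflow-fulfillment';

// Aggregate data over the period matched by #sys.date-period.
function dataSince(agent: WebhookClient): void {
  const period = agent.parameters['date-period'];
  let start: Date;
  let end: Date;
  if (typeof period === 'string') {
    // e.g. "2017-01-02/2017-01-09"
    const [s, e] = period.split('/');
    start = new Date(s);
    end = new Date(e);
  } else {
    // e.g. {startDate: '2017-01-02T12:00:00Z', endDate: '2017-01-09T12:00:00Z'}
    start = new Date(period.startDate);
    end = new Date(period.endDate);
  }
  agent.add(`Aggregating data from ${start.toDateString()} to ${end.toDateString()}.`);
}
```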