I have created a bot using Dialogflow (api.ai) and integrated it with Facebook Messenger. I want to get parameter values from the user, like city and date (today, tomorrow), using Messenger's quick reply feature, where the user is presented with select-box-like options and can tap one of them. The required parameter then receives the tapped value, saving the user from typing it manually.
I cannot find anywhere in the documentation a way to fill parameter values (slots) using quick replies. There is an option to add quick replies in the Response section, but the response is only sent on fulfillment, and if I take user input via the response, I have to create another follow-up intent to process the user's reply, because the current intent is already fulfilled after the response.
If I add quick replies in the Response section, I have to create multiple levels of follow-up intents. For example: I take the city input in one intent and give the user two options (say New York and Delhi). Then I have to create two follow-up intents, one for handling each reply (New York and Delhi), and for each of those follow-up intents I have to create further follow-up intents to collect more parameter inputs.
This can get pretty complex as more levels are added! Amazon Lex has a feature for filling slots using quick replies. Can't I just fill parameter values directly from quick replies, as in Lex?
You don't have to go that far. There is a simpler way, using entities and prompts in Dialogflow. The workflow can be: Weather (intent) -> quick reply (New York / Delhi) -> City (intent, using an entity here) -> quick reply (Today / Tomorrow) -> separate intents for today and tomorrow, since they have different responses. You don't need to create different intents unless you have different responses. The "User says" phrases can carry different parameters, for which you can also define different prompts. This again reduces the complexity of creating follow-up intents. Let me know if you need more explanation on this.
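To make the quick replies themselves work with this flow, the key detail is that a tapped Messenger quick reply arrives at Dialogflow as the plain text of its title, so it can match a city entity (or fill a required parameter) in the next intent without per-option follow-up intents. Below is a rough sketch of sending such quick replies from a fulfillment webhook; the Express setup, the @city values, and the exact custom-payload shape are assumptions to double-check against the Messenger integration docs, not part of the original answer.

```typescript
// Hypothetical sketch: return Facebook Messenger quick replies from a
// Dialogflow ES webhook (Express). A tapped reply is delivered back to
// Dialogflow as the plain text of its title, so "New York" / "Delhi" can be
// matched by a @city entity in the next intent -- no per-option follow-ups.
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhook", (req, res) => {
  res.json({
    fulfillmentText: "Which city do you want the weather for?",
    // Platform-specific custom payload for the Facebook Messenger integration.
    payload: {
      facebook: {
        text: "Which city do you want the weather for?",
        quick_replies: [
          { content_type: "text", title: "New York", payload: "New York" },
          { content_type: "text", title: "Delhi", payload: "Delhi" },
        ],
      },
    },
  });
});

app.listen(3000);
```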
I have a custom Alexa Skill, similar to a Q&A skill, in which I ask the user for a response (say option_1, option_2, option_3), but when the user responds with one of the offered options a different intent (say ruleIntent) is triggered, because the option text is somewhat similar to that intent's utterances.
I think it is not a good design if more than one intent handler can be triggered by the same (or a similar) phrase, but I don't know the text of the options in advance to avoid this (or what the user is going to say as the answer to the asked question). If I could somehow maintain the context of the user's response, I think that would be one solution.
Example:
1. User: Start a Science test. {invokes testIntent}
2. Alexa: Okay, but before starting, do you want to know the rules? Please answer Yes or No. {response generated from testIntentHandler}
3. User: Yes {invokes many intents}
In line 3, even if I hard-code this to an intent (say ruleIntent), what happens if some question has Yes or No among its own options? How will I differentiate that and map it to the answer to the question that was asked?
One way to deal with this is to track the state using persistent or session attributes.
You can check that state in the canHandle method to route the user to the appropriate intent.
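As a hedged illustration of that approach, here is a minimal ask-sdk-core (Node.js) sketch; the intent names and the awaitingRulesAnswer flag are made up for the example, not taken from the original skill:

```typescript
// A minimal ask-sdk-core sketch. "testIntent" and the awaitingRulesAnswer
// session flag are illustrative names, not from the original skill.
import * as Alexa from 'ask-sdk-core';

const TestIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'testIntent';
  },
  handle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    attrs.awaitingRulesAnswer = true; // remember what we just asked
    handlerInput.attributesManager.setSessionAttributes(attrs);
    return handlerInput.responseBuilder
      .speak('Okay. Before starting, do you want to know the rules? Yes or no?')
      .reprompt('Do you want to know the rules?')
      .getResponse();
  },
};

const RulesAnswerHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    // Only claim the request when we actually asked the Yes/No question.
    // Add 'ruleIntent' to the list if your model routes a bare Yes/No there.
    return attrs.awaitingRulesAnswer === true
      && Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && ['AMAZON.YesIntent', 'AMAZON.NoIntent']
           .includes(Alexa.getIntentName(handlerInput.requestEnvelope));
  },
  handle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    attrs.awaitingRulesAnswer = false; // clear the state once handled
    handlerInput.attributesManager.setSessionAttributes(attrs);
    const saidYes =
      Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent';
    return handlerInput.responseBuilder
      .speak(saidYes ? 'Here are the rules...' : 'Alright, starting the test.')
      .getResponse();
  },
};

// Registering RulesAnswerHandler first means it wins whenever its state check passes.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(RulesAnswerHandler, TestIntentHandler)
  .lambda();
```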
One way to solve this could be to use Dialogs. You can use auto delegation for dialogs:
"Enable auto delegation, either for the entire skill or for specific intents. In this case, Alexa completes all of the dialog steps based on your dialog model. Alexa sends your skill a single IntentRequest when the dialog is complete."
See: Delegate the Dialog to Alexa.
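As a rough sketch of what the skill code can look like once delegation is enabled (ask-sdk-core; "testIntent" and its "subject" slot are illustrative, not from the original skill), the handler simply waits for the completed IntentRequest:

```typescript
// A minimal ask-sdk-core sketch; "testIntent" and its "subject" slot are
// illustrative. The required slots and auto delegation are configured in the
// interaction model, so the handler only deals with the completed request.
import * as Alexa from 'ask-sdk-core';
import { IntentRequest } from 'ask-sdk-model';

const CompletedTestIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'testIntent'
      && request.dialogState === 'COMPLETED';
  },
  handle(handlerInput) {
    const request = handlerInput.requestEnvelope.request as IntentRequest;
    const subject = request.intent.slots?.subject?.value; // e.g. "science"
    return handlerInput.responseBuilder
      .speak(`Starting the ${subject} test.`)
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(CompletedTestIntentHandler)
  .lambda();
```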
I have a basic lead-gen bot with 2 services (in 2 different intents) for which I am collecting leads. In both of them I collect the name, email, and phone number, and I have ticked the required checkbox for those parameters.
It works as expected when I submit a lead for a single service. However, if in the same conversation I also want to go for the second service, the bot again asks for the name, email, and phone number, which it already has from my interaction for the first service. How do I make sure that it doesn't ask for details it already has?
I also don't mind handling it programmatically using fulfillment, but I could not find any documentation.
Any help is highly appreciated
You can use user storage (https://developers.google.com/actions/assistant/save-data),
or alternatively you can try to link the parameters of the two intents to the same context parameters. Set your parameter's default value like this: #context_name.param_name
I was able to do this by setting an output context in the first intent & using the input context in the second intent.
The trick was to assign a default value to the parameters in the second intent, in the form #context_name.param.
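If you'd rather do it from fulfillment (as the question mentions), the same idea can be expressed in code: store the collected details in a long-lived output context after the first service and read them back in the second. Below is a hedged sketch using the dialogflow-fulfillment Node library; the intent names, the "lead-details" context, and the parameter names are all assumptions for illustration.

```typescript
// A sketch with the dialogflow-fulfillment Node library. The intent names,
// the "lead-details" context, and the parameter names are illustrative.
import { WebhookClient } from 'dialogflow-fulfillment';
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  // First service: store the collected details in a long-lived output context.
  function firstService(agent: WebhookClient): void {
    const { name, email, phone } = agent.parameters;
    agent.context.set({
      name: 'lead-details',
      lifespan: 50, // keep it around for the rest of the conversation
      parameters: { name, email, phone },
    });
    agent.add(`Thanks ${name}, we'll contact you about the first service.`);
  }

  // Second service: reuse whatever the context already holds and only ask for
  // what's missing. (Assumes these parameters are NOT marked required here,
  // so Dialogflow doesn't prompt before the webhook gets a chance to decide.)
  function secondService(agent: WebhookClient): void {
    const saved = agent.context.get('lead-details');
    const name = agent.parameters.name || saved?.parameters?.name;
    const email = agent.parameters.email || saved?.parameters?.email;
    const phone = agent.parameters.phone || saved?.parameters?.phone;

    if (!name || !email || !phone) {
      agent.add('Could you share your name, email and phone number?');
      return;
    }
    agent.add(`Great ${name}, you're signed up for the second service as well.`);
  }

  const intentMap = new Map();
  intentMap.set('service.one', firstService);
  intentMap.set('service.two', secondService);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```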
I've developed an Alexa skill and now I am in the process of porting it over to a Google Action. At the center of my Alexa skill, I use the AMAZON.SearchQuery slot type to capture free-form text. Is there a similar entity/parameter type for Google Actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The closest equivalent entity type in Dialogflow is the built-in type #sys.any. To use it, create an Intent, give it a sample phrase, and select the part of the text that represents what you want captured in the parameter. Then select the #sys.any entity type.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intent matches. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or, from the list of Intents, open the three-dot menu next to the Create button and select "Create Fallback Intent".
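For reference, the raw text is available to the webhook on the request (queryResult.queryText, exposed as agent.query by the dialogflow-fulfillment library). A minimal sketch, with the intent map entry assumed to point at the Default Fallback Intent:

```typescript
// A minimal dialogflow-fulfillment sketch: the Default Fallback Intent handler
// gets the raw utterance via agent.query (queryResult.queryText in the request).
import { WebhookClient } from 'dialogflow-fulfillment';
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  function fallback(agent: WebhookClient): void {
    const rawText = agent.query; // e.g. "hello everyone my name is Corey"
    agent.add(`You said: ${rawText}`);
  }

  const intentMap = new Map();
  intentMap.set('Default Fallback Intent', fallback);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```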
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
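A minimal sketch of that flow with the actions-on-google library (the Express hosting and the echo response are just illustrative):

```typescript
// A minimal Actions SDK sketch with the actions-on-google library: every turn
// after the welcome arrives as actions.intent.TEXT with the raw transcription.
import { actionssdk } from 'actions-on-google';
import express from 'express';

const app = actionssdk();

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi! Say anything and I will echo it back.');
});

app.intent('actions.intent.TEXT', (conv, input) => {
  conv.ask(`You said: ${input}`); // e.g. "hello everyone my name is Corey"
});

express().use(express.json()).post('/fulfillment', app).listen(3000);
```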
Most platform-based conversational AI (NLU) systems are built on three main concepts:
1. Intent - where the conversational logic is defined
2. Entity - the data the intent operates on
3. Response - what the user ultimately hears after everything is processed
There is another important piece, the webhook, which is used to interact with external APIs.
The basic functionality is the same across platforms; I have used Dialogflow (developed by Google, and it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that to get precise results, providing good training phrases is very important, as the output depends heavily on the sample input.
I am writing you to ask a question about Dialogflow fulfillments.
I am trying to create an agent for Google Home and my backend is basically a web hook implemented in TypeScript.
In the conversation that I designed, the user asks the agent to perform an action, providing a category as a parameter. The set of possible categories can vary over time, so I am using the entity type #sys.any to detect the parameter.
My problem is that, when the fulfillment tries to identify the specific category the agent needs to act on, the requested parameter may match multiple categories, so I need a follow-up step to ask the user to clarify which category they actually want to select.
E.g. the conversation could be the following:
Agent: 'Welcome.'
User: 'Do action on **category**'
Agent: 'I have found **categoryA**, **categoryB** and **categoryC**. Please specify which one you want to select.'
User: 'Select the second || Select **categoryB**'
Agent: 'Great, action performed on **categoryB**'
Now, I was able to build this conversation using follow-up intents and contexts: for example, I created two follow-up intents, one that detects numbers and another that detects text, so the user is routed to one or the other depending on what they say (if the user says 'the first', a number is detected and in the backend I loop over the categories, selecting the one associated with that index; I do a similar thing if the user says 'categoryX', but inside a different intent).
What I want to understand is: what is the proper way to achieve that kind of conversation through the Node.js fulfillment API?
Thank you for any help.
From your description, you've done precisely the right thing (although you don't need follow-up intents).
When you reply with the options the user has, you include a Context that contains the array of possible results. You then create Intents that have this as an Input Context and match either by the index into the array (let's call this the match.index Intent) or by name (the match.name Intent).
In your webhook, the match.index Intent would determine which category was actually chosen, and then call a function that takes care of that category. Similarly, the webhook for match.name would take the parameter with the name and call the same function to take care of that category.
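As a hedged sketch of that webhook (dialogflow-fulfillment library; the context name, the "index"/"name" parameters, and the stubbed category lookup are assumptions for illustration):

```typescript
// A sketch with the dialogflow-fulfillment library. The context name, the
// "index"/"name" parameters and the stubbed category lookup are illustrative.
import { WebhookClient } from 'dialogflow-fulfillment';
import express from 'express';

function performAction(category: string, agent: WebhookClient): void {
  agent.add(`Great, action performed on ${category}.`);
}

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  // First turn: the user's text matched several categories, so store the
  // candidates in a context and ask which one they meant.
  function action(agent: WebhookClient): void {
    const candidates = ['categoryA', 'categoryB', 'categoryC']; // stub lookup
    agent.context.set({
      name: 'awaiting-category-choice',
      lifespan: 2,
      parameters: { candidates },
    });
    agent.add(`I have found ${candidates.join(', ')}. Which one do you want?`);
  }

  // match.index: input context 'awaiting-category-choice' plus an ordinal/number
  // parameter called "index" ("Select the second").
  function matchIndex(agent: WebhookClient): void {
    const saved = agent.context.get('awaiting-category-choice');
    const candidates: string[] = saved?.parameters?.candidates || [];
    const chosen = candidates[Number(agent.parameters.index) - 1]; // spoken index is 1-based
    if (!chosen) {
      agent.add('Please pick one of the categories I listed.');
      return;
    }
    performAction(chosen, agent);
  }

  // match.name: same input context plus a "name" parameter ("Select categoryB").
  function matchName(agent: WebhookClient): void {
    const saved = agent.context.get('awaiting-category-choice');
    const candidates: string[] = saved?.parameters?.candidates || [];
    const chosen = candidates.find(
      (c) => c.toLowerCase() === String(agent.parameters.name).toLowerCase()
    );
    if (!chosen) {
      agent.add('Please pick one of the categories I listed.');
      return;
    }
    performAction(chosen, agent);
  }

  const intentMap = new Map();
  intentMap.set('action', action);
  intentMap.set('match.index', matchIndex);
  intentMap.set('match.name', matchName);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```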
Hi, I am creating a slot-filling chatbot where I would like to ask questions that are as open as possible at the beginning, to make the flow as close to a normal conversation as I can.
How can I achieve this in terms of intents? Should I create three separate intents for cases 1, 2, and 3, and add context flows for the 2nd and 3rd? Please help.
For a case like this, you can handle them with a single Intent that has prompts for any parameters/slots the user doesn't address.
First, we'll need a simple vehicle Entity type that lists the possible vehicle values (and any synonyms).
Once we have that, we can create an Intent and give it a few sample training phrases that use the entity.
We then need to do a few additional things for the parameters in the Intent:
We need to mark the parameters as Required.
We need to give some Prompts for each parameter.
With these, the Intent won't be complete until the user has responded to each prompt to fill the parameters. Once the user has given all the required values, it will call your fulfillment.
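On the fulfillment side this means your handler only runs once the slots are filled. A minimal sketch with the dialogflow-fulfillment library; the intent name and the "vehicle"/"destination" parameter names are illustrative:

```typescript
// A minimal fulfillment sketch (dialogflow-fulfillment library). The intent
// name and the "vehicle"/"destination" parameters are illustrative; Dialogflow
// only calls this once every required parameter has been prompted for and filled.
import { WebhookClient } from 'dialogflow-fulfillment';
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  function bookTrip(agent: WebhookClient): void {
    const { vehicle, destination } = agent.parameters; // filled via the prompts
    agent.add(`Booking a ${vehicle} to ${destination}.`);
  }

  const intentMap = new Map();
  intentMap.set('book.trip', bookTrip);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```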