How to build a Dialogflow agent with open questions? - dialogflow-es

I'm new to Dialogflow and could use your help building an open-ended question.
I'm doing a customer satisfaction survey and I want to end it with a question like "Lastly, I'd like to know if you have any feedback on your experience with this process".
Here the user can write anything; I then want to thank them and end the conversation.
My flow is the following:
The open question is asked in the responses of these intents:
Q3_opinionservice -custom - yes
Q3_opinionservice -custom - no
Then the bot should receive the user's free-text answer, thank them, and end the conversation.

You may try the approach below.
In this approach, you add a custom follow-up intent to each of your Q3_opinionservice -custom - yes and Q3_opinionservice -custom - no intents.
The custom follow-up intent only needs one training phrase (any random sentence you want); annotate it with the sys.any entity, which matches any non-empty input, as mentioned in the Dialogflow System Entities documentation.
My sample custom follow-up intent:
My sample conversation output:
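If you also want to handle that follow-up intent in fulfillment (for example, to log the feedback before thanking the user), a minimal webhook sketch could look like the following. This is an illustration only: Flask, the intent-name check, and the parameter name feedback are my assumptions, not part of the original setup.

```python
# Minimal Dialogflow ES webhook sketch (Flask). Assumptions: the follow-up
# intent names start with "Q3_opinionservice" and the sys.any parameter in
# them is called "feedback" -- adjust both to match your agent.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    query_result = body["queryResult"]
    intent_name = query_result["intent"]["displayName"]

    if intent_name.startswith("Q3_opinionservice"):
        # The sys.any parameter holds whatever free-form text the user wrote.
        feedback = query_result["parameters"].get("feedback", "")
        app.logger.info("Survey feedback: %s", feedback)
        return jsonify({"fulfillmentText": "Thank you for your feedback! Goodbye."})

    return jsonify({"fulfillmentText": "Sorry, I didn't get that."})
```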

Related

How to ask the user to add new element in DialogFlow?

I am trying to build a bot in Dialogflow.
Here is what I need:
customer: Hello
bot: hello, what's your name?
customer: John
bot: Please enter the first element.
customer: element1
bot: Did you finish?
customer: No
bot: Please enter the second element.
....
Please advise how I can implement this. I tried creating an intent with an action and prompt, but the agent doesn't ask me "Please enter the first element".
I also need "first", "second", ... to be a counter that updates with each iteration/question.
Can you please advise where I can find a guide on how to achieve this kind of task?
So far I have created an agent and have been playing with intents.
One way would be to write some code for fulfillment (using a webhook or even the inline editor), analyze the incoming messages in your code, and generate the answer; see the sketch below.
If you don't want to write any code, it should also be possible to achieve this using Dialogflow's contexts to store information and follow-up intents to keep asking for elements. But if you want to ask the user for many elements, it can become hard to maintain in Dialogflow. I created and tested a sample bot this way with the following intents:
Please note that I removed the default Welcome intent so it would not interfere with the custom "hello" intent.
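Combining both ideas, here is a sketch of what the fulfillment logic could look like: the elements collected so far live in a context, and the ordinal in the prompt is derived from how many elements we already have. The context name collect_elements and the intent name AddElement are assumptions; wire the function into any webhook framework that returns the dict as JSON.

```python
# Sketch of fulfillment logic for the element-collecting loop (Dialogflow ES
# webhook format). "collect_elements" and "AddElement" are assumed names.
def handle_request(body: dict) -> dict:
    session = body["session"]          # "projects/<p>/agent/sessions/<s>"
    query = body["queryResult"]
    ctx_name = f"{session}/contexts/collect_elements"

    # Recover previously collected elements from the active context, if any.
    elements = []
    for ctx in query.get("outputContexts", []):
        if ctx["name"] == ctx_name:
            elements = ctx.get("parameters", {}).get("elements", [])

    # Treat the raw user utterance as the next element.
    if query["intent"]["displayName"] == "AddElement":
        elements = elements + [query["queryText"]]

    # The counter from the question: derive the ordinal from the count so far.
    ordinals = ["first", "second", "third", "fourth", "fifth"]
    nth = ordinals[min(len(elements), len(ordinals) - 1)]

    return {
        "fulfillmentText": f"Please enter the {nth} element.",
        "outputContexts": [{
            "name": ctx_name,
            "lifespanCount": 5,
            "parameters": {"elements": elements},
        }],
    }
```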

(Alexa) Is it possible to get the response in the same Intent Handler?

I have a custom Alexa skill, similar to a Q&A skill, in which I ask the user for a response (say option_1, option_2, option_3), but when the user responds with one of the offered options, a different intent (say ruleIntent) is triggered because the option text is somewhat similar to its utterances.
I think it is not good design if more than one IntentHandler can be triggered by the same (or a similar) phrase, but I don't know the text of the options in advance (or what the user is going to say as the answer to the question), so I can't avoid this. If I could somehow maintain the context of the user's response, I think that would be one solution.
Example:
1. User: Start a Science test. {invokes testIntent}
2. Alexa: Okay, but before starting, do you want to know the rules? Please answer Yes or No. {response generated from testIntentHandler}
3. User: Yes {invokes many intents}
In step 3, even if I hard-code Yes to an intent (say ruleIntent), what happens when some question has Yes or No among its answer options? How will I differentiate that and map it to the response to the asked question?
One way to deal with this is to track the state using persistent or session attributes.
You can then check the state in the canHandle method to route the user to the appropriate intent handler, as in the sketch below.
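A sketch of that check with the ASK SDK for Python follows; the state value awaiting_rules_answer and the handler/speech text are assumptions, not fixed API names.

```python
# The handler claims AMAZON.YesIntent only while our own state flag says we
# just asked the rules question, so a "Yes" elsewhere won't land here.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name

class RulesYesHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        attrs = handler_input.attributes_manager.session_attributes
        return (is_intent_name("AMAZON.YesIntent")(handler_input)
                and attrs.get("state") == "awaiting_rules_answer")

    def handle(self, handler_input: HandlerInput):
        attrs = handler_input.attributes_manager.session_attributes
        attrs["state"] = "in_test"  # advance the conversation state
        return (handler_input.response_builder
                .speak("Rule one: answer within ten seconds. First question...")
                .ask("Here is your first question.")
                .response)
```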
One way to solve this could be to use Dialogs. You can use auto delegation for dialogs:
Enable auto delegation, either for the entire skill or for specific intents. In this case, Alexa completes all of the dialog steps based on your dialog model. Alexa sends your skill a single IntentRequest when the dialog is complete.
Delegate the Dialog to Alexa
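With auto delegation enabled, your handler only runs once the dialog is finished; a defensive sketch (ASK SDK for Python; the intent and slot names are assumptions) could look like this:

```python
# The handler fires for testIntent only when Alexa has completed the dialog,
# i.e. after the dialog model has collected the "wantsRules" slot.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import DialogState

class TestIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        request = handler_input.request_envelope.request
        return (is_intent_name("testIntent")(handler_input)
                and request.dialog_state == DialogState.COMPLETED)

    def handle(self, handler_input: HandlerInput):
        slots = handler_input.request_envelope.request.intent.slots
        wants_rules = (slots["wantsRules"].value or "").lower()
        speech = ("Rule one: ..." if wants_rules == "yes"
                  else "Okay, starting the test.")
        return handler_input.response_builder.speak(speech).ask(speech).response
```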

Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery

I've developed an Alexa skill and now I am in the process of porting it over to a Google Action. At the center of my Alexa skill, I use the AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for Google Actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
Afterwards, it would look something like this.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intent would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or from the list of Intents, select the three-dot menu next to the create button and then select "Create Fallback Intent".
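For illustration, inside a webhook the Fallback Intent request carries the user's full utterance in queryResult.queryText (field names follow the Dialogflow ES webhook format; the echoed reply is just a placeholder):

```python
# Sketch: pull the raw utterance out of a Fallback Intent webhook request.
def handle_fallback(body: dict) -> dict:
    query_result = body["queryResult"]
    if query_result["intent"].get("isFallback"):
        user_text = query_result["queryText"]  # everything the user said
        return {"fulfillmentText": f"You said: {user_text}"}
    return {"fulfillmentText": "Handled by a regular intent."}
```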
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Actions SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Actions SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
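As a sketch of that flow, assuming the v2 conversation-webhook JSON format, extracting the transcribed text from a TEXT intent request could look like this:

```python
# Sketch: pull the user's transcribed text out of a raw Actions SDK request.
# The surrounding web server and response building are up to you.
def extract_user_text(body: dict) -> str:
    for conv_input in body.get("inputs", []):
        if conv_input.get("intent") == "actions.intent.TEXT":
            raw_inputs = conv_input.get("rawInputs", [])
            if raw_inputs:
                return raw_inputs[0].get("query", "")
    return ""
```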
Most platform-based ASR systems are built on three main parameters:
1. Intent - where all the logic is written.
2. Entity - what the intent operates on.
3. Response - what the user hears after all processing is done.
There is another important parameter called a webhook, which is used to interact with external APIs.
The basic functionality is the same across platforms; I have used Dialogflow (developed by Google; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that to get precise results, providing proper training phrases is very important, since the output depends heavily on the sample input.

Change default message when assistant misunderstands user

I have created a Google Action which takes in three parameters. I have added training phrases for many word combinations, but sometimes it will not pick them up.
I set my input parameters in Dialogflow to number1, number2, and number3.
It seems that by default, if it misses a value, it will say: "what is $varName";
however, this could be misleading to users, since a bare prompt like "what is number3" may be unclear.
I'd like to edit this response to be a more descriptive message.
I hope this is clear enough - I can't really post any code since it all concerns the Dialogflow UI...
cheers!
If you want to add prompt variations for capturing parameters in an entity, follow the "adding prompt variations" steps explained here: add variations to the prompts, or handle it from the webhook by enabling slot filling for the webhook, as sketched below.
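As a sketch of the webhook route (assuming "Enable webhook call for slot filling" is turned on for the intent, and that the parameters are named number1..number3 as above), the webhook can return a more descriptive prompt for whichever required parameter is still missing:

```python
# Sketch: custom slot-filling prompts. While required parameters are missing,
# the returned fulfillmentText is used to prompt the user.
def slot_filling_response(query_result: dict) -> dict:
    prompts = {
        "number1": "First, which number should I start with?",
        "number2": "Thanks. What is the second number?",
        "number3": "Almost done. What is the third and final number?",
    }
    if not query_result.get("allRequiredParamsPresent"):
        params = query_result.get("parameters", {})
        for name, prompt in prompts.items():
            if not params.get(name):
                return {"fulfillmentText": prompt}
    return {"fulfillmentText": "Got all three numbers, thanks!"}
```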
If you want to ask a question when the agent did not understand the intent, then you can either use the Default Fallback Intent for a generic reply, or create a follow-up fallback intent for the intent you are targeting.

Dialogflow default response set to follow-up no

I'm new to Dialogflow and was trying to build a conversational chatbot. The following is the example I'm working with.
I created an intent "Q1" with question 1 as the user input. Later, I added follow-up yes and no intents for "Q1". When I test it, though it gives the correct answers for yes and no, I noticed that when I enter "thank you" after asking question 1, the matched intent is Q1-no. Is there an explanation why the default is Q1-no instead of small talk?
Your Dialogflow follow-up intent for no has the default user expression thanks but no. So when you enter "Thank you", Dialogflow matches it against the user expressions in your intents and computes how closely it matches the entered query. If that score is higher than the threshold value set in your agent's ML settings, the agent responds with that intent. The solution to your problem is either to disable ML for the no follow-up intent, or to remove the thanks but no user expression from that intent.
snap-1 to snap-3: screenshots of the ML settings and the follow-up intent.
snap-4: output showing the follow-up intent is no longer matched after removing the thanks but no user expression.
