Intent Capture and Authenticate (Design of Intents and Calling Webhook) - dialogflow-es

We are designing an app with the following intents:
Welcome Intent - A typical greeting message when the conversation starts.
Intent 1 - Authentication: the user is asked for certain credentials.
Intent 2 - The actual service, such as booking food for home delivery. This intent has a series of mandatory questions with prompts defined, whose answers are captured in parameter values.
Scenario 1 (the happy path):
User starts conversing; the agent asks for credentials.
User provides credentials; the agent invokes a webhook and verifies them (from Intent 1). If the user is not valid, it responds with a message about getting them registered/activated.
The agent then starts Intent 2 and collects values for its questions.
This is fine.
Scenario 2:
User starts conversing; the agent asks for credentials.
The user can always say something other than the credentials, i.e. a query that matches Intent 2.
How can we make sure that the series of questions within Intent 2 is not triggered unless Intent 1 has been completed (i.e. the user is authenticated)? From a user-experience standpoint, the solution should not ask the user all of Intent 2's questions, invoke the webhook for Intent 2, and only then tell them they are not authenticated; that would be a poor user experience.
How do we handle this design problem when configuring Dialogflow?

Dialogflow's context feature is meant to control which intents can be matched at which point in the conversation. You can set an output context of "loggedin" on intent1 and add an input context with the same value, "loggedin", to intent2.
After intent1 has been matched, a "loggedin" context is added to the conversation's state. intent2 can only be matched when the conversation carries the "loggedin" context, which ensures that intent2 is only matched after intent1 has been matched in the conversation.
More about context in this blog post: https://blog.dialogflow.com/post/how-contexts-and-followup-intents-work/
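If you verify the credentials in a webhook, you can also set the "loggedin" context from code only when verification succeeds, so intent2 stays unreachable for unauthenticated users. Below is a minimal sketch assuming the Node.js dialogflow-fulfillment library; the intent name ("intent1"), the credentials parameter, and the isValidUser() helper are illustrative assumptions, not part of the original answer.

```javascript
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function authenticate(agent) {
    const credentials = agent.parameters.credentials; // hypothetical parameter
    if (isValidUser(credentials)) {
      // Only add the "loggedin" context when verification succeeds, so that
      // intent2 (which declares "loggedin" as an input context) can be matched.
      agent.context.set({ name: 'loggedin', lifespan: 10 });
      agent.add('Thanks, you are verified. What would you like to order?');
    } else {
      agent.add('I could not verify you. Please get registered or activated first.');
    }
  }

  const intentMap = new Map();
  intentMap.set('intent1', authenticate);
  agent.handleRequest(intentMap);
};

// Placeholder for the real credential check against your backend.
function isValidUser(credentials) {
  return Boolean(credentials);
}
```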

Related

How to ask "Was this helpful?" in DialogFlow at the end of conversation after rendering the response from Intent

So I have a flow prepared.
User: I would like to book an appointment
Bot: Sure. Does 3pm work for you?
User: Yes
Bot: Great. Appointment has been set. (Response from Fulfillment)
Bot: Anything else you need help with? Yes | No (How to achieve this)
I have tried triggering a followupEvent, but that won't display any response until the chain of intents is complete.
When the followupEventInput parameter is set for a WebhookResponse, Dialogflow ignores the fulfillmentText, fulfillmentMessages, and payload fields. When Dialogflow receives a webhook response that includes an event, it immediately triggers the corresponding intent in which it was defined.
I have end intents ready with responses for Yes and No, but I need help triggering them.
An intent shouldn't be used as a step in your flow or be tied to a single response; it's intended to represent a category of phrases your user might say to complete a certain goal in your conversation. Since the "was this helpful" prompt isn't triggered by any user phrase, but rather serves as a cue for the user to continue the conversation, it shouldn't be a separate intent.
Having the "was this helpful" phrase available to multiple intents is a good choice so it can be used throughout your conversation, but I would recommend saving this phrase in a file, an API, or a CMS and retrieving the response via code.
I'm not a PHP developer, but I expect it to be along the lines of: responseService.getResponse("requestFeedbackPrompt");
This allows you to retrieve the "was this helpful" phrase throughout your code without making the mistake of creating a separate intent for it, which will cause problems later on with keeping state.
If you decide to go with a single intent for this, you will quickly find it difficult to keep track of context, state, and which step of the conversation you are in, as multiple intents will route through this generic intent.
What would you do if you needed a different variant of the "was this helpful" response? With the single-intent approach, you would end up creating an intent for each variation and would have to realign the conversation flow and state every time.
If you use the service, you just call responseService.getResponse("OtherFeedbackPrompt");
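To make that concrete, here is a rough Node.js sketch of the suggested response-service pattern (the answer mentions PHP, but the idea is the same). The prompt keys, the responseService object, and the handler name are illustrative assumptions; agent is a dialogflow-fulfillment WebhookClient as in the earlier sketch.

```javascript
// Prompts stored in one place; in practice this could be a file, an API or a CMS.
const prompts = {
  requestFeedbackPrompt: 'Anything else you need help with?',
  otherFeedbackPrompt: 'Was that what you were looking for?'
};

const responseService = {
  getResponse(key) {
    return prompts[key] || '';
  }
};

// In an intent handler, append the prompt to the fulfillment response
// instead of routing through a separate "was this helpful" intent.
function bookAppointmentHandler(agent) {
  agent.add('Great. Appointment has been set.');
  agent.add(responseService.getResponse('requestFeedbackPrompt'));
}
```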
Hi, I have something similar in one of my bots. I have taken a different approach to those mentioned.
My bot asks if there's anything it can help with at the end of an acknowledgement fulfilment.
The customer then has the option to respond with Yes or No.
Within the page that asks the question I have created routes.
One route for Yes and another for No.
The Yes route directs customers back to the point where they can start making selections. The No route provides a fulfilment to the customer and ends the session. I have used Yes and No intents for these.

Is there any possibility of triggering an intent without the help of training phrases?

I have created 5 intents in Dialogflow. After completion of the first intent, it should automatically go to the second intent without the use of training phrases. Is there any way to do that?
This probably isn't what you want to do. Remember that Intents capture what the user says or does and not how Dialogflow should respond.
If you want to do a series of things when the user says one thing, then you can do all those things in your fulfillment webhook. Your webhook is where you actually do something based on what the user has said, and this can be handled in one function call or several calls that you make from your Intent Handler.
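As a rough sketch of that idea (all function and parameter names here are assumptions), a single intent handler in the webhook can run the whole series of steps:

```javascript
// Hypothetical helpers standing in for what the follow-on intents would have done.
async function placeOrder(params) { return { id: 'A-123', items: params.items }; }
async function scheduleDelivery(order) { /* call your delivery system here */ }

// One intent handler does everything the "next" intents would otherwise cover.
async function orderIntentHandler(agent) {
  const order = await placeOrder(agent.parameters);
  await scheduleDelivery(order);
  agent.add(`Your order ${order.id} is booked and being scheduled for delivery.`);
}
```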
There are two possibilities: either you can use contexts, or, if you want to handle the sequence from your webhook service, you can use events.
For the webhook solution:
Give each intent a specific event and action.
In your webhook request you will receive the action of the matched intent, and you can trigger the next event based on the current action (see the sketch below).
For the context solution:
You can add follow-up intents to each of your intents (see Follow-up intents in the Dialogflow docs).
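A minimal sketch of the event-based chaining described above, assuming a plain Express webhook for Dialogflow ES v2; the action and event names are illustrative assumptions:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const action = req.body.queryResult.action; // action of the intent that was matched

  if (action === 'first.intent.action') {
    // Returning followupEventInput makes Dialogflow immediately trigger the
    // intent that declares this event, without any user utterance.
    return res.json({
      followupEventInput: {
        name: 'SECOND_INTENT_EVENT',
        languageCode: 'en-US'
      }
    });
  }

  return res.json({ fulfillmentText: 'Done.' });
});

app.listen(3000);
```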

Trying to create a follow-up intent to capture contact information

I want to create an intent for personal follow-up with anonymous visitors. I have created a "getfollowup" intent that triggers when a visitor asks for escalation, to speak to a manager, etc. I want to create yes/no follow-up intents and trigger a "getcontact" intent for a "yes" answer. The getcontact intent is created to capture @sys.given-name and @sys.email for slot filling. I'm having trouble getting the two intents to connect. Here's an example of how I'd like the conversation to flow:
...
Visitor: I need to speak to a manager
[getfollowup intent triggered]
Response: Sorry I haven't been able to help. Would you like me to have someone reach out to you?
Visitor: Yes
[getfollowup-yes context]
[need to trigger getcontact intent here...it is this transition that I can't figure out]
Response: Ok. First, may I have your name?
Visitor: John
Response: Thanks, John. May I have your email address?
Visitor: john@example.com
Response: Thanks for the info. Someone will reach out to you shortly.
In general, you don't "trigger" an Intent. Intents capture what the user says, and not what you do with that.
So the approach in your case would be that when the user says "yes", you simply prompt them for the information you want and set a Context indicating you want this information.
You can then create other Intents (such as your "getcontact" Intent) that take this as an Input Context and have the user providing the information you've prompted for.
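A rough sketch of that approach, assuming the dialogflow-fulfillment library; the intent names, context name, and parameter names are assumptions based on the flow above:

```javascript
const { WebhookClient } = require('dialogflow-fulfillment');

exports.fulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  // "yes" follow-up: prompt for details and set the context that
  // "getcontact" declares as its input context.
  function followupYes(agent) {
    agent.context.set({ name: 'awaiting_contact_info', lifespan: 2 });
    agent.add('Ok. First, may I have your name?');
  }

  // "getcontact" collects @sys.given-name and @sys.email via slot filling.
  function getContact(agent) {
    const { name, email } = agent.parameters;
    agent.add(`Thanks, ${name}. Someone will reach out to you at ${email} shortly.`);
  }

  const intentMap = new Map();
  intentMap.set('getfollowup - yes', followupYes);
  intentMap.set('getcontact', getContact);
  agent.handleRequest(intentMap);
};
```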

Manage Timeout with Actions on Google and Dialogflow

I'm trying to create a chatbot using Dialogflow with webhooks and Actions on Google.
I need to manage a timeout: when the end user has not used the chatbot for a configured amount of time, I need to exit the conversation without user interaction, with the same result as described here but without any input.
conversation-exits
I cannot find any information about this automatically triggered action. Any hint?
Is this possible?
The conversation-exits you are referring to are for exiting the conversation when the user says Cancel, Exit, Stop, etc.
To handle no user interaction, you could do the following:
Create a new intent and set its event to "actions_intent_NO_INPUT".
In the webhook, if this intent is triggered, increment a reprompt count and ask for user input.
If the count reaches 2-3 (as desired), end the conversation using conv.close().
Check out the documentation pages on Reprompts and No Input and on Best Practices.
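A small sketch of that reprompt handling, assuming the actions-on-google Node.js client library and a Dialogflow intent mapped to the actions_intent_NO_INPUT event (the intent name and count threshold are assumptions):

```javascript
const { dialogflow } = require('actions-on-google');
const app = dialogflow();

// Intent configured with the event "actions_intent_NO_INPUT".
app.intent('handle_no_input', (conv) => {
  // Track how many times the user has stayed silent in this conversation.
  conv.data.noInputCount = (conv.data.noInputCount || 0) + 1;

  if (conv.data.noInputCount < 3) {
    conv.ask('Are you still there? What would you like to do?');
  } else {
    // After repeated silence, end the conversation.
    conv.close('It seems you are busy right now. Goodbye!');
  }
});

// Expose `app` as your webhook endpoint (e.g. via Express or Cloud Functions).
module.exports = app;
```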

How to verify a user's pin when they open the skill (LaunchRequest)

I am having some problems with my Alexa skill. I would like the dialogue to go like this:
User: 'Alexa, open party'
Alexa: 'Hello, what is your four digit secret pin?'
User: '1234'
Alexa: 'Confirmed, what can I help you with?'
But I am confused about how to structure this. I need to take the user's pin and verify it in my codebase. I know you can't get dialog delegation to work inside the LaunchRequest. The LaunchRequest cannot be customized, so I cannot add slots to it. I can't find any other suggestions/examples on the internet. Has anyone done this before, or are there any suggestions?
Amazon supports account linking as the method for connecting users with their other accounts. This allows users to log into their other account using OAuth at the time the skill is installed. While it may be possible to identify a user based on the session object's userId, it may be difficult to get such a skill published.
It turns out that you cannot delegate slot collection to Alexa within the LaunchRequest, because Dialog delegation is not a valid response type for a LaunchRequest.
My initial logic was:
1. User says 'Alexa, open party'.
2. The Alexa skill calls the LaunchRequest. (At this point I need to ask the user for their pin by delegating slot collection to Alexa.)
3. In the LaunchRequest, immediately respond with this.emit(':getPinIntent');, where getPinIntent is another intent in my Alexa skill. This is what I saw on the internet for how to call another intent without the user having to invoke it by voice.
4. getPinIntent gets called and immediately checks whether all the required slots are filled (i.e. whether the PIN slot has a value). If they are not and dialogState !== 'COMPLETED', I delegate the slot collection to Alexa.
5. Step 4 is where things go wrong. Because delegation is not a valid response type for LaunchRequests, there is no dialogState field, which is required for delegation to Alexa. The request is still a LaunchRequest rather than an IntentRequest, because the user did not invoke the intent by saying something to Alexa.
In conclusion, this is not a valid way of completing a dialog where, upon launch, the user is asked for a pin and can reply by saying only that pin, visualized below:
User: "Alexa, open party"
Alexa: "What is your pin?" (Alexa never gets here, because of steps 4 and 5 above)
User: "one two three four"
Alexa: "Confirmed, what can I help you with?"
If I have made any mistakes or wrong assumptions please let me know.
My current logic has now changed. If you do not use the Skill Builder Beta, you can have a slot by itself as an utterance for one of your intents. So I now have getPinIntent with a slot called {PIN} and an utterance consisting only of {PIN}. This lets the above type of conversation happen, because when the user says their pin back ("one two three four") it starts getPinIntent, where I can then continue or delegate the dialog to Alexa, since Dialog delegation is a valid response type for an IntentRequest.
The only problem I have now is that, because I am not using the Skill Builder Beta, I cannot (or have not found a way to) add dialog models to my intent schema. I have tried copying the JSON text from the Skill Builder Beta into my intent schema after adding the correct dialog model, but this always results in build errors.
So now I can complete the user's pin authentication and respond with "How can I help?", but the IntentRequest that comes after that may require delegation to Alexa for slots, and this would crash because, without the Skill Builder Beta, I am unable to add the appropriate dialog models for Alexa to use during delegated slot collection.
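For reference, here is a minimal sketch of the {PIN}-as-utterance approach in the (v1) alexa-sdk handler style that this.emit(...) implies; the handler names and the isValidPin() check are assumptions, not the poster's actual code.

```javascript
const Alexa = require('alexa-sdk');

const handlers = {
  'LaunchRequest': function () {
    // Ask for the pin; a bare "one two three four" reply matches getPinIntent
    // because that intent's only sample utterance is "{PIN}".
    this.emit(':ask', 'Hello, what is your four digit secret pin?',
                      'Please say your four digit pin.');
  },
  'getPinIntent': function () {
    const pinSlot = this.event.request.intent.slots.PIN;
    const pin = pinSlot && pinSlot.value;
    if (pin && isValidPin(pin)) {
      this.emit(':ask', 'Confirmed, what can I help you with?',
                        'What can I help you with?');
    } else {
      this.emit(':ask', 'That pin was not recognised. What is your four digit pin?',
                        'Please say your four digit pin.');
    }
  }
};

// Placeholder for verifying the pin against your own codebase.
function isValidPin(pin) {
  return /^\d{4}$/.test(pin);
}

exports.handler = function (event, context) {
  const alexa = Alexa.handler(event, context);
  alexa.registerHandlers(handlers);
  alexa.execute();
};
```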
