Trigger one intent at the end of another - dialogflow-es

Sorry, very newbie question. I have a number of separate intents (let's call them intent1, intent2, intent3, etc.) which constitute a basic FAQ chatbot.
I want users to be able to trigger these independently, but I'd also like to guide them from one to the next. So, at the end of responding to intent1, I'd like to ask "would you like to hear about intent2, or ask another question?" and respond appropriately.
So far I've not messed with node backends etc., so there is a possibility the answer lies there.

You don't need to use a fulfillment webhook, but it does make things somewhat easier.
First, remember that Intents handle what the user says, not what you do in response. Dialogflow's response section makes it appear otherwise, but once you get into more complicated interactions (where two different things the user says need the same reply), you find that the response section becomes less useful, and it is better to store your responses in code.
Those responses should include the prompt about the next question.
During fulfillment you should also set a Context (and possibly clear older contexts) to keep track of which question you are suggesting for them next.
This way - the next response will be triggered by two possible Intents:
One that directly asks a question.
In these cases, you'll use the Intent or action name to determine which question was asked, and provide an answer (and followup prompt).
One that responds "yes".
For this, you'll get the Context that includes information about the question you prompted them for, and provide that answer (and followup prompt).
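For illustration only, a minimal sketch of that pattern with the dialogflow-fulfillment WebhookClient might look like this; the handler names and the context names (suggested-intent2, suggested-intent3) are invented for the example:
function intent1Handler(agent) {
  // Answer the first FAQ topic, then prompt for the next one.
  agent.add('Here is the answer to topic 1.');
  agent.add('Would you like to hear about topic 2, or ask another question?');
  // Remember which topic we just suggested.
  agent.setContext({ name: 'suggested-intent2', lifespan: 2 });
}

function yesHandler(agent) {
  // The "yes" intent: check which topic was suggested on the previous turn.
  if (agent.getContext('suggested-intent2')) {
    agent.add('Here is the answer to topic 2.');
    agent.add('Would you like to hear about topic 3, or ask another question?');
    agent.setContext({ name: 'suggested-intent3', lifespan: 2 });
  } else {
    agent.add('Which topic would you like to hear about?');
  }
}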
While the "Followup Intent" feature sounds tempting, it is likely not what you want to use, since it does not allow multiple ways to access it and forces a very narrow path.
You may also wish to take a look at Thinking For Voice: Design Conversations, Not Logic for more about designing your conversation (and how to model it in Dialogflow, in a followup article).

Okay, I am late here! Yes, it is possible with an event. I have recently done this.
function helloIntent(agent) {
  agent.add("Hi, how are you?");
  agent.setFollowupEvent({ name: 'NextIntentEvent', parameters: {} }); // this will do the trick
}
app.js
let intentMap = new Map();
intentMap.set("Hello Intent", helloIntent);
NextIntentEvent should be an event name defined in the intent that you want to trigger.
some code removed for brevity

If you want to make a chain of conversation, there are a few options for that.
Slot filling
Here you need to add your questions as parameter prompts, and you can make the parameter optional, so if the user wants to continue the conversation they proceed by answering that question.
Contexts
You can set up the follow-up question with contexts.
Events
Events are something that you can trigger from your webhook once you send the response to the current question. To trigger an event, send a detectIntent request:
POST https://dialogflow.googleapis.com/v2/projects/<ProjectID>/agent/sessions/<SessionID>:detectIntent
Authorization: Bearer <AccessToken>
{
  "queryInput": {
    "event": {
      "name": "event-name",
      "parameters": {
        "parameter-name-1": "parameter-value-1",
        "parameter-name-2": "parameter-value-2",
        ...
      },
      "languageCode": "en-US"
    }
  }
}
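If you are calling the API from Node.js rather than raw REST, a rough equivalent with the official @google-cloud/dialogflow client would be the following sketch; the project ID, session ID, and event name are placeholders:
const dialogflow = require('@google-cloud/dialogflow');

async function triggerEvent(projectId, sessionId) {
  const sessionClient = new dialogflow.SessionsClient();
  const sessionPath = sessionClient.projectAgentSessionPath(projectId, sessionId);
  // Sending an event in queryInput matches the intent that lists that event.
  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: {
      event: {
        name: 'event-name',
        languageCode: 'en-US',
      },
    },
  });
  return response.queryResult.fulfillmentText;
}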

Related

How to Trigger a Different Intent in Dialogflow V1

I am well aware that Dialogflow V1 is being deprecated at the end of May 2020. However, I am wondering whether anybody knows how to trigger an intent in Dialogflow via webhook fulfillment? I have Google-searched the past few days, looking everywhere, and there seems to be a consensus that while events are available to trigger intent matching, they shouldn't be used. Right now, I have a JavaScript function that is sending out a webhook response with a context. I put that context into my Dialogflow intent's input context, but when I run the agent, the intent is never triggered.
JavaScript code:
function createTextResponse() {
  let response = {
    "speech": "Nice! Let's keep going.",
    "displayText": "displayed response",
    "contextOut": [
      {
        "name": "trythis",
        "lifespan": 5
      }
    ]
  };
  return response;
}
Here is a picture with my contexts: [screenshot: contexts in Dialogflow]
Been having a hard time with this lately and would appreciate any help/explanation in order so that I can move forward.
First, remember that Intents represent what the user says or does and not how you react to them. So in general it doesn't make sense to "trigger an Intent". This is why the advice usually is to not use events - while they do make some sense in limited cases, those cases usually represent a user event and not your program trying to do something.
If you want your fulfillment to do something - just do it. Multiple Intent handlers can all call the same function to respond the same way.
Setting a Context does not automatically trigger an Intent. Setting the Input Context for an Intent restricts under what conditions that Intent can be triggered. While it still requires one of the training phrases to be matched, it also requires that all the Input Contexts be active.
This is exactly what events are provided for. Just invoke an event from your webhook.
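In V1 terms, that means returning a followupEvent object from your webhook rather than relying on contextOut alone. A minimal sketch, where the event name is only an example and must be listed on the intent you want to trigger:
function createEventResponse() {
  return {
    "followupEvent": {
      "name": "continue_flow",   // example event name defined on the target intent
      "data": {}
    }
  };
}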

( Alexa ) Is it possible to get the response in same Intent Handler?

I have a custom Alexa Skill, similar to a Q&A skill, in which I'm asking the user for a response (say option_1, option_2, option_3), but when the user responds with one of the asked options a different intent (say ruleIntent) is triggered, because the option text is somewhat similar to its utterances.
I think it is not good design if more than one IntentHandler can be triggered by the same (or a similar) phrase, but I don't know the text of the options in advance to avoid this (or what the user is going to say as the answer to the asked question). If I could somehow maintain the context of the user's response, I think that would be one solution.
Example:
1. User: Start a Science test. {invokes testIntent}
2. Alexa: Okay, but before starting, do you want to know the rules? Please answer Yes or No. {response generated from testIntentHandler}
3. User: Yes {invokes many intents}
In line 3, even if I hard-code this to an Intent (say ruleIntent), what will happen if some question has Yes or No among its options? How will I differentiate that and map it to the response to the asked question?
One way to deal with this is to track the state using persistent or session attributes.
You can then check that state in the canHandle method to route the user to the appropriate intent.
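As a rough sketch with the ASK SDK v2 for Node.js (the session attribute awaitingRulesAnswer and the speech text are invented for the example), the Yes handler only claims the request while the test flow is waiting for that answer:
const Alexa = require('ask-sdk-core');

const RulesYesIntentHandler = {
  canHandle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    // Only claim "Yes" while we are waiting for the rules question to be answered.
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent'
      && attrs.awaitingRulesAnswer === true;
  },
  handle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    attrs.awaitingRulesAnswer = false;
    handlerInput.attributesManager.setSessionAttributes(attrs);
    return handlerInput.responseBuilder
      .speak('Here are the rules. Say start when you are ready.')
      .reprompt('Say start when you are ready.')
      .getResponse();
  },
};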
One way to solve this could be to use Dialogs. You can use auto delegation for dialogs:
Enable auto delegation, either for the entire skill or for specific intents. In this case, Alexa completes all of the dialog steps based on your dialog model. Alexa sends your skill a single IntentRequest when the dialog is complete.
Delegate the Dialog to Alexa

dialogflow: Don't understand how I can retrieve the context in code

There's something I can't figure out.
Here is the situation I would like to handle:
Bot: Hello, what do you want to do?
User: Search a product
Bot: Which product are you looking for?
User: Apple
Bot: -> list of products matching "apple"
Here is a code fragment:
function searchProduct() {
  agent.add('Which product are you looking for?');
  // receive the product answer
  // -> then search for the matching product in the DB
}

const intentMap = new Map();
intentMap.set('I want a product', searchProduct);
agent.handleRequest(intentMap);
In this code, I ask the user which product he's looking for.
But when he answers "Apple", how can I receive the user's response in the same function so that I can continue my process?
I know there is the "context" concept, but to continue the "search product" process, I need to come back into that function.
For now, I use dialogflow-fulfillment, and I'm trying to understand this documentation to find the solution:
https://github.com/dialogflow/dialogflow-fulfillment-nodejs/blob/master/docs/WebhookClient.md
The short answer is that you can't (or, at the very least, shouldn't) do it in the "same" function. Each function represents an Intent, or what the user has communicated to us. In the function we need to do the following:
1. Determine what the user has said that is important to us.
2. Compute anything based on what they've said.
3. Send a reply to the user based on (1) and (2).
Once we have sent the reply to the user - that round of the conversation is over. We need to wait for the next Intent to be triggered by the user so we can repeat the above.
Contexts are used so we know which stage of the overall conversation we're in. As part of our reply (step 3 above), we can set a Context which will help Dialogflow determine which Intent should be triggered (and thus which function should be called to process what we know so far). Contexts can also store information about previous turns of the conversation.
Keep in mind that Intents aren't about what we say, but are about what the user says. The reply we send is based on what we need, and then we would use a single Intent to capture each part. The function that handles that Intent would store the answer in the Context and determine the next part of the question.
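A minimal sketch of that split with dialogflow-fulfillment (the second intent name, the awaiting-product context, and the product parameter are assumptions for the example; the second intent would have awaiting-product as its input context and training phrases that capture the product name):
function askProduct(agent) {
  // First turn: ask the question and remember that we are waiting for a product.
  agent.add('Which product are you looking for?');
  agent.setContext({ name: 'awaiting-product', lifespan: 2 });
}

function receiveProduct(agent) {
  // Second turn: a separate intent captures the answer as a parameter.
  const product = agent.parameters.product;   // e.g. "Apple"
  // ... look up matching products in the database here ...
  agent.add(`Here is what I found for ${product}.`);
}

const intentMap = new Map();
intentMap.set('I want a product', askProduct);
intentMap.set('I want a product - name', receiveProduct);
agent.handleRequest(intentMap);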

how do I trigger an intent using DialogFlow?

I have a conversation with Dialogflow to select a favourite type of drink, and then, depending on the category of drink, there are follow-up questions (i.e. follow-up intents).
Under the intent tab I have the following intents:
Default Welcome Intent
Favourite drink Intent
Coffee Intent
follow up
Soft Drink Intent
follow up
Juice Intent
follow up
I use the training phrase in the Favourite Drink Intent and ask:
"What is your favourite drink?"
And store the response in an entity #drink.
But I don't know how to then trigger the "Soft Drink", "Juice", or "Coffee" intents depending on the user's response. If I was writing code I'd use a switch statement or if/else, but that probably doesn't apply here.
I wasn't sure if I had to use the fulfillment inline editor or whether I could just do that from within the Intent UI.
Thanks
In general - think of Intents as capturing what the user could be saying. Although Intents also have replies, this isn't their primary purpose.
Depending what, exactly, you're trying to do, there are a few approaches. All three of them require fulfillment code, which you can write using the built-in editor or (better) in a webhook more under your control.
If you want to use Intents to determine how to reply
This isn't really the best idea, but it is possible. In your fulfillment code, you would have a switch statement against the parameter with the user's selection. Based on this, you would trigger a followup event from your fulfillment. Your other Intents would have the Event section populated with the possible events, and the system would pick which one to trigger and use for fulfillment/response.
This is a bit of a kludge for what you want, probably.
Update to clarify based on questions in the comments. Sending an event directly triggers a different Intent. Sometimes this is what you want, but it is somewhat exceptional. Most of the time you want to use one of the methods below. In particular, you should remember that Intents are mostly meant to represent what the user is trying to do (what they "intend" to do), and this is mostly represented by what they're saying. Intents are good to capture the complex ways people talk instead of forcing them into a phone-tree-like "conversation".
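For reference, a followup event is returned in the (v2) webhook response roughly like this; the event name below is only a placeholder for whatever you configure on the target Intent:
{
  "followupEventInput": {
    "name": "coffee-selected",
    "languageCode": "en-US",
    "parameters": {}
  }
}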
If you just want to reply to each possible user response differently
You can use the fulfillment webhook code to determine what response should be sent to the user. You don't indicate what library you're using, but in general you'd write code that would determine what message should be sent to the user based on the drink type selected and include that as the speech and/or display text in the response.
You wouldn't use the other, drink specific, Intents in these cases. There isn't any need for them. Unless...
You want to reply to each user response differently, and the followup conversation might be different
Remember - Intents are really best for specifying what you expect the user to say. Not what you expect to reply with. So it is reasonable that you may have a different conversation based on if they selected Coffee (where you might ask how much sugar they want) or Juice (where you might ask if they want a straw).
In this case, you would still do as you have in the previous case (use your fulfillment to include a tailored message in your reply, possibly to prompt them for that info) and include in the reply an Output Context indicating what their choice was. You should do this as part of the response, rather than setting it in the Intent, since you'll want to name it differently for each beverage type.
Then you can create Intents specific to each beverage type with what you expect the user to say. For those specific to Coffee, you would set the Input Context to require that the coffee context has been set. The soda context if they specified soda, and so forth.
Update, since you indicated in your comment that this sounded like the avenue you were interested in.
In this scenario, you'd do as you described (almost):
Get the value for the drink parameter with code something like
const drink = request.body.queryResult.parameters.drink;
Do a switch based on this, and in the body of each case set what we'll reply with and what context we should remember. Something like this pseudocode, perhaps:
switch (drink) {
  case 'coffee':
    context = 'order_coffee';
    msg = 'Do you want sugar with that?';
    break;
  case 'soda':
    context = 'order_soda';
    msg = 'Do you want a bottle or can?';
    break;
  case 'juice':
    context = 'order_juice';
    msg = 'Would you like a straw?';
    break;
}
// Format JSON setting the message and context
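One way to finish that step, assuming an Express-style handler where request.body is the v2 webhook request, is to send the message and activate the chosen context in the response (field names follow the v2 webhook response format):
// Reply with the tailored message and set the Output Context for the next turn.
response.json({
  fulfillmentText: msg,
  outputContexts: [
    {
      name: `${request.body.session}/contexts/${context}`,
      lifespanCount: 5
    }
  ]
});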
You would then have Intents that would be triggered based on a combination of two things:
What the context is
What the user has said
For example, you would want an Intent (let's call it "coffee.black") which would be triggered if the order_coffee context is active and the user answered your question with "No" or "Just black" or other valid combinations.
But you'd want a different Intent (say, "juice.nostraw") if the order_juice context is active and the user replied "No".
And it wouldn't make much sense at all if the user said "No" while the order_soda context was active, so you'd want to try and direct them back to the subject at hand.
Remember, the Intent is for what the user says. Not for what your voice agent is saying. Your agent doesn't normally "trigger" an Intent - the user triggers it based on what they say.
In the example I gave, there might be other Intents that are valid for each of those contexts. For example, you might have a "coffee.sugar" Intent that is valid for the order_coffee context and responds to them saying "Yes". And another one where they might say "Just cream". There are lots of other things they might say as well, but it is important to your agent that the directions they're giving you have to do with ordering coffee.
As for your original question...
(To answer your original, now edited, question: Yes, you can create Intents from within your fulfillment. You almost certainly don't want to do this, however.)

Webhook generated list fetch option selected by user

I'm pretty new to API.AI and Google Actions. I have a list of items which is generated by a fulfillment. I want to fetch the option selected by the user. I've tried reading the documentation but I can't seem to understand it.
https://developers.google.com/actions/assistant/responses#handling_a_selected_item
I also tried setting follow-up intents but it won't work. It always ends up giving fallback responses.
I'm trying to search for a product or something, and the result is displayed using the list selector format. I want to fetch the option I selected. This is a search_product intent and I have a follow-up intent choose_product.
You have two options to get information on a Actions on Google list/carousel selection event in API.AI:
Use API.AI's actions_intent_OPTION event
As Prisoner already mentioned, you can create an intent with actions_intent_OPTION. This intent will match queries that include a list/carousel selection as documented here.
Use a webhook
API.AI will pass the list/carousel selection to your webhook which can be retrieved by either:
A) Using Google's Actions on Google Node.js client library and the app.getContextArgument() method.
B) Using the originalRequest JSON attribute in the body of the request to your webhook to retrieve list/carousel selection events. The structure of a list/carousel selection event webhook request will look something like this:
{
  "originalRequest": {
    "data": {
      "inputs": [
        {
          "rawInputs": [
            {
              "query": "Today's Word",
              "inputType": "VOICE"
            }
          ],
          "arguments": [
            {
              "textValue": "Today's Word",
              "name": "OPTION"
            }
          ],
          "intent": "actions.intent.OPTION"
        }
      ],
      ...
This is a sideways answer to your question - but if you're new to Actions, then it may be that you're not really understanding the best approaches to designing your own Actions.
Instead of focusing on the more advanced response types (such as lists), focus instead on the conversation you want to have with your user. Don't try to limit their responses - expand on what you think you can accept. Focus on the basic conversational elements and your basic conversational responses.
Once you have implemented a good conversation, then you can go back and add elements which help that conversation. The list should be a suggestion of what the user can do, not a limit of what they must do.
With conversational interfaces, we must think outside the dialog box.
Include 'actions_intent_OPTION' in the event section of the intent that you are trying to trigger when an item is selected from a list/carousel (both work).
Then use this code in the function that you will trigger in your webhook, instead of getContextArgument() or getItemSelected():
const param = assistant.getArgument('OPTION');
OR
app.getArgument('OPTION');
depending on what you named your ApiAiApp (i.e.):
let Assistant = require('actions-on-google').ApiAiAssistant;
const assistant = new Assistant({request: req, response: response});
Then, proceed with how it's done in the rest of the example in the documentation for list/carousel helpers. I don't know exactly why this works, but this method apparently retrieves the actions_intent_OPTION parameter from the JSON request.
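Put together, a handler along these lines might look roughly like the sketch below; the action name choose_product and the exported function name are examples, using the legacy ApiAiApp class the snippets above refer to:
const ApiAiApp = require('actions-on-google').ApiAiApp;

function optionHandler(app) {
  // Retrieve which list/carousel item the user selected.
  const selectedOption = app.getArgument('OPTION');
  app.ask('You picked ' + selectedOption + '. What would you like to do next?');
}

const actionMap = new Map();
actionMap.set('choose_product', optionHandler);   // keyed by the intent's action name

exports.webhook = (request, response) => {
  const app = new ApiAiApp({ request: request, response: response });
  app.handleRequest(actionMap);
};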
I think the issue is that responses that are generated by clicking on a list (as opposed to being spoken) end up with an event of actions_intent_OPTION, so API.AI requires you to do one of two things:
Either create an Intent with this Event (and other Contexts, if you wish, to help determine which list is being handled).
Or create a Fallback Intent with the specific Context you want (ie - not your Default Fallback Intent).
The latter seems like the best approach since it will also cover voice responses.
(Or do both, I guess.)
