Handling "cancelling slot filling dialog" - dialogflow-es

I am building a bot which uses the Slot Filling approach, and I want to provide a rich message from a webhook once an exit phrase is input to the bot. I came across "cancelling slot filling dialog" in the documentation, at https://dialogflow.com/docs/concepts/slot-filling#canceling_slot_filling_dialog
While I was trying it out, I found that there are more exit phrases than the utterances mentioned in the documentation, e.g. "nothing", "abort".
I couldn't find any intent/settings to configure/change this behaviour.
1. Is there a way I could find out all the exit phrases?
2. Is there a way to change the output message displayed when the user says an exit phrase?
3. Can we connect to a webhook after the user says an exit phrase, to provide a custom rich response?
Attached is the response I get when I say an exit phrase to the bot while slot filling.

There's no built-in way to do it as far as I can tell, but there is a hacky way you could use 3 to achieve 2. I'll assume you are familiar with how Dialogflow webhook requests and responses work in general. Please see here if not.
It basically boils down to checking if Dialogflow is about to respond with one of its stock cancellation phrases, then replacing it with one of your own.
Make sure "enable webhook call for slot filling" is on. When the user types a slot filling exit phrase, the webhook JSON that Dialogflow sends will still have the same intent.name property as the intent you're working with. So you can catch that intent in a switch statement.
Then inside that you can simply use an 'if' statement to check the "fulfillmentText" property of the webhook request and see if it's any of the stock phrases Dialogflow uses to respond to cancellations, such as "Sure, cancelling" or "No problem, cancelling". I don't know how many there are, but I assume there aren't too many; you'll have to test to try and find them all.
If it is any of those phrases, you can then change what Dialogflow says to the user by responding to the webhook request with your own fulfillmentText set to whatever you want (see the link above for how the JSON response should be structured).
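To make that concrete, here's a minimal sketch of that check, assuming a plain Express/Node.js webhook, a hypothetical intent named "book-room", and a guessed list of stock cancellation phrases (replace these with the ones you actually discover by testing):
const express = require('express');
const app = express();
app.use(express.json());

// Stock cancellation replies observed so far (assumed list - extend as you find more)
const STOCK_CANCEL_REPLIES = ['Sure, cancelling.', 'No problem, cancelling.'];

app.post('/webhook', (req, res) => {
  const intentName = req.body.queryResult.intent.displayName; // still the slot-filling intent
  const stockReply = req.body.queryResult.fulfillmentText || '';

  switch (intentName) {
    case 'book-room': // hypothetical intent name
      if (STOCK_CANCEL_REPLIES.includes(stockReply)) {
        // Dialogflow was about to cancel - swap its stock text for our own message
        return res.json({ fulfillmentText: 'No worries, I have cancelled that. Anything else I can do?' });
      }
      break;
  }

  // Otherwise pass the normal slot-filling prompt through unchanged
  return res.json({ fulfillmentText: stockReply });
});

app.listen(3000);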
This method isn't exactly ideal as the stock exit responses Dialogflow uses could change and it's hard to know if you've found them all, but it should be a workaround until Dialogflow becomes more flexible.
Also copying my earlier comment about question 1, since it seems to work (thanks for the typo correction):
I suspect the list of cancelling phrases is the same as that found in the "cancel" intent of the prebuilt smalltalk agent. To find this, go to Prebuilt Agents -> Small Talk -> Import. Then navigate to that agent and find the intent "smalltalk.confirmation.cancel" to view the list of phrases.
Hope this helps.

Related

Issues with attempting to enter intents

I'm having an issue when attempting to enter specific intents based on the value of a property.
I currently have a question that gets asked, which then fires off to the Microsoft Translator via an HTTP request, and from that, it fires off to the LUIS API with that text.
After that, I would like to enter an intent based on the top intent that the LUIS API Call brought back.
I have the Translator and the LUIS API bringing back values and I can output these using Send Responses:
However, when I attempt to call an intent based on the value of the property, I just get an Object Reference error:
Is what I'm trying to do possible and if so am I going about this entirely the wrong way causing more issues for myself?
Thanks In Advance
I'm trying to understand exactly what you are trying to achieve. Do I summarize it correctly as follows?
You start a main dialog. In that dialog you take some user input.
You translate the input, and manually send the translated text off to LUIS for intent recognition.
Based on the recognized intent, you want to start a specific sub dialog.
I don't believe you can just 'call an intent'. An intent is the result of a LUIS or Regex recognizer, which is processed automatically by Bot Framework. The recognizer is processed at every user input. There is no need to call LUIS yourself as a HTTP request. The recognizer (LUIS or RegEx) is configured on the main dialog properties in Bot Framework Composer:
Although in this case it looks like you are manually doing the LUIS intent recognition, because you want to do translation upfront. To achieve that scenario with the built-in recognizer, you would need a translation middleware. There is a short discussion going on here on Github about translation middleware for Bot Framework Composer, although the sample code is not ready yet.
While there are no code samples for the translation middleware yet, I believe what could already help you today is to start a subdialog based on the recognized intent, similar to what you already show in your screenshots.
Basically, instead of "Send a response" at the end of your dialog, you would have something like the following:
My sample here uses user input instead of the recognized intent. You would replace the user input with your intent variable instead. Based on the recognized intent, you would be able to spin up a specific dialog to handle that recognized intent.
The result would look something like:
About triggers, what you currently configured in your screenshot shows "no editor for null". I believe this might cause the "object reference" issue. Normally it should display a trigger phrase. For example, the below means:
If user inputs the text "triggerphrase"
And the dialog variable 'topintent' was previously set to 'test', then run this trigger.
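As a rough sketch, that condition could be written as an adaptive expression along these lines (assuming your LUIS result was stored in a dialog property named dialog.topintent; the names are illustrative):
turn.activity.text == 'triggerphrase' && dialog.topintent == 'test'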

Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery

I've developed an Alexa skill and now I am in the process of porting it over to a Google action. At the center of my Alexa skill, I use AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for google actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using @sys.any
The most equivalent entity type in Dialogflow is the built-in type @sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the @sys.any entity type.
Afterwards, it would look something like this.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if there are no other Intents that would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or from the list of Intents select the three-dot menu next to the Create button and then select "Create Fallback Intent".
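For illustration, reading the full utterance inside the webhook when the Fallback Intent fires is just a matter of looking at queryResult.queryText in the v2 webhook request (a minimal sketch, assuming an Express-style handler):
// Inside the webhook handler, when the matched intent is your Fallback Intent
const userSaid = req.body.queryResult.queryText; // the complete text the user said
res.json({ fulfillmentText: `You said: ${userSaid}` });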
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
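For example, with the Node.js actions-on-google library's Action SDK client, the handler could look roughly like this (a sketch, not the only way to wire it up):
const { actionssdk } = require('actions-on-google');
const app = actionssdk();

// The Assistant delivers the raw speech-to-text result as the TEXT intent
app.intent('actions.intent.TEXT', (conv, input) => {
  // Run your own NLP/NLU over `input` here
  conv.ask(`You said: ${input}`);
});

// Then mount `app` as the handler of your HTTPS fulfillment endpoint,
// e.g. express().use(express.json()).post('/fulfillment', app)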
Most platform-based NLU systems are built on three main parameters:
1. Intent - where all the logic is written
2. Entity - the values the intent works on
3. Response - what the user hears once all the processing is done
There is another important parameter called the webhook, which is used to interact with an external API.
The basic functionality is the same across platforms; I have already used Dialogflow (developed by Google - it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that providing proper training phrases is very important for getting precise results, as the output depends heavily on the sample input.

actions_intent_CANCEL not working as expected

I am trying to follow this great article on Medium written by Jessica Dene. When users say a global cancel command such as "quit", I want my action to respond with a "goodbye" message. I have tried to follow the instructions provided by Jessica as illustrated below:
Add the actions_intent_CANCEL event to my end intent
Know More - no - no is my end intent. As you can see below, when I try to add "actions_intent_CANCEL" under Events, I can't see it as a suggestion in the drop down
But given that actions_intent_CANCEL does exist according to docs, I added it
Error
I saved the intent and tried it in the web simulator, and I see the error below.
Any idea why I am getting this error?
Typing actions_intent_CANCEL in directly was completely appropriate. Most of the ones in the dropdown are for Welcome-like intents rather than in-conversation events that can occur. You have the right event name.
It sounds like you're handling it mostly correctly. The only additional thing you need to do is to explicitly close the conversation.
If you are using a webhook for fulfillment, how you do this depends on the library you're using (assuming you're using a library).
If you're using the actions-on-google library you would use the conv.close() function:
conv.close(`Okay, let's try this again later.`);
With the dialogflow-fulfillment library, it would be agent.end():
agent.end(`Okay, let's try this again later.`);
If you're using multivocal, you can either set the environment setting ShouldClose to true, or set it to true in a Response.
Response: {
  "Action.multivocal.welcome": [
    {
      Template: {
        Text: "Hello world."
      },
      ShouldClose: true
    }
  ]
}
If you are using JSON, you can set payload.data.expectUserResponse to false.
Finally, if you are not using a webhook for fulfillment, but are just using the Responses section of Dialogflow, you would turn "Set this intent as end of conversation" on.
Yes, actions_intent_CANCEL has been removed from the docs and also from the dropdown list of events in Dialogflow.
So for exiting the conversation, you can try the following:
(1) Make an entity containing all the phrases for exiting the conversation, e.g. bye, goodbye, bbye, talk to you later.
(2) Make an intent with examples of the user leaving the conversation, e.g. "I have some work, bye for now".
(3) Select the end-of-conversation toggle at the bottom of the intent so that the conversation ends with the sample response.
(4) Also add a BYE/CANCEL suggestion chip to all the intents for a better conversation flow.
Using the above steps, you can mimic the actions_intent_CANCEL event.

Change default message when assisstant misunderstands user

I have created a Google action which takes in three parameters. I have added training phrases for many word combinations, but sometimes it will not pick them up.
I set my input parameters in Dialogflow to number1, number2, and number3.
It seems that by default, if it misses a value it will say: "what is $varName".
However, this could be misleading to users, since it may be unclear when it just prompts the user with "what is number3".
I'd like to edit this response to be a more descriptive message.
I hope this is clear enough - I can't really post any code since it all concerns the Dialogflow UI...
cheers!
If you want to add prompt variations for capturing parameters in an entity, follow the "adding prompt variations" steps explained here. Just add variations to the prompts as below, or handle it from the webhook by enabling slot filling for the webhook.
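A rough sketch of the webhook approach, assuming "enable webhook call for slot filling" is turned on and the parameters are named number1/number2/number3 as in the question:
// Inside the webhook handler for this intent
const params = req.body.queryResult.parameters;
if (!params.number3) {
  // Replace the default "what is number3" prompt with something more descriptive
  return res.json({ fulfillmentText: 'Almost there - what is the third number you want me to use?' });
}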
If you want to ask questions when the agent did not understand the intent, then you can either use a Default Fallback Intent for a generic reply or create a follow-up fallback intent for the intent you are targeting.

Parameter value filling with quick responses in messenger

I have created a bot using Dialogflow (api.ai) and integrated it with Facebook Messenger. I want to get parameter values from the user, like city and date (today, tomorrow), by using the quick reply feature of Messenger, where the user is presented with select-box-like options and can tap on one of them. The required parameter then receives the tapped value, saving the user from typing it manually.
I cannot find anywhere in the documentation a way to fill parameter values (slots) using quick replies. There is an option to give quick replies in the response section, but the response section is only reached on fulfilment, and if I take user input in the response, then I have to create another follow-up intent to process that reply, because the current intent gets fulfilled after the response.
If I add quick replies in the response section, then I have to create multiple levels of follow-up intents. For example: I take the city input in one intent and give the user two options (like New York, Delhi). Then I have to create two follow-up intents, each handling one reply (New York and Delhi), and for each of those I will have to create more follow-up intents to get further parameter inputs. Below is the flow diagram of this case.
This can get pretty complex when more levels are added! Amazon Lex has this feature of filling slots using quick replies. Can't I just fill parameter values directly using quick replies like in Lex?
You don't have to go this far. There is a simpler way using entities and prompts in dialogflow.com. The workflow can be: Weather (intent) -> quick reply (New York/Delhi) -> City (intent, use entities here) -> quick reply (Today/Tomorrow) -> use different intents here for today and tomorrow, as you will have different responses. You don't need to create different intents unless you have different responses. "User says" can have different parameters, for which you can define different prompts as well. This will again reduce the complexity of creating follow-up intents. Let me know if you need more explanation on this.
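If you do want to drive the quick replies from a webhook during slot filling, a hedged sketch of the v2 webhook response could look like this (the prompt text and city values are just examples); whatever the user taps comes back as ordinary text, which Dialogflow can then match to fill the slot:
// Webhook response prompting for the missing "city" parameter with Messenger quick replies
res.json({
  fulfillmentMessages: [
    {
      platform: 'FACEBOOK',
      quickReplies: {
        title: 'Which city do you want the weather for?',
        quickReplies: ['New York', 'Delhi']
      }
    }
  ]
});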
