DialogFlow: 'facebook_welcome' event not triggered

In my agent on DialogFlow, I created a new intent for a linear dialog.
I want to get some of the user's data when the user clicks the 'Get Started' button in Facebook Messenger.
I added the 'WELCOME' event to my intent:
My issue is: when the user clicks 'Get Started', my intent is not triggered but my fallback intent is (i.e. the event has not been triggered).
I tried adding the training phrase 'facebook_welcome' (I think it's a bad idea, because an event does not require any user input).
Then the intent is triggered, but the agent does not ask for the dialog parameters; it immediately gives the intent's response, as if slot filling had already been completed.
What am I misunderstanding? Is there a versioning issue (V1 vs V2)?
Thanks!
MC

Add this new event and try invoking the agent again. You need to scroll through the list and search for this event.

I think I solved this. What I did was add MessengerGetStarted to the training phrases on my intent. It looks like this is the query text that Facebook sends to DialogFlow. The FACEBOOK_WELCOME event did not work for me either, but this little "workaround" did.
I also disabled ML via the three dots in the top right corner of my intent, just to be sure.
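If you have a fulfillment webhook attached to the intent, one quick way to confirm what Messenger actually sends on 'Get Started' is to log the raw query text. A minimal sketch using the dialogflow-fulfillment library; the intent name 'Messenger Welcome' and the export style are placeholders for your own setup:

'use strict';

const { WebhookClient } = require('dialogflow-fulfillment');

// Plain HTTP handler; wrap it with your hosting framework of choice
// (e.g. a Cloud Function) in your real project.
exports.fulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function welcome(agent) {
    // agent.query holds the raw text Dialogflow matched against, e.g.
    // 'MessengerGetStarted' when the Get Started button is clicked.
    console.log('Raw query text:', agent.query);
    agent.add('Hi! Before we start, what is your name?');
  }

  const intentMap = new Map();
  intentMap.set('Messenger Welcome', welcome);   // placeholder intent name
  agent.handleRequest(intentMap);
};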

Related

Issues with attempting to enter intents

I'm having an issue when attempting to enter specific intents based on the value of a property.
I currently have a question that gets asked, which is then sent to the Microsoft Translator via an HTTP request, and from there the translated text is sent on to the LUIS API.
After that, I would like to enter an intent based on the top intent that the LUIS API call brought back.
I have the Translator and the LUIS API bringing back values, and I can output these using Send Responses:
However, when I attempt to call an intent based on the value of the property, I just get an Object Reference error:
Is what I'm trying to do possible, and if so, am I going about this entirely the wrong way and causing more issues for myself?
Thanks in advance.
I'm trying to understand exactly what you are trying to achieve. Do I summarize it correctly as follows?
You start a main dialog. In that dialog you take some user input.
You translate the input, and manually send the translated text off to LUIS for intent recognition.
Based on the recognized intent, you want to start a specific sub dialog.
I don't believe you can just 'call an intent'. An intent is the result of a LUIS or RegEx recognizer, which is processed automatically by Bot Framework. The recognizer runs on every user input, so there is no need to call LUIS yourself via an HTTP request. The recognizer (LUIS or RegEx) is configured in the main dialog properties in Bot Framework Composer:
Although in this case it looks like you are doing the LUIS intent recognition manually, because you want to do the translation upfront. To achieve that scenario with the built-in recognizer, you would need translation middleware. There is a short discussion on GitHub about translation middleware for Bot Framework Composer, although the sample code is not ready yet.
While there are no code samples for the translation middleware yet, I believe what could already help you today is to start a subdialog based on the recognized intent, similar to what you already show in your screenshots.
Basically, instead of "Send a response" at the end of your dialog, you would have something like the following:
My sample here uses user input instead of the recognized intent. You would replace the user input with your intent variable instead. Based on the recognized intent, you would be able to spin up a specific dialog to handle that recognized intent.
The result would look something like:
About triggers, what you currently configured in your screenshot shows "no editor for null". I believe this might cause the "object reference" issue. Normally it should display a trigger phrase. For example, the below means:
If user inputs the text "triggerphrase"
And the dialog variable 'topintent' was previously set to 'test', then run this trigger.
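For reference, the Condition field on such a trigger takes an adaptive expression evaluated against the memory scopes. A minimal sketch, assuming the recognized intent was stored in a dialog property named topintent (a placeholder name):

dialog.topintent == 'test'

The trigger phrase itself goes into the trigger's phrases, while a condition like the one above only gates whether the trigger is allowed to run.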

Is there a callback available for any RichResponse in Dialogflow?

I have a Dialogflow requirement to present the user with a payment link, on the click of which I must wait about 20 seconds (showing some busy image or similar) and then move on to the next intent.
So far I have been able to present a link using a LinkOutSuggestion / BasicCard button. But I have no idea how to make my program proceed further. I know there is an approach where the user inputs something like "Check Payment", but can we skip this altogether and just pass on to the next intent after the click of that LinkOutSuggestion or BasicCard button?
The only way you can skip the user having to type something in the chat is by using a Suggestion. Suggestions cannot be added to a card or open a link, but they do continue the conversation with the text they carry, so you could add a suggestion saying "Check Payment".
The LinkOutSuggestion and the buttons on a card do not support a click event or the possibility to continue the conversation.
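If you build your responses in a fulfillment webhook, a rough sketch with the actions-on-google library could look like this; the intent name, wording, and URL are placeholders:

'use strict';

const { dialogflow, BasicCard, Button, Suggestions } = require('actions-on-google');

const app = dialogflow();

// Present the payment link on a card, then offer a 'Check Payment' suggestion
// chip. Tapping the chip sends the literal text 'Check Payment' back as the
// next user query, which your follow-up intent can match.
app.intent('Show Payment Link', (conv) => {
  conv.ask('Here is your payment link.');
  conv.ask(new BasicCard({
    title: 'Complete your payment',
    buttons: new Button({
      title: 'Pay now',
      url: 'https://example.com/pay',   // placeholder URL
    }),
  }));
  conv.ask(new Suggestions('Check Payment'));
});

Note that the chip only continues the conversation with its text; it cannot open the link itself, so the card button and the suggestion serve different purposes.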

(Alexa) Is it possible to get the response in the same Intent Handler?

I have a custom Alexa skill, similar to a Q&A skill, in which I ask the user for a response (say option_1, option_2, option_3), but when the user responds with one of these options a different intent (say ruleIntent) is triggered, because the option text is somewhat similar to its utterances.
I think it is not good design if more than one IntentHandler can be triggered by the same (or a similar) phrase, but I don't know the option text in advance to avoid this (or what the user is going to say as the answer to the asked question). If I could somehow maintain the context of the user's response, I think that would be one solution.
Example:
1. User: Start a Science test {invokes testIntent}
2. Alexa: Okay, but before starting, do you want to know the rules? Please answer Yes or No. {response generated from testIntentHandler}
3. User: Yes {invokes many intents}
In line 3, even if I hard-code this to an intent (say ruleIntent), what will happen if some question has Yes or No as its options? How will I differentiate that and map it to the response to the asked question?
One way to deal with this is to track the state using persistent or session attributes.
You can then check that state in the canHandle method to route the user to the appropriate intent handler, as in the sketch below.
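A rough sketch of that idea with the ASK SDK v2 for Node.js; the session attribute askedQuestion and the response wording are placeholders:

const Alexa = require('ask-sdk-core');

// Handler that only claims the request while a question is pending, so a bare
// 'Yes' / option answer is routed here instead of to ruleIntent and friends.
// Register it before the other intent handlers so it wins while the flag is set.
const AnswerHandler = {
  canHandle(handlerInput) {
    const session = handlerInput.attributesManager.getSessionAttributes();
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && session.askedQuestion === true;   // placeholder state flag
  },
  handle(handlerInput) {
    const session = handlerInput.attributesManager.getSessionAttributes();
    session.askedQuestion = false;         // question answered, clear the flag
    handlerInput.attributesManager.setSessionAttributes(session);
    return handlerInput.responseBuilder
      .speak('Thanks, I have recorded your answer.')
      .reprompt('Would you like the next question?')
      .getResponse();
  },
};

The handler that asks the question would set session.askedQuestion to true before responding, so the flag is only active while an answer is actually expected.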
One way to solve this could be to use Dialogs. You can use auto delegation for dialogs:
"Enable auto delegation, either for the entire skill or for specific intents. In this case, Alexa completes all of the dialog steps based on your dialog model. Alexa sends your skill a single IntentRequest when the dialog is complete."
Delegate the Dialog to Alexa

Reprompt user if no response in google action?

I am trying to make reprompts work for my action built using the Dialogflow SDK.
I have an intent 'answer-question', but I would like a fallback intent to trigger if the user does not reply at all (after a certain amount of time, if possible).
I have tried to implement the instructions in this guide: reprompts google action
So I created a custom fallback intent for my answer-question intent, which has an event of actions_intent_NO_INPUT and a context of answer-question-followup.
However, when testing the intent, it waits indefinitely for a user response and never triggers this custom fallback intent.
The "no input" scenario only happens on some devices.
Speakers (such as the Google Home) will generate a "no input". You can't control how long it will wait, however.
Mobile devices will not generate a "no input" - they will just turn the microphone off, and the user will need to press the microphone icon to open the mic again.
When testing using the simulator, it will not generate "no input" automatically, but you can generate a "no input" event using the button next to the text input area. Make sure you're in a supported device type (such as the speaker) and press the icon to indicate you're testing a "no input" event.
Finally, make sure your contexts make sense and remember that Intents reflect what a user says or does - not what you're replying with.
Although you've specified an Input Context for the "no input" Intent, which is good, you didn't specify that you've also set that context as an Output Context on the previous Intent. Given your description, it shouldn't be set in 'answer-question', because you're not expecting a no-input after the user answers the question; the no-input happens instead of the user answering. So the same Input Context should be set on both the Intents where you expect the user to answer the question and the Intent that handles the user saying nothing.
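If you set your contexts from a fulfillment webhook rather than in the Dialogflow console, a rough sketch with the actions-on-google library might look like this; the intent names and question text are placeholders, while the context name is the one from your description:

'use strict';

const { dialogflow } = require('actions-on-google');

const app = dialogflow();

// Ask the question and set the follow-up context, so both the answer intent
// and the actions_intent_NO_INPUT intent (with the same input context) can fire.
app.intent('ask-question', (conv) => {
  conv.contexts.set('answer-question-followup', 2);   // lifespan of 2 turns
  conv.ask('Here is your question: what is the capital of France?');
});

// Matched via the actions_intent_NO_INPUT event while the context is active.
app.intent('answer-question-no-input', (conv) => {
  conv.ask('Sorry, I did not catch that. What is the capital of France?');
});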

actions_intent_CANCEL not working as expected

I am trying to follow this great article on Medium written by Jessica Dene. When users say a global cancel command such as "quit", I want my action to respond with a "goodbye" message. I have tried to follow the instructions provided by Jessica as illustrated below:
Add the actions_intent_CANCEL event to my end intent
Know More - no - no is my end intent. As you can see below, when I try to add "actions_intent_CANCEL" under Events, I can't see it as a suggestion in the drop-down.
But given that actions_intent_CANCEL does exist according to the docs, I added it anyway.
Error
I saved the intent and tried it in the web simulator, and I see the error below.
Any idea why I am getting this error?
Typing actions_intent_CANCEL in directly was completely appropriate. Most of the entries in the dropdown are for Welcome-like intents rather than in-conversation events that can occur. You have the right name.
It sounds like you're handling it mostly correctly. The only additional thing you need to do is to explicitly close the conversation.
If you are using a webhook for fulfillment, how you do this depends on the library you're using (assuming you're using a library).
If you're using the actions-on-google library you would use the conv.close() function:
conv.close(`Okay, let's try this again later.`);
With the dialogflow-fulfillment library, it would be agent.end():
agent.end(`Okay, let's try this again later.`);
If you're using multivocal, you can either set the environment setting ShouldClose to true, or set it to true in a Response.
Response: {
  "Action.multivocal.welcome": [
    {
      Template: {
        Text: "Hello world."
      },
      ShouldClose: true
    }
  ]
}
If you are returning the JSON yourself, you can set payload.google.expectUserResponse to false.
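A rough sketch of that raw Dialogflow v2 webhook response; the speech text is a placeholder:

{
  "payload": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Okay, let's try this again later."
            }
          }
        ]
      }
    }
  }
}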
Finally, if you are not using a webhook for fulfillment, but are just using the Responses section of Dialogflow, you would turn "Set this intent as end of conversation" on.
Yes, actions_intent_CANCEL has been removed from the docs and also from the dropdown list of events in Dialogflow.
So for exiting the conversation, you can try the following:
(1) Make an entity with entries for all the phrases for exiting the conversation, e.g. bye, goodbye, bbye, talk to you later.
(2) Make an intent with examples of the user leaving the conversation, e.g. I have some work, bye for now.
(3) Select the end-of-conversation toggle at the bottom of the intent so that the conversation ends with the sample response.
(4) Also add a BYE/CANCEL suggestion to all the intents for better conversation flow.
Using the above steps, you can mimic the actions_intent_CANCEL event.
