Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery - dialogflow-es

I've developed an Alexa skill and now I am in the process of porting it over to a Google Action. At the center of my Alexa skill, I use the AMAZON.SearchQuery slot type to capture free-form text messages. Is there a similar entity/parameter type for Google Actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"

Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using @sys.any
The most equivalent entity type in Dialogflow is the built-in type @sys.any. To use this, you can create an Intent, give it a sample phrase, and select the part of the text that represents what you want included in the parameter. Then select the @sys.any entity type.
Afterwards, the training phrase shows the selected text annotated as a parameter.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intent would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or, from the list of Intents, select the three-dot menu next to the Create button and then select "Create Fallback Intent".
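A minimal sketch of reading that raw utterance in a Fallback Intent handler, assuming the dialogflow-fulfillment library (the reply text is just illustrative):

const { WebhookClient } = require('dialogflow-fulfillment');

function fallbackHandler(agent) {
  const rawText = agent.query; // the entire text of what the user said
  agent.add(`You said: ${rawText}`);
}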
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Actions SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Actions SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
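A minimal sketch using the actions-on-google library's Actions SDK support (the prompt wording is an assumption):

const { actionssdk } = require('actions-on-google');
const app = actionssdk();

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi! Say anything and I will handle the raw text myself.');
});

app.intent('actions.intent.TEXT', (conv, input) => {
  // `input` is the raw speech-to-text result; run your own NLP/NLU on it.
  conv.ask(`You said: ${input}`);
});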

Most platform-based conversational AI systems are built around three main concepts:
1. Intent - where the conversation logic is defined
2. Entity - the data the intent operates on
3. Response - what the user hears once processing is complete
There is another important component, the webhook, which is used to interact with external APIs.
The basic functionality is the same across all of these platforms. I have used Dialogflow (developed by Google; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that providing good training phrases is very important for precise results, since the output depends heavily on the sample input.

Related

Issues with attempting to enter intents

I'm having an issue when attempting to enter specific intents based on the value of a property.
I currently have a question that gets asked, which then fires off to the Microsoft Translator via an HTTP request, and from that, it fires off to the LUIS API with the translated text.
After that, I would like to enter an intent based on the top intent that the LUIS API call brought back.
I have the Translator and the LUIS API bringing back values, and I can output these using Send Responses.
However, when I attempt to call an intent based on the value of the property, I just get an Object Reference error.
Is what I'm trying to do possible, and if so, am I going about this entirely the wrong way, causing more issues for myself?
Thanks in advance
I'm trying to understand exactly what you are trying to achieve. Do I summarize it correctly as follows?
You start a main dialog. In that dialog you take some user input.
You translate the input, and manually send the translated text off to LUIS for intent recognition.
Based on the recognized intent, you want to start a specific sub dialog.
I don't believe you can just 'call an intent'. An intent is the result of a LUIS or Regex recognizer, which is processed automatically by Bot Framework. The recognizer runs on every user input, so there is no need to call LUIS yourself via an HTTP request. The recognizer (LUIS or RegEx) is configured in the main dialog properties in Bot Framework Composer:
Although in this case it looks like you are doing the LUIS intent recognition manually, because you want to do translation upfront. To achieve that scenario with the built-in recognizer, you would need translation middleware. There is an ongoing discussion on GitHub about translation middleware for Bot Framework Composer, although the sample code is not ready yet.
While there are no code samples for the translation middleware yet, I believe what could already help you today is to start a subdialog based on the recognized intent, similar to what you already show in your screenshots.
Basically, instead of "Send a response" at the end of your dialog, you would have something like the following:
My sample here uses user input instead of the recognized intent; you would replace the user input with your intent variable. Based on the recognized intent, you would then be able to spin up a specific dialog to handle that intent.
The result would look something like:
About triggers, what you currently configured in your screenshot shows "no editor for null". I believe this might cause the "object reference" issue. Normally it should display a trigger phrase. For example, the below means:
If user inputs the text "triggerphrase"
And the dialog variable 'topintent' was previously set to 'test', then run this trigger.
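In Composer, that combination is expressed as the trigger's Condition, which is an adaptive expression. A sketch, assuming your variable really is stored at dialog.topintent (as your screenshots suggest):

turn.activity.text == 'triggerphrase' && dialog.topintent == 'test'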

(Alexa) Is it possible to get the response in the same Intent Handler?

I have a custom Alexa Skill, similar to a Q&A skill, in which I ask the user for a response (say option_1, option_2, option_3). But when the user responds with one of these options, a different intent (say ruleIntent) is triggered, because the option text is somewhat similar to its utterances.
I think it is not a good design if more than one IntentHandler can be triggered by the same (or a similar) phrase, but I don't know the text of the options in advance to avoid this (or what the user is going to say as the answer to the asked question). If I could somehow maintain the context of the user's response, I think that would be one solution.
Example:
1. User: Start a Science test. {invokes testIntent}
2. Alexa: Okay, but before starting, do you want to know the rules? Please answer Yes or No. {response generated from testIntentHandler}
3. User: Yes {could invoke many intents}
In line 3, even if I hard-code this to an Intent (say ruleIntent), what will happen if some question has Yes or No among its options? How will I differentiate that and map it to the response to the asked question?
One way to deal with this is to track the state using persistent or session attributes.
You can then check the state in the canHandle method to route the user to the appropriate intent handler.
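A minimal sketch with the ASK SDK v2 for Node.js (the attribute name questionContext and the speech text are assumptions):

const Alexa = require('ask-sdk-core');

// Route "Yes" to the rules flow only while we are awaiting the rules answer.
const YesIntentHandler = {
  canHandle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent'
      && attrs.questionContext === 'awaitingRulesAnswer';
  },
  handle(handlerInput) {
    const attrs = handlerInput.attributesManager.getSessionAttributes();
    attrs.questionContext = undefined; // clear the state once it is consumed
    handlerInput.attributesManager.setSessionAttributes(attrs);
    return handlerInput.responseBuilder
      .speak('Here are the rules...')
      .reprompt('Shall we begin?')
      .getResponse();
  },
};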
One way to solve this could be to use Dialogs. You can use auto delegation for dialogs:
Enable auto delegation, either for the entire skill or for specific intents. In this case, Alexa completes all of the dialog steps based on your dialog model. Alexa sends your skill a single IntentRequest when the dialog is complete.
Delegate the Dialog to Alexa
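If you prefer manual delegation over auto delegation, a sketch of handing the next dialog turn back to Alexa (ASK SDK v2) could be:

return handlerInput.responseBuilder
  .addDelegateDirective() // let Alexa run the next step of the dialog model
  .getResponse();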

Dialogflow inline editor: asking for additional info

I am developing a google assistant app on Dialogflow.
And I have an intent that receives two entities: @name and @age.
Using fulfillment through the inline editor, I verify whether the age is below 18.
In that case I need to ask for additional info, I need to ask the name of the person responsible for the child.
I looked around the internet, including the fulfillment samples at https://dialogflow.com/docs/samples
I believe it would look something like this:
let conv = agent.conv();
conv.ask('As your age is under 18 I need the name of the person responsible for you:');
//Some code to retrieve user input into a variable
agent.add(conv);
But I was unable to find how to do it.
Can someone help me to achieve this?
Thanks in advance.
While you are handling an Intent, there is no way to "wait for" the user to respond to your question. Instead, you need to handle user input this way:
1. You send a response back from your Intent.
2. The user replies with something they say.
3. You handle this new user statement through another Intent.
Intents always represent the user taking some action - usually saying something.
So one approach would be to create a new Intent that accepts the user's response. But somehow you need to distinguish this response from the initial Intent that captured the person's name.
One way to do this would be to also set a Context when you ask the question about who the responsible adult is. Then you can have a different Intent that is triggered only when that Context is set, and handle this new Intent to get the name of the adult.
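A minimal sketch with the dialogflow-fulfillment library (the context name awaiting_guardian_name and both intent names are hypothetical):

const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  function collectAgeHandler(agent) {
    const age = agent.parameters.age;
    if (age < 18) {
      // Activate a context so the *next* user utterance can match the
      // follow-up intent that captures the guardian's name.
      agent.context.set({ name: 'awaiting_guardian_name', lifespan: 2 });
      agent.add('As your age is under 18, I need the name of the person responsible for you.');
    } else {
      agent.add('Thanks, you are all set.');
    }
  }

  function guardianNameHandler(agent) {
    // This intent would have "awaiting_guardian_name" as its input context.
    agent.add(`Thanks, I noted ${agent.parameters.name} as the responsible person.`);
  }

  const intentMap = new Map();
  intentMap.set('CollectAgeIntent', collectAgeHandler);      // hypothetical name
  intentMap.set('GuardianNameIntent', guardianNameHandler);  // hypothetical name
  agent.handleRequest(intentMap);
};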

How do I trigger an intent using Dialogflow?

I have a conversation in Dialogflow to select a favourite type of drink, and then, depending on the category of drink, there are follow-up questions (i.e. follow-up intents).
Under the Intents tab I have the following intents:
Default Welcome Intent
Favourite Drink Intent
Coffee Intent
  follow up
Soft Drink Intent
  follow up
Juice Intent
  follow up
I use the training phrase in the Favourite Drink Intent and ask:
"What is your favourite drink?"
And store the response in an entity @drink.
But I don't know how to then trigger the "Soft Drink", "Juice" or "Coffee" intent depending on the user's response. If I was writing code I'd use a switch statement or if/else, but that probably doesn't apply here.
I wasn't sure if I had to use the fulfillment inline editor or I could just do that from within the Intent UI.
Thanks
In general - think of Intents as capturing what the user could be saying. Although Intents also have replies, this isn't their primary purpose.
Depending on what, exactly, you're trying to do, there are a few approaches. All three of them require fulfillment code, which you can write using the built-in editor or (better) a webhook under your own control.
If you want to use Intents to determine how to reply
This isn't really the best idea, but it is possible. In your fulfillment code, you would have a switch statement against the parameter with the user's selection. Based on this, you would trigger a followup event from your fulfillment. Your other Intents would have the Event section populated with the possible events, and the system would pick which one to trigger and use for fulfillment/response.
This is a bit of a kludge for what you want, probably.
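A sketch of triggering a followup event with the dialogflow-fulfillment library (the event name COFFEE_EVENT is hypothetical and must match the Events field of the target Intent):

function favouriteDrinkHandler(agent) {
  if (agent.parameters.drink === 'coffee') {
    // Hands the conversation to whichever Intent lists COFFEE_EVENT
    // in its Events section.
    agent.setFollowupEvent('COFFEE_EVENT');
  }
}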
Update to clarify based on questions in the comments. Sending an event directly triggers a different Intent. Sometimes this is what you want, but it is somewhat exceptional. Most of the time you want to use one of the methods below. In particular, you should remember that Intents are mostly meant to represent what the user is trying to do (what they "intend" to do), and this is mostly represented by what they're saying. Intents are good to capture the complex ways people talk instead of forcing them into a phone-tree-like "conversation".
If you just want to reply to each possible user response differently
You can use the fulfillment webhook code to determine what response should be sent to the user. You don't indicate what library you're using, but in general you'd write code that would determine what message should be sent to the user based on the drink type selected and include that as the speech and/or display text in the response.
You wouldn't use the other, drink specific, Intents in these cases. There isn't any need for them. Unless...
You want to reply to each user response differently, and the followup conversation might be different
Remember - Intents are really best for specifying what you expect the user to say. Not what you expect to reply with. So it is reasonable that you may have a different conversation based on if they selected Coffee (where you might ask how much sugar they want) or Juice (where you might ask if they want a straw).
In this case, you would still do as you have in the previous case (use your fulfillment to include a tailored message in your reply, possibly to prompt them for that info) and include in the reply an Output Context indicating what their choice was. You should do this as part of the response, rather than setting it in the Intent, since you'll want to name it differently for each beverage type.
Then you can create Intents specific to each beverage type with what you expect the user to say. For those specific to Coffee, you would set the Input Context to require that the coffee context has been set; the soda context if they specified soda, and so forth.
Update, since you indicated in your comment that this sounded like the avenue you were interested in.
In this scenario, you'd do as you described (almost):
Get the value for the drink parameter with code something like
const drink = request.body.queryResult.parameters.drink;
Do a switch based on this, and in the body of each case set what we'll reply with and what context we should remember. Something like this pseudocode, perhaps:
switch (drink) {
  case 'coffee':
    context = 'order_coffee';
    msg = 'Do you want sugar with that?';
    break;
  case 'soda':
    context = 'order_soda';
    msg = 'Do you want a bottle or can?';
    break;
  case 'juice':
    context = 'order_juice';
    msg = 'Would you like a straw?';
    break;
}
// Format JSON setting the message and context
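If you happen to be using the dialogflow-fulfillment library, that last step might look like this sketch (the lifespan of 2 is an assumption):

agent.add(msg);                                    // send the tailored reply
agent.context.set({ name: context, lifespan: 2 }); // remember the drink choice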
You would then have Intents that would be triggered based on a combination of two things:
What the context is
What the user has said
For example, you would want an Intent (let's call it "coffee.black") which would be triggered if the order_coffee context is active and the user answered your question with "No" or "Just black" or other valid combinations.
But you'd want a different Intent (say, "juice.nostraw") if the order_juice context is active and the user replied "No".
And it wouldn't make much sense at all if the user said "No" while the order_soda context was active, so you'd want to try and direct them back to the subject at hand.
Remember, the Intent is for what the user says. Not for what your voice agent is saying. Your agent doesn't normally "trigger" an Intent - the user triggers it based on what they say.
In the example I gave, there might be other Intents that are valid for each of those contexts. For example, you might have a "coffee.sugar" Intent that is valid for the order_coffee context and responds to them saying "Yes". And another one where they might say "Just cream". There are lots of other things they might say as well, but it is important to your agent that the directions they're giving you have to do with ordering coffee.
As for your original question...
(To answer your original, now edited, question: Yes, you can create Intents from within your fulfillment. You almost certainly don't want to do this, however.)

Parameter value filling with quick responses in messenger

I have created a bot using Dialogflow (api.ai) and integrated it with Facebook Messenger. I want to get parameter values from the user, like city and date (today, tomorrow), by using the quick reply feature of Messenger, where the user is presented with select-box-like options and can tap on one of them. The required parameter then receives the tapped value, saving the user from typing it manually.
I cannot find anywhere in the documentation any way to fill up parameter values (slots) using quick replies. There is an option to give quick replies in the response section, but the response section is called on fulfilment, and if I take user input in the response, then I have to create another follow-up intent to process the user's response further, because the current intent gets fulfilled after the response.
If I add quick replies in the response section, then I have to create multiple levels of follow-up intents. For example: I take city input in one intent and give the user two options (like New York, Delhi). Then I have to create two follow-up intents, each handling one reply (New York or Delhi), and then for each follow-up intent I will have to create more follow-up intents to get more parameter inputs.
This can get pretty complex when more levels are added! Amazon Lex has this feature of filling slots using quick replies. Can't I just fill up parameter values directly using the quick replies like Lex?
You don't have to go that far. There is a simpler way, using entities and prompts in Dialogflow. The workflow can be: Weather (intent) -> quick reply (New York/Delhi) -> City (intent, using entities here) -> quick reply (Today/Tomorrow) -> use different intents for today and tomorrow, since you will have different responses. You don't need to create different intents unless you have different responses. The "User says" phrases can have different parameters, for which you can define different prompts as well. This will again reduce the complexity of creating follow-up intents. Let me know if you need more explanation on this.
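For the quick replies themselves, the dialogflow-fulfillment library's Suggestion responses render as quick replies on Facebook Messenger; a minimal sketch (city names taken from the question, handler name hypothetical):

const { Suggestion } = require('dialogflow-fulfillment');

function cityPromptHandler(agent) {
  agent.add('Which city would you like the weather for?');
  // These render as tappable quick replies in Messenger.
  agent.add(new Suggestion('New York'));
  agent.add(new Suggestion('Delhi'));
}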
