I have created an Amazon Lex bot that offers several services:
open a case
check status
status via email
welcome intent
For the first option, opening a case requires a reason for which the case is being opened.
My bot accepts any value as the reason; even if the user enters just a number, it stores that number as the reason.
The problem
All I want is to prevent my bot from accepting values that are only integers. Reasons like "broken laptop" or "internet issue" are fine.
It sounds like you need to use the AMAZON.AlphaNumeric type as your slot type. Combined with a Lambda function, you can then validate the user's input and respond accordingly based on whether the user inputs a number or a string of text.
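For instance, a minimal sketch of such a validation Lambda (Lex V1 event format; the slot name "Reason" is an assumption) might look like this:

```python
# Hypothetical Lex (V1) validation Lambda: re-elicit the "Reason" slot
# whenever the user supplies a purely numeric value.
def lambda_handler(event, context):
    intent = event["currentIntent"]
    slots = intent["slots"]
    reason = slots.get("Reason")  # "Reason" is an assumed slot name

    # Reject input that is only digits and ask again with a hint.
    if reason is not None and reason.strip().isdigit():
        return {
            "dialogAction": {
                "type": "ElicitSlot",
                "intentName": intent["name"],
                "slots": {**slots, "Reason": None},
                "slotToElicit": "Reason",
                "message": {
                    "contentType": "PlainText",
                    "content": "Please describe the reason in words, e.g. 'broken laptop'.",
                },
            }
        }

    # Otherwise let Lex continue collecting any remaining slots.
    return {"dialogAction": {"type": "Delegate", "slots": slots}}
```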
I'm having an issue when attempting to enter specific intents based on the value of a property.
I currently have a question that gets asked, which is sent off to the Microsoft Translator via an HTTP request, and from that, the translated text is sent off to the LUIS API.
After that, I would like to enter an intent based on the top intent that the LUIS API Call brought back.
I have the Translator and the LUIS API bringing back values, and I can output these using Send Responses:
However, when I attempt to call an intent based on the value of the property, I just get an Object Reference error:
Is what I'm trying to do possible, and if so, am I going about this entirely the wrong way and causing more issues for myself?
Thanks In Advance
I'm trying to understand exactly what you are trying to achieve. Is the following a correct summary?
You start a main dialog. In that dialog you take some user input.
You translate the input and manually send the translated text off to LUIS for intent recognition.
Based on the recognized intent, you want to start a specific subdialog.
I don't believe you can just 'call an intent'. An intent is the result of a LUIS or RegEx recognizer, which is processed automatically by Bot Framework; the recognizer runs on every user input. There is no need to call LUIS yourself via an HTTP request. The recognizer (LUIS or RegEx) is configured in the main dialog properties in Bot Framework Composer:
Although in this case it looks like you are doing the LUIS intent recognition manually, because you want to do translation upfront. To achieve that scenario with the built-in recognizer, you would need translation middleware. There is a short discussion going on on GitHub about translation middleware for Bot Framework Composer, although the sample code is not ready yet.
While there are no code samples for the translation middleware yet, I believe what could already help you today is to start a subdialog based on the recognized intent, similar to what you already show in your screenshots.
Basically, instead of "Send a response" at the end of your dialog, you would have something like the following:
My sample here uses user input instead of the recognized intent. You would replace the user input with your intent variable instead. Based on the recognized intent, you would be able to spin up a specific dialog to handle that recognized intent.
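Since you are already calling LUIS over HTTP yourself, the intent variable could come straight from the prediction response. A rough sketch of reading the top intent (LUIS v3 REST response shape; the endpoint, app id, and key are placeholders):

```python
# Read the top-scoring intent from a LUIS v3 prediction response.
import requests

def top_intent(utterance: str) -> str:
    resp = requests.get(
        "https://<region>.api.cognitive.microsoft.com/luis/prediction/v3.0/"
        "apps/<app-id>/slots/production/predict",
        params={"subscription-key": "<key>", "query": utterance},
    )
    prediction = resp.json()["prediction"]
    return prediction["topIntent"]  # e.g. "OpenCase"
```

You would store that value in a dialog variable (e.g. 'topintent') and branch on it.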
The result would look something like:
About triggers, what you currently configured in your screenshot shows "no editor for null". I believe this might cause the "object reference" issue. Normally it should display a trigger phrase. For example, the below means:
If user inputs the text "triggerphrase"
And the dialog variable 'topintent' was previously set to 'test', then run this trigger.
I have a basic lead gen bot under which I have 2 services (in 2 different intents) for which I am collecting leads. Under both of them I am collecting the name, email, and phone number and I also have checked the required tick boxes.
It works as expected when I am submitting a lead for a single service. However, if in the same interaction I also want to go for the second service, the bot again asks for the name, email, and phone number, which it already has from my interaction for the first service. How do I make sure that it doesn't ask for the details if it already has them?
I also do not mind handling it programmatically using fulfillment but I could not find any documentation.
Any help is highly appreciated
You can use user storage (https://developers.google.com/actions/assistant/save-data).
Alternatively, you can link the parameters of the two intents to the same context parameters. Set your parameter value like this: #context_name.param_name
I was able to do this by setting an output context in the first intent & using the input context in the second intent.
The trick was to assign a default value to the parameters in the second intent as "#context_name.param_name".
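For the programmatic route via fulfillment, a minimal sketch could keep the lead details alive in a context that the second intent's parameter defaults read from (Dialogflow ES webhook format; the context name "lead-details" and the parameter names are assumptions):

```python
# Hypothetical Dialogflow ES webhook (Flask) that pre-fills lead details
# from a context set during the first service's intent.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    result = body["queryResult"]
    params = dict(result.get("parameters", {}))

    # Look for a previously set context ("lead-details" is an assumption).
    for ctx in result.get("outputContexts", []):
        if ctx["name"].endswith("/contexts/lead-details"):
            saved = ctx.get("parameters", {})
            for key in ("name", "email", "phone"):
                if not params.get(key):
                    params[key] = saved.get(key, "")

    # Return the (possibly pre-filled) parameters in a refreshed context,
    # so the second intent's #lead-details.* default values resolve.
    session = body["session"]
    return jsonify({
        "outputContexts": [{
            "name": f"{session}/contexts/lead-details",
            "lifespanCount": 5,
            "parameters": params,
        }]
    })
```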
Currently I'm creating an Action for the Google Assistant.
In this Action, I ask the user to provide their phone number. After this, another intent repeats the phone number given and asks if it's correct. If the user responds with 'no', I would like to redirect the user back to the first intent so they can provide their phone number again. It should be a kind of loop.
(I'm working in a local environment, so only the intents are created within Dialogflow.)
I tried to apply contexts for this case, but somehow it didn't work.
Thank you guys!
Remember that Intents represent what the user has said, not what you are doing with that data. So saying that "another intent will repeat the phone number" suggests that you're making things more complicated than they need to be.
A better design is likely to have the Intent that collected the data do several things:
Repeat the phone number back
Prompt if this is correct
Set a context indicating you have prompted for confirmation
You can then have another Intent handle the "yes" or "no" statements responding to this prompt. The user may say other things, remember, including giving a correction to the phone number.
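As a rough illustration, a Dialogflow ES webhook for this design could look like the sketch below (the intent names "provide_phone" and "confirmation_no", the context name "awaiting_confirmation", and the parameter name "phone-number" are all assumptions):

```python
# Sketch of the suggested design using the Dialogflow ES webhook format.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    session = body["session"]

    if intent == "provide_phone":
        phone = body["queryResult"]["parameters"].get("phone-number", "")
        # Repeat the number back, prompt for confirmation, and set a
        # context so the yes/no answer can be interpreted.
        return jsonify({
            "fulfillmentText": f"I heard {phone}. Is that correct?",
            "outputContexts": [{
                "name": f"{session}/contexts/awaiting_confirmation",
                "lifespanCount": 2,
                "parameters": {"phone": phone},
            }],
        })

    if intent == "confirmation_no":
        # "No" while awaiting_confirmation is active: ask again, and the
        # phone-collecting intent will match the next utterance.
        return jsonify({"fulfillmentText": "Okay, what is your phone number?"})

    return jsonify({"fulfillmentText": "Sorry, I didn't catch that."})
```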
See also these articles (based on a StackOverflow question and answer) on designing a conversation and the Dialogflow Intents based on that conversation:
Thinking for Voice: Design conversations, not logic
Conversation to Code (Part 1)
I've developed an Alexa skill and now I am in the process of porting it over to a Google action. At the center of my Alexa skill, I use AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for google actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
Afterwards, it would look something like this.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if there are no other Intents that would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or from the list of Intents select the three dot menu next to the create button and then select "Create Fallback Intent"
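For reference, the raw utterance arrives in the webhook request as queryResult.queryText; a minimal sketch of reading it (Dialogflow ES request format):

```python
# Minimal sketch: reading the raw utterance that a Fallback Intent
# forwards to the fulfillment webhook (Dialogflow ES request format).
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    user_text = body["queryResult"]["queryText"]  # full text the user said
    return jsonify({"fulfillmentText": f"You said: {user_text}"})
```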
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
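As a rough sketch of that flow, a webhook could pull the user's full utterance out of the TEXT intent like this (v2 conversation webhook JSON shape; treat the field handling as an approximation):

```python
# Sketch of a raw Action SDK (conversation webhook) handler that extracts
# the user's full utterance from the actions.intent.TEXT intent.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/", methods=["POST"])
def fulfillment():
    body = request.get_json()
    user_text = ""
    for inp in body.get("inputs", []):
        if inp.get("intent") == "actions.intent.TEXT":
            raw = inp.get("rawInputs", [])
            if raw:
                user_text = raw[0].get("query", "")

    # Hand user_text to your own NLP/NLU here; this sketch just echoes it.
    return jsonify({
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {"richInitialPrompt": {"items": [
                {"simpleResponse": {"textToSpeech": f"You said: {user_text}"}}
            ]}},
            "possibleIntents": [{"intent": "actions.intent.TEXT"}],
        }],
    })
```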
Most of these platform-based ASR/NLU systems are built around three main concepts:
1. Intent - where the conversation logic is written
2. Entity - the data the intent operates on
3. Response - what the user hears after the processing completes
There is another important component, the webhook, which is used to interact with external APIs.
The basic functionality is the same across platforms. I have already used Dialogflow (developed by Google; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that providing good training phrases is very important for getting precise results, as the output depends heavily on the sample input.
I have created a bot using Dialogflow (api.ai) and integrated it with Facebook messenger. I want to get the parameter values from user: like city, date (today, tomorrow) by using the quick reply feature of messenger, where user is presented with select-box like options, and can tap on one of the options. The required parameter receives the user-tapped value, saving the user from typing it manually.
I cannot find anything in the documentation about filling parameter values (slots) using quick replies. There is an option to give quick replies in the response section, but the response section is reached on fulfillment, and if I take user input there, I have to create another follow-up intent to process the response, because the current intent gets fulfilled after the response.
If I add quick replies in the response section, then I have to create multiple levels of follow-up intents. For example: I take city input in one intent and give the user two options (like New York, Delhi). Then I have to create two follow-up intents, one for each reply (New York and Delhi), and for each follow-up intent I will have to create more follow-up intents to get further parameter inputs. Below is the flow diagram of this case.
This can get pretty complex when more levels are added! Amazon Lex has this feature of filling slots using quick replies. Can't I just fill up parameter values directly using the quick replies like Lex?
You don't have to go that far. There is a simpler way using entities and prompts in Dialogflow. The workflow can be: Weather (intent) -> quick reply (New York/Delhi) -> City (intent; use entities here) -> quick reply (Today/Tomorrow) -> separate intents for today and tomorrow, since they have different responses. You don't need to create different intents unless you have different responses. 'User says' phrases can carry different parameters, for which you can define different prompts as well. This again reduces the complexity of creating follow-up intents. Let me know if you need more explanation on this.
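To illustrate, a fulfillment webhook can send Messenger quick replies whose tapped text comes back as ordinary user input, so it can fill a parameter on the next intent. A minimal sketch (Dialogflow ES response format; the prompt wording is illustrative):

```python
# Sketch: a Dialogflow ES webhook response offering Messenger quick
# replies. The tapped reply arrives as normal user text, so it can fill
# a city parameter on whichever intent matches next.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    return jsonify({
        "fulfillmentMessages": [{
            "platform": "FACEBOOK",
            "quickReplies": {
                "title": "Which city?",
                "quickReplies": ["New York", "Delhi"],
            },
        }]
    })
```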