I'm building Alexa skills using Node.js in a lambda function and can't find any tutorials on the best way to confirm the data I have in the slots. I got to the point that all slots now have data but would like to have Alexa read back the request and get a confirmation from the user before proceeding. What's the best & proper way to do this?
At first I thought of using an emit with :elicitSlot, but then I would need a new slot just for the confirmation, which looks very hackish.
For example:
if (all slots have a valid value) {
    this.emit(':elicitSlot', 'confirm', "Your request is ... with data ... is this correct?");
}

if (user confirmed data is valid) {
    // do something
} else {
    // the data was not correct; get the right data
}
For whole-intent confirmation, check here. For slot-only confirmation, check here.
Also, for your follow-up question:
can the confirmation for the skill and slots be fine-tuned? For example, if one of the slots is something like a name and Alexa knows 100% what name I said, can it skip the confirmation?
Short answer: of course you can, if you do not maintain the dialog. However, it's strongly discouraged to rely on that.
In order to maintain a dialog, you have to monitor the dialogState attribute of the intent request and, as long as it is not COMPLETED, send a response whose directives attribute is [{'type': 'Dialog.Delegate'}] to keep the dialog flowing. You can also take finer control of the dialog - consult this doc. Moreover, you should omit outputSpeech and reprompt in those delegating responses, otherwise Alexa gets upset. Once the dialog state is COMPLETED, you get a confirmationStatus (for both the intent and the slots) of CONFIRMED/DENIED/NONE. If the confirmation is not successful, I have seen multiple matches being sent in the reply; when it is successful, only the matched slot value is returned.
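For example, a minimal sketch with the v1 alexa-sdk used in the question (the intent name PlanMyTripIntent is a placeholder, and it assumes the interaction model defines a dialog with confirmation prompts for that intent):

const Alexa = require('alexa-sdk');

const handlers = {
    'PlanMyTripIntent': function () {
        const request = this.event.request;

        if (request.dialogState !== 'COMPLETED') {
            // Keep delegating to Alexa's dialog model; no outputSpeech/reprompt here.
            this.emit(':delegate');
        } else if (request.intent.confirmationStatus === 'CONFIRMED') {
            // All slots are filled and the user confirmed the request.
            this.emit(':tell', 'Okay, your request has been placed.');
        } else {
            // The user denied the confirmation.
            this.emit(':tell', 'Okay, I have cancelled that request.');
        }
    }
};

exports.handler = function (event, context) {
    const alexa = Alexa.handler(event, context);
    alexa.registerHandlers(handlers);
    alexa.execute();
};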
P.S. I have had this weird issue: when Alexa asks for confirmation of one slot value and I deliberately decline twice in a row, it gives up and does nothing! That said, pretty much 99% of the time Alexa was spot on.
P.P.S. Turns out two attempts is a hard limit on Alexa's side. This is supposed to be improved in future iterations.
I have a custom Alexa skill, similar to a Q&A skill, in which I ask the user for a response (say option_1, option_2, option_3), but when the user responds with one of the asked options, a different intent (say ruleIntent) is triggered because the option text is somewhat similar to its utterances.
I think it is not good design if more than one intent handler can be triggered by the same (or a similar) phrase, but I don't know the text of the options in advance (or what the user is going to say as the answer to the asked question), so I can't avoid the overlap. If I could somehow maintain the context of the user's response, I think that would be one solution.
Example:
1. User: Start a Science test. {invokes testIntent}
2. Alexa: Okay, but before starting, do you want to know the rules? Please answer Yes or No. {response generated from testIntentHandler}
3. User: Yes {invokes many intents}
In step 3, even if I hard-code this to an intent (say ruleIntent), what will happen if some question has Yes or No among its options? How will I differentiate that and map it to the response to the asked question?
One way to deal with this is to track the conversation state using persistent or session attributes.
You can then check that state in the canHandle method to route the user to the appropriate intent handler, as sketched below.
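A minimal sketch with the Node.js ask-sdk-core (v2); the awaitingRulesAnswer attribute and the testIntent name are assumptions for illustration, and AMAZON.YesIntent would need to be in the interaction model:

const Alexa = require('ask-sdk-core');

const TestIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'testIntent';
    },
    handle(handlerInput) {
        // Remember that the next yes/no is the answer to the rules question.
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        attributes.awaitingRulesAnswer = true;
        handlerInput.attributesManager.setSessionAttributes(attributes);

        return handlerInput.responseBuilder
            .speak('Okay, but before starting, do you want to know the rules? Please answer yes or no.')
            .reprompt('Do you want to know the rules?')
            .getResponse();
    }
};

const RulesYesIntentHandler = {
    canHandle(handlerInput) {
        // Only claim "yes" while we are waiting for the rules answer.
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent'
            && attributes.awaitingRulesAnswer === true;
    },
    handle(handlerInput) {
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        attributes.awaitingRulesAnswer = false;
        handlerInput.attributesManager.setSessionAttributes(attributes);

        return handlerInput.responseBuilder
            .speak('Here are the rules: ...')
            .reprompt('Shall we start the test?')
            .getResponse();
    }
};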
One way to solve this could be to use Dialogs. You can use auto delegation for dialogs:
Enable auto delegation, either for the entire skill or for specific intents. In this case, Alexa completes all of the dialog steps based on your dialog model. Alexa sends your skill a single IntentRequest when the dialog is complete.
Delegate the Dialog to Alexa
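With auto delegation (configured in the interaction model, not in code), the handler only runs once the dialog is finished, so it can simply read the collected slots. A minimal ask-sdk-core sketch, assuming a hypothetical wantRules slot (backed by a custom yes/no slot type) on testIntent:

const Alexa = require('ask-sdk-core');

const TestIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'testIntent';
    },
    handle(handlerInput) {
        // With auto delegation, Alexa has already elicited the slots by the time this runs.
        const wantRules = Alexa.getSlotValue(handlerInput.requestEnvelope, 'wantRules');

        const speech = wantRules === 'yes'
            ? 'Here are the rules: ... Shall we begin?'
            : 'Okay, starting the science test now.';

        return handlerInput.responseBuilder
            .speak(speech)
            .reprompt('Are you ready?')
            .getResponse();
    }
};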
I am creating an Alexa skill, but it got rejected by Amazon. The way my skill works is as follows,
User: "alexa, ask doctor is it safe to use vaccine during pregnancy"
Alexa: "gives a response, fetched from DynamoDB"
- (dialogState: Complete)
I got the following review comments from Amazon:
After the skill completes a task, the session remains open with no prompt to the user. The skill must close the session after fulfilling requests if it does not prompt the user for any input.
Can anyone help me with this?
I tried to use DelegateDialog but it doesn't seem to work.
from ask_sdk_model.dialog import DelegateDirective
from ask_sdk_model.ui import SimpleCard

(handler_input.response_builder
    .add_directive(DelegateDirective())
    .speak(message)
    .ask(reprompt)
    .set_card(SimpleCard("Custom", message)))
I want Alexa to ask the user a question, like "Do you have any other question?", so that the conversation doesn't end and keeps going. I don't want to close the session right after Alexa sends the answer.
A couple of things:
The delegate directive is for when you want ASK (the Alexa Skills Kit) to determine the next thing to say. That only makes sense if you have a dialog model (which requires slots, elicitation prompts, etc.) and the dialog is not yet completed. You do not seem to be using a dialog model, and in any case you are both delegating and providing speak(), which I don't think is what you want.
For your scenario, you will likely want to produce a single output that has both the answer and the next question. It can be as simple as a string append: message = db_response + ". Anything else?"
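A minimal sketch of that idea (shown with the Node.js ask-sdk-core for illustration; the Python response_builder chain is analogous, and AnswerIntent plus the hard-coded dbResponse stand in for your intent and your DynamoDB lookup):

const Alexa = require('ask-sdk-core');

const AnswerIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AnswerIntent';
    },
    handle(handlerInput) {
        const dbResponse = '...answer fetched from DynamoDB...'; // placeholder
        const message = dbResponse + '. Do you have any other question?';

        // speak() plus reprompt() keeps the session open, so the skill is not
        // left hanging without prompting the user (the certification complaint).
        return handlerInput.responseBuilder
            .speak(message)
            .reprompt('Do you have any other question?')
            .getResponse();
    }
};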
In the Dialogflow chatbot I am creating, I have a scenario where a user can ask "what are the available vacancies you have" or can directly ask "I want to join as a project manager" or something similar. Both are handled by the same intent, called "jobs", and the position they want is a required parameter. If the user doesn't mention the position (e.g. "what are the available vacancies you have"), it lists all available vacancies with the minimum qualifications needed for each and asks the user to pick one (done with slot filling through the webhook). Since the intent is then waiting for the parameter, once the user enters the position they like it provides the details for that position.
But even when the user is trying to ask for something else (trying to trigger another intent, or they don't have enough qualifications for that vacancy, or the job they need is not in the listed vacancies), because that parameter (the job position) has not been provided, it asks again and again what position they want.
How do I trigger another intent while the chatbot is waiting for a required parameter?
There is a separate intent for "The job I want is not here". If I type the exact same phrase I used to train that intent, it works, but if the phrasing is slightly different it won't.
Try this:
1. Make your parameter "not required" by unchecking the required checkbox.
2. Keep the webhook for slot filling.
3. In the webhook, keep track of whether the parameter was provided.
4. When the intent is triggered, check programmatically for the parameter and, if it is missing, ask the user for it by playing with the contexts (see the sketch below).
5. If the user says something else, there is no "required" parameter as far as Dialogflow is concerned, so it will not repeatedly ask the user to provide it.
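A minimal sketch of steps 3 and 4 with the dialogflow-fulfillment WebhookClient; the position parameter, the awaiting_position context, and the intent names are assumptions that would have to match your agent:

const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });

    function jobs(agent) {
        const position = agent.parameters.position; // no longer marked "required" in the console

        if (!position) {
            // Ask for the position ourselves and remember that we are waiting for it.
            agent.setContext({ name: 'awaiting_position', lifespan: 2 });
            agent.add('We currently have openings for project manager and developer. Which position are you interested in?');
            return;
        }

        agent.add(`Here are the details for the ${position} position: ...`);
    }

    function jobNotListed(agent) {
        // Because the parameter is not required, this intent can still match
        // even while awaiting_position is active.
        agent.add('Sorry, we do not have that vacancy right now. Anything else?');
    }

    const intentMap = new Map();
    intentMap.set('jobs', jobs);
    intentMap.set('The job i want is not here', jobNotListed);
    agent.handleRequest(intentMap);
});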
Let me know if this helped.
I am developing a google assistant app on Dialogflow.
And I have an intent that receives two entities: #name and #age.
Using fulfillment through the inline editor, I verify whether #age is below 18.
In that case I need to ask for additional info, I need to ask the name of the person responsible for the child.
I looked around the internet, including the fulfillment samples at https://dialogflow.com/docs/samples
I believe it would look something like this:
let conv = agent.conv();
conv.ask('As your age is under 18 I need the name of the person responsible for you:');
//Some code to retrieve user input into a variable
agent.add(conv);
But I was unable to find how to do it.
Can someone help me to achieve this?
Thanks in advance.
While you are handling an Intent, there is no way to "wait for" the user to respond to your question. Instead, you need to handle user input this way:
1. You send a response back from your Intent.
2. The user replies with something they say.
3. You handle this new user statement through an Intent.
Intents always represent the user taking some action - usually saying something.
So one approach would be to create a new Intent that accepts the user's response. But somehow you need to distinguish this response from the initial Intent that captured the person's name.
One way to do this, in the case where you ask who the responsible adult is, is to also set a Context. Then you can have a different Intent that is triggered only when that Context is set, and handle this new Intent to get the adult's name.
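A minimal sketch of that pattern with the dialogflow-fulfillment WebhookClient; the intent names, the awaiting_guardian context, and the name/age/guardian parameter names are assumptions, and the follow-up intent would need awaiting_guardian as its input context in the console:

const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });

    function registerPerson(agent) {
        const name = agent.parameters.name;
        const age = agent.parameters.age;

        if (age < 18) {
            // Remember that the next answer should be the guardian's name.
            agent.setContext({ name: 'awaiting_guardian', lifespan: 2, parameters: { name: name, age: age } });
            agent.add('As your age is under 18, I need the name of the person responsible for you.');
            return;
        }

        agent.add(`Thanks ${name}, you are all set.`);
    }

    function captureGuardian(agent) {
        // Matched only while awaiting_guardian is active (input context in the console).
        const previous = agent.getContext('awaiting_guardian');
        const guardian = agent.parameters.guardian;
        agent.add(`Thanks, I registered ${previous.parameters.name} with ${guardian} as the responsible adult.`);
    }

    const intentMap = new Map();
    intentMap.set('Register Person', registerPerson);
    intentMap.set('Capture Guardian', captureGuardian);
    agent.handleRequest(intentMap);
});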
There is something I haven't managed to figure out.
Here is the conversation I would like to build:
Bot: Hello, what do you want to do?
User: Search a product
Bot: Which product are you looking for?
User: Apple
Bot -> list of products matched with apple
Here is a code fragment:
const { WebhookClient } = require('dialogflow-fulfillment');
const agent = new WebhookClient({ request, response }); // request/response come from the HTTPS handler

function searchProduct() {
    agent.add('Which product are you looking for?');
    // receive the product answer
    // -> then search for the matching product in the DB
}

const intentMap = new Map();
intentMap.set('I want a product', searchProduct);
agent.handleRequest(intentMap);
In this code, I ask the user which product they are looking for.
But when they answer "Apple", how can I receive the user's response in the same function so I can continue my process?
I know there is the "context" concept, but to continue the "search product" process I would need to come back into that function.
For now, I use dialogflow-fulfillment, and I am trying to understand this documentation to find the solution:
https://github.com/dialogflow/dialogflow-fulfillment-nodejs/blob/master/docs/WebhookClient.md
The short answer is that you can't (or, at the very least, shouldn't) do it in the "same" function. Each function represents an Intent, or what the user has communicated to us. In the function we need to do the following:
1. Determine what the user has said that is important to us.
2. Compute anything based on what they've said.
3. Send a reply to the user based on (1) and (2).
Once we have sent the reply to the user - that round of the conversation is over. We need to wait for the next Intent to be triggered by the user so we can repeat the above.
Contexts are used so we know which stage of the overall conversation we're in. As part of our reply (step 3 above), we can set a Context which will help Dialogflow determine which Intent should be triggered (and thus which function should be called to process what we know so far). Contexts can also store information about previous turns of the conversation.
Keep in mind that Intents aren't about what we say, but about what the user says. The reply we send is based on what we still need, and we would use a single Intent to capture each part of the user's answer. The function that handles that Intent would store the answer in the Context and determine the next part of the question to ask.
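Applied to the product example, a minimal sketch with dialogflow-fulfillment might look like this (the awaiting_product context and the "Give product name" intent are assumptions; that second intent would have awaiting_product as an input context and a product parameter):

const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });

    // Turn 1: "I want a product" - ask which one and set a context.
    function askProduct(agent) {
        agent.setContext({ name: 'awaiting_product', lifespan: 2 });
        agent.add('Which product are you looking for?');
    }

    // Turn 2: "Apple" - a separate intent, matched while awaiting_product is active.
    function searchProduct(agent) {
        const product = agent.parameters.product;
        // ...query the database for products matching `product` here...
        agent.add(`Here are the products matching ${product}: ...`);
    }

    const intentMap = new Map();
    intentMap.set('I want a product', askProduct);
    intentMap.set('Give product name', searchProduct);
    agent.handleRequest(intentMap);
});

Each turn of the conversation is one request/response cycle, so the database lookup happens in the second handler rather than "inside" the first function.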