Alexa Skills Kit - saving user input - Node.js

Is there a way to save the user input to a variable in the Alexa Skills Kit?

Yes, you can do that. You can store any information given by the user using slots, either built-in slots or custom slots that you define yourself.
Built-in slots are useful when you want to capture numbers, dates, names of people, and so on.
Refer to this link for the list of built-in slot types: https://developer.amazon.com/docs/custom-skills/slot-type-reference.html
If you want to store a whole statement given by the user, you can use AMAZON.SearchQuery:
As you think about what users are likely to ask, consider using a built-in or custom slot type to capture user input that is more predictable, and the AMAZON.SearchQuery slot type to capture less-predictable input that makes up the search query.
Make sure that your skill uses no more than one AMAZON.SearchQuery slot per intent. The AMAZON.SearchQuery slot type cannot be combined with another intent slot in sample utterances.
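For illustration, an intent using AMAZON.SearchQuery with a carrier phrase might look like this in the interaction model (the intent and slot names here are made up):
{
  "name": "SearchIntent",
  "samples": [
    "search for {query}",
    "find {query}"
  ],
  "slots": [
    {
      "name": "query",
      "type": "AMAZON.SearchQuery"
    }
  ]
}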

When you are creating an intent in the Skill Builder, you can specify a slot using curly brackets.
You can also define a Slot Type. You can define your own types or choose from built-in types.
Check the complete list here: https://developer.amazon.com/docs/custom-skills/slot-type-reference.html
From your Alexa skill (the Lambda function) you capture what the user said. You can get it from:
this.event.request.intent.slots.<SlotName>.value
Then you can do whatever you want with the value.
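For instance, a minimal handler sketch in the same v1 alexa-sdk style as the this.event snippet above (the intent and slot names are assumptions):
const Alexa = require('alexa-sdk'); // v1-style SDK, matching this.event above

const handlers = {
  'MyColorIsIntent': function () {
    // Read the slot value captured from the user's utterance
    const color = this.event.request.intent.slots.Color.value;
    // Keep it in session attributes for later turns
    this.attributes.favoriteColor = color;
    this.emit(':ask', `Your favorite color is ${color}. What next?`, 'What next?');
  },
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};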
Update:
Interaction schema would be:
{
  "name": "MyColorIsIntent",
  "samples": [
    "my favorite color is {Color}"
  ],
  "slots": [
    {
      "name": "Color",
      "type": "LIST_OF_COLORS"
    }
  ]
},
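The LIST_OF_COLORS type referenced above is a custom slot type; in the interaction model it could be defined with values like these (the color values are just examples):
{
  "name": "LIST_OF_COLORS",
  "values": [
    { "name": { "value": "red" } },
    { "name": { "value": "green" } },
    { "name": { "value": "blue" } }
  ]
}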

Related

How to ask the user for a single word in an Alexa Skill

I am trying to develop an Alexa skill where Alexa asks the user for a single word (and then uses the word in a sentence).
The user should be able to respond with just the word, without any phrase around it. The word can be any word found in the dictionary.
So I am trying to create Intent with an Utterance like this:
{word}
The first question is: what to use for the {word} slot? There is AMAZON.SearchQuery, which is for phrases rather than single words, but maybe that is good enough.
Unfortunately, when I try to build the model I get:
Sample utterance "{word}" in intent "GetNextWordIntent" must include a carrier phrase. Sample intent utterances with phrase types cannot consist of only slots.
So I really need a phrase around the slot, which is not what I want.
How can I create an Intent (or do it some other way) to ask the user for a single word?
I found this project: https://github.com/rubyrocks/alexa-backwardsword which claims to be a skill, that asks the user for a word and says it backward. Unfortunately the project does not really explain how it deploys itself and how it works in detail.
You can't use an AMAZON.SearchQuery slot in a slot-only utterance.
You can do that with other slot types, though.
Why?
Because it would conflict with ALL your other intents.
{
  "name": "ResponseIntent",
  "samples": ["{response}"]
},
{
  "name": "QuestionIntent",
  "samples": ["play a new question"]
},
When a user tries to invoke the other intents, it will only work occasionally; most of the time they will be routed to ResponseIntent, because the response slot is an AMAZON.SearchQuery and can match anything.
What if the user just wants to quit your skill at that moment?
User: Alexa, stop
Alexa: That's not the correct response
User: Alexa, quit!
Alexa: That's not the right response!
User is frustrated.
It generates friction. That's why Alexa requires carrier words around such a slot.
Creating an Alexa skill requires thinking differently.
It is not a web application or a voicemail system, and it can be quite challenging at times.
There is no button to press to interact with your skill.
A skill is not a single fixed path. The user can do whatever they want at any time: ask for help, invoke other intents, quit your skill, ...
What you can do is, based on the context, provide a specific slot type. For example, if you expect the word to be an animal, then you can use slot-only utterances:
"{animal}",
"the {animal}"
If you use the AMAZON.Animal slot type.
There are plenty of slot types available, and you can also extend one or create your own slot type with the values you expect (or even create a dynamic one).
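To illustrate the dynamic option: at runtime a skill can inject session-specific values with a Dialog.UpdateDynamicEntities directive. A minimal sketch in the ASK SDK v2 style (the animal slot type and its values here are assumptions):
// assumes "animal" is a custom slot type declared in the interaction model
handlerInput.responseBuilder
  .addDirective({
    type: 'Dialog.UpdateDynamicEntities',
    updateBehavior: 'REPLACE',
    types: [{
      name: 'animal',
      values: [
        { id: 'cat', name: { value: 'cat', synonyms: ['kitty'] } },
        { id: 'dog', name: { value: 'dog', synonyms: ['puppy'] } },
      ],
    }],
  })
  .speak('Which animal?')
  .reprompt('Tell me an animal.')
  .getResponse();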

DialogFlow parameter on all intents without using context/event

We are using Dialogflow for NLP. Our agent has several hundred intents, and in many of them we need a country parameter available (we retrieve the customer's country during specific interactions/dialogs; for some input channels we retrieve this information from the channel directly, e.g. from the WhatsApp number). Is there any way to propagate the country parameter to all intents without using events or contexts? The motivation is obvious: we do not want to manually add a context to every intent; some smarter solution would be handy here.
What you can do is always send the detectIntent request with a yourCustomContext context in req.queryParams.contexts carrying the country parameter, as described here. Then, in any intent you'd like, you can access it as #yourCustomContext.country in the parameters.
The context has the following structure; you can use something like this:
{
  "name": "yourCustomContext",
  "lifespanCount": number,
  "parameters": {
    "country": "Zimbabwe"
  }
}
The advantage here is that this custom context is easily extendable, in case you need to send additional details all the time, too.
If you use these parameters in a subsequent webhook call and need a more complex JSON structure, you can also use the req.queryParams.payload object.
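A rough sketch of that detectIntent call with the Node.js client (project and session IDs are placeholders; note that in the gRPC client the context name must be the full resource path and parameters must be a protobuf Struct):
const dialogflow = require('@google-cloud/dialogflow');

async function detectWithCountry(projectId, sessionId, text) {
  const sessionClient = new dialogflow.SessionsClient();
  const sessionPath = sessionClient.projectAgentSessionPath(projectId, sessionId);
  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: { text: { text, languageCode: 'en' } },
    queryParams: {
      contexts: [{
        // full resource name is required, not just the short context name
        name: `${sessionPath}/contexts/yourCustomContext`,
        lifespanCount: 50,
        // parameters is a google.protobuf.Struct
        parameters: { fields: { country: { stringValue: 'Zimbabwe' } } },
      }],
    },
  });
  return response.queryResult;
}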
Hope that helps!

I want to set entity values based on a condition. How to do that in Dialogflow?

I am making a bot for booking rooms. When booking a room, a user can choose a "Premium Service" or a "Standard Service".
However, the hotels available to be booked depend on whether "Premium" or "Standard" was chosen.
How can I do this in Dialogflow?
I tried to set up the entities "Service_type" and "Hotels". However, how do I set the values for the entity "Hotels" based on the "Service_type" the user has selected?
Please note that the purpose of the bot is to book rooms, and there are many other steps to follow to complete a booking.
You can start by creating an entity like quality; it's helpful to think of other ways the user might refer to the qualities you define as "standard" and "premium".
Now, when you create your intents, you should see that Dialogflow automatically detects your entity in the training phrases.
If Dialogflow doesn't already detect your entity, you can highlight a word in the training phrase and associate it with a type of your choosing.
That's the easy part.
In order to present a different set of hotels depending on which standard that was selected, you should look into developing a fulfillment endpoint that handles the logic.
The quality choice that the user made in the first question will be passed as a parameter, and you can add simple conditional logic to select hotels based on it:
conv.ask(`Here is a list of ${quality} hotel options for you`);
if (quality === "premium") {
  conv.ask(getPremiumHotelOptions()); // Carousel or list
} else {
  conv.ask(getStandardHotelOptions()); // Carousel or list
}
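getPremiumHotelOptions() is left undefined above; a sketch of what it might return, using the actions-on-google v2 List helper with made-up hotel data:
const { List } = require('actions-on-google');

// Hypothetical helper returning a List rich response
function getPremiumHotelOptions() {
  return new List({
    title: 'Premium hotels',
    items: {
      GRAND_PALACE: {
        synonyms: ['grand palace'],
        title: 'Grand Palace',
        description: 'Five stars, city center',
      },
      ROYAL_GARDEN: {
        synonyms: ['royal garden'],
        title: 'Royal Garden',
        description: 'Spa and rooftop pool',
      },
    },
  });
}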
You can create an empty Hotels entity and then populate it with the relevant entity values for that session in your fulfillment webhook.
If you're using node.js for your webhook, you can look into the Dialogflow library to do much of this work. The call might look something like this:
const sessionEntityTypeRequest = {
  parent: sessionPath,
  sessionEntityType: {
    name: sessionEntityTypePath,
    entityOverrideMode: entityOverrideMode,
    entities: entities,
  },
};
const [response] = await sessionEntityTypesClient.createSessionEntityType(
  sessionEntityTypeRequest
);
(See a more complete example at https://github.com/googleapis/nodejs-dialogflow/blob/master/samples/resource.js in the createSessionEntityType() function)
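For reference, the inputs used above could be put together roughly like this (project, session, and hotel values are made up; 'Hotels' must already exist as an entity type in the agent, and sessionPath is the parent):
const dialogflow = require('@google-cloud/dialogflow');

const sessionEntityTypesClient = new dialogflow.SessionEntityTypesClient();
// 'my-project' and 'my-session' are placeholders
const sessionPath = sessionEntityTypesClient.projectAgentSessionPath('my-project', 'my-session');
// the path ends with the display name of the agent-level entity to override
const sessionEntityTypePath = sessionEntityTypesClient.projectAgentSessionEntityTypePath(
  'my-project', 'my-session', 'Hotels'
);
// replace the agent-level 'Hotels' values for this session only
const entityOverrideMode = 'ENTITY_OVERRIDE_MODE_OVERRIDE';
const entities = [
  { value: 'Grand Palace', synonyms: ['Grand Palace', 'the palace'] },
  { value: 'Royal Garden', synonyms: ['Royal Garden'] },
];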

Webhook-generated list: fetch the option selected by the user

I'm pretty new to API.AI and Google Actions. I have a list of items generated by a fulfillment, and I want to fetch the option selected by the user. I've tried reading the documentation, but I can't seem to understand it.
https://developers.google.com/actions/assistant/responses#handling_a_selected_item
I also tried setting follow-up intents, but it won't work; it always ends up giving fallback responses.
I'm trying to search for a product, and the result is displayed using the list selector format. I want to fetch the option I selected. This is a search_product intent, and I have a follow-up intent choose_product.
You have two options to get information on an Actions on Google list/carousel selection event in API.AI:
Use API.AI's actions_intent_OPTION event
As Prisoner already mentioned, you can create an intent with the actions_intent_OPTION event. This intent will match queries that include a list/carousel selection, as documented here.
Use a webhook
API.AI will pass the list/carousel selection to your webhook which can be retrieved by either:
A) using Google's Actions on Google Node.js client library's app.getContextArgument() method, or
B) using the originalRequest JSON attribute in the body of the request to your webhook to retrieve list/carousel selection events. The structure of a list/carousel selection event webhook request will look something like this:
{
  "originalRequest": {
    "data": {
      "inputs": [
        {
          "rawInputs": [
            {
              "query": "Today's Word",
              "inputType": "VOICE"
            }
          ],
          "arguments": [
            {
              "textValue": "Today's Word",
              "name": "OPTION"
            }
          ],
          "intent": "actions.intent.OPTION"
        }
      ],
      ...
This is a sideways answer to your question - but if you're new to Actions, then it may be that you're not really understanding the best approaches to designing your own Actions.
Instead of focusing on the more advanced response types (such as lists), focus instead on the conversation you want to have with your user. Don't try to limit their responses - expand on what you think you can accept. Focus on the basic conversational elements and your basic conversational responses.
Once you have implemented a good conversation, then you can go back and add elements which help that conversation. The list should be a suggestion of what the user can do, not a limit of what they must do.
With conversational interfaces, we must think outside the dialog box.
Include 'actions_intent_OPTION' in the events section of the intent that you are trying to trigger when an item is selected from a list or carousel (both work).
Then use this code in the function that you will trigger in your webhook instead of getContextArgument() or getSelectedOption():
const param = assistant.getArgument('OPTION');
OR
app.getArgument('OPTION');
depending on what you named your ApiAiApp instance (e.g.):
let Assistant = require('actions-on-google').ApiAiAssistant;
const assistant = new Assistant({request: req, response: response});
Then, proceed with how it's done in the rest of the example in the documentation for list/carousel helpers. I don't know exactly why this works, but this method apparently retrieves the actions_intent_OPTION parameter from the JSON request.
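Putting that together, a minimal v1-style webhook sketch (the action name choose_product is taken from the question; everything else is an assumption):
const { ApiAiApp } = require('actions-on-google'); // v1-era client

exports.webhook = (req, res) => {
  const app = new ApiAiApp({ request: req, response: res });

  function chooseProduct(app) {
    // retrieves the list/carousel selection key
    const option = app.getArgument('OPTION');
    app.ask(`You selected ${option}.`);
  }

  // keys are the API.AI action names, not intent display names
  const actionMap = new Map();
  actionMap.set('choose_product', chooseProduct);
  app.handleRequest(actionMap);
};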
I think the issue is that responses that are generated by clicking on a list (as opposed to being spoken) end up with an event of actions_intent_OPTION, so API.AI requires you to do one of two things:
Either create an Intent with this Event (and other contexts, if you wish, to help determine which list is being handled).
Or create a Fallback Intent with the specific Context you want (ie - not your Default Fallback Intent).
The latter seems like the best approach since it will also cover voice responses.
(Or do both, I guess.)

LUIS does not recognize names with spaces

So I have a bot built with the Microsoft Bot Framework, and it uses the LUIS API for text recognition. With this bot, I'm able to ask for information about different devices in my backend. They have names like Desk, Desk 2 and Phone Booth 4. The first and second names work just fine, but whenever I send a name that contains two or more spaces, LUIS fails to recognize it. I have added all the names to a phrase list feature in LUIS, but it doesn't seem to do anything. When my bot code executes the method for that intent, the entity is just null for these kinds of names. Any idea how I might solve this? As I described, names with just one space, like Desk 2, work just fine. Maybe there is a way to save multiple words as an entity inside LUIS?
In the screenshot that accompanied the question, the top entry is "show me phone booth 4" and the bottom one is "show me desk 2".
It'll take a little legwork, but have you tried updating your model programmatically?
On the LUIS API reference, you can label individual utterances or do it in batches. The benefit of doing it this way is that you can select what should be recognized as an entity based on index position.
Example:
{
  "text": "Book me a flight from Cairo to Redmond next Thursday",
  "intentName": "BookFlight",
  "entityLabels": [
    {
      "entityName": "Location::From",
      "startCharIndex": 22,
      "endCharIndex": 26
    },
    {
      "entityName": "Location::To",
      "startCharIndex": 31,
      "endCharIndex": 37
    }
  ]
}
I admit I haven't attempted to do this before, but I do not see how labeling/training this way would logically fail.
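To actually send such a labeled example, a rough Node.js sketch against the v2.0 authoring "example" endpoint (region, app ID, version, key, and the Device entity/intent names are all placeholders):
const https = require('https');

const body = JSON.stringify({
  text: 'show me phone booth 4',
  intentName: 'ShowDevice', // hypothetical intent
  entityLabels: [
    // indices 8..20 cover "phone booth 4"
    { entityName: 'Device', startCharIndex: 8, endCharIndex: 20 },
  ],
});

const req = https.request({
  method: 'POST',
  hostname: 'westus.api.cognitive.microsoft.com', // your authoring region
  path: '/luis/api/v2.0/apps/<appId>/versions/<versionId>/example',
  headers: {
    'Ocp-Apim-Subscription-Key': '<authoring-key>',
    'Content-Type': 'application/json',
  },
}, res => res.pipe(process.stdout));
req.end(body);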
One thing I do note about your entities is that they're composed of an item plus a number. You could throw them into a composite entity, but in this case the labeling approach above is a good way to do what you're looking for.
That said, if you plan on using the office-furniture pieces as entities for a separate intent, say 'PurchaseNewOfficePieces', it might pay to use a composite entity for 'Desk 2' and 'Phone Booth 4'.
