I am building a Google Assistant application with api.ai that delivers, via a webhook, data aggregated over a date period.
It is common for people to ask for date periods using the word "since", for instance:
"What is the data since last Monday?" (Tuesday through now)
or the even trickier:
"What is the data since last year?" (an ambiguous date-period reference)
Can api.ai parse these date periods, or is it necessary to detect that the request is of a special "relative" type and then construct the date period manually?
You will probably want to use something like the #sys.date-period pre-defined entity.
For example, you can create an Intent with a "User says" phrase that captures a #sys.date-period parameter, give it a response that references that parameter, and then test it with a few queries.
The results might not be exactly what you need, so you may have to craft more of your own. If so, check out the #sys.date pre-defined entity, which may do some of the work for you, and the complete list at https://docs.api.ai/docs/concept-entities#section-date-and-time
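For a concrete picture of the webhook side, here is a minimal sketch. It assumes the API.AI v1 webhook format, a hypothetical parameter named `period`, and that #sys.date-period arrives as a "start/end" pair of ISO dates; verify those details in the test console before relying on them.

```typescript
// Minimal sketch of a webhook that reads a #sys.date-period parameter.
// Assumptions: API.AI v1 webhook format; `period` is a hypothetical
// parameter name; the value arrives as "YYYY-MM-DD/YYYY-MM-DD".
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhook", (req, res) => {
  const period: string = req.body.result?.parameters?.period ?? "";
  let speech: string;

  if (period.includes("/")) {
    const [start, end] = period.split("/");
    speech = `Aggregating data from ${start} to ${end}.`;
  } else {
    // Ambiguous phrases like "since last year" may not resolve to a
    // period at all; fall back to constructing the range manually.
    const start = new Date(new Date().getFullYear() - 1, 0, 1);
    speech = `Aggregating data since ${start.toISOString().slice(0, 10)}.`;
  }

  res.json({ speech, displayText: speech });
});

app.listen(8080);
```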
I want to be able to handle an intent where the user provides either a date period or a date in their phrase.
E.g., "What's my scheduled today" or "What's my schedule this week".
Using #sys.date-period serves an empty parameter to my webhook when "today" is used, and using #sys.date serves an empty parameter when "this week" is used.
Is there a way to accomplish this other than using #sys.any and deciphering the type of parameter myself in my fulfillment?
Use the #sys.date-time entity to get the date. I'm using it in my bot, and it works fine with all of the cases you've mentioned.
It easily identifies "today", "tomorrow", or a general date format (e.g., 21st Jan).
For more information, see the documentation.
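To illustrate the branching this implies, here is a small sketch. The exact JSON shape that #sys.date-time produces is an assumption here (a single ISO string for a date versus a start/end object for a period), as is the parameter name, so check the response JSON in the test console:

```typescript
// Sketch: distinguishing a single date from a date period in fulfillment.
// The shapes below are assumptions about what #sys.date-time produces;
// verify them against the actual webhook payload.
type DateTimeParam = string | { startDate: string; endDate: string };

function describe(when: DateTimeParam): string {
  if (typeof when === "string") {
    // "today", "tomorrow", "21st Jan" resolve to a single value.
    return `Schedule for ${when.slice(0, 10)}`;
  }
  // "this week" resolves to a range.
  return `Schedule from ${when.startDate} to ${when.endDate}`;
}

console.log(describe("2017-01-21T12:00:00Z"));
console.log(describe({ startDate: "2017-01-16", endDate: "2017-01-22" }));
```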
I am writing to ask a question about Dialogflow fulfillments.
I am trying to create an agent for Google Home, and my backend is basically a webhook implemented in TypeScript.
In the conversation that I designed, the user asks the agent to perform an action, providing a category as a parameter. Since the set of possible categories can vary over time, I am using the entity type #sys.any to detect the parameter.
My problem is that, when the fulfillment tries to identify the specific category on which the agent needs to act, the requested parameter may match multiple categories, so I need a followup intent to ask the user to clarify which category they actually want to select.
E.g. the conversation could be the following:
Agent: 'Welcome.'
User: 'Do action on **category**'
Agent: 'I have found **categoryA**, **categoryB** and **categoryC**. Please specify which one you want to select.'
User: 'Select the second' (or: 'Select **categoryB**')
Agent: 'Great, action performed on **categoryB**'
Now, I was able to build this conversation using followup intents and contexts. For example, I created two followup intents, one that detects numbers and another that detects text, so the user is routed to one or the other depending on what they say. If the user says 'The first', a number is detected, and in the backend I iterate over the categories and select the one at that index; I do a similar operation, in a different intent, if the user says 'categoryX'.
What I want to understand is: what is the proper way to achieve that kind of conversation through the Node.js fulfillment API?
Thank you for any help.
From your description - you've done precisely the right thing (although you don't need followup intents).
When you reply with the options the user has, you include a Context that contains the array of possible results. You then create Intents that have this as an Input Context and match either the index in the array (let's call this the match.index Intent) or the name (the match.name Intent).
In your webhook, the match.index Intent would determine which category was actually chosen, and then call a function that takes care of that category. Similarly, the webhook for match.name would take the parameter with the name and call the same function to take care of that category.
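As a sketch of that round-trip, assuming the API.AI v1 webhook format (the context name awaiting-choice and the parameter names index and category are illustrative choices, not a fixed API):

```typescript
// 1. When several categories match, reply with the options and stash
//    them in an output context (API.AI v1 response format assumed).
function askToDisambiguate(candidates: string[]) {
  return {
    speech: `I have found ${candidates.join(", ")}. Please specify which one you want to select.`,
    contextOut: [
      { name: "awaiting-choice", lifespan: 2, parameters: { candidates } },
    ],
  };
}

// 2. The match.index Intent ("Select the second") recovers the category
//    from the ordinal; `index` is a hypothetical 1-based parameter.
function onMatchIndex(req: any) {
  const ctx = req.result.contexts.find((c: any) => c.name === "awaiting-choice");
  const index = Number(req.result.parameters.index) - 1;
  return performAction(ctx.parameters.candidates[index]);
}

// 3. The match.name Intent ("Select categoryB") takes the name directly;
//    both paths end in the same handler.
function onMatchName(req: any) {
  return performAction(req.result.parameters.category);
}

function performAction(category: string) {
  return { speech: `Great, action performed on ${category}` };
}
```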
I'm working my way through the tutorial, and I'm pretty sure I'm following it closely, but it doesn't seem to be working.
I think I've successfully connected the value with the entity, then referenced said value in the response. But it seems like the entity is not responding.
You don't show the text response, but it seems unlikely this will do what you think it does.
As you've written it, the Intent will match if a user says something like "What is the February 10th?", which doesn't make much sense.
Specifying a parameter against the sample phrase means that you expect the user to say something that matches that parameter in that place. In this case, you're saying the parameter is of type #sys.date, so you're expecting them to say a date of some sort (there are a variety of possible things that will match).
If you want the user to say "What is the date?" as a phrase, then the "date" part shouldn't be associated with a parameter. You'll then need to fill in some value for the reply, likely through a webhook.
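A minimal sketch of such a webhook reply, assuming the API.AI v1 response shape:

```typescript
// Sketch: a fulfillment handler that fills in the date itself, since no
// parameter is captured from the phrase "What is the date?".
function handleWhatIsTheDate() {
  const today = new Date().toISOString().slice(0, 10); // e.g. "2017-02-10"
  return { speech: `Today is ${today}.`, displayText: `Today is ${today}.` };
}
```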
I want to download, or at least view, all api.ai system entities.
The purpose is to understand how they built entities like sys.number and sys.date.
The problem I'm facing is that I'm using the sys.date entity for my bot, which works very well for casual cases like "today" and "tomorrow".
But it fails in special cases: I also want "aaj", "foran", and "abhi" to be detected as the current date. These are slang words for "today" used in a specific region.
All API.AI system entities are listed at https://api.ai/docs/reference/system-entities. The values of those entities (which seem to be what you're asking for) are too large to publish (e.g., all cities).
If you wish to add additional entity values, I'd recommend creating additional entities (like today or tomorrow) with the values you believe should be included (like aaj, foran, or abhi), and handling them either in your webhook or with custom responses in API.AI that specifically use those entities.
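As one illustration of the webhook route, here is a sketch that normalizes the slang words yourself when #sys.date comes back empty; the word list is simply the one from the question, and the function names are hypothetical:

```typescript
// Sketch: fall back to a synonym table for words #sys.date doesn't know.
const TODAY_SYNONYMS = new Set(["aaj", "foran", "abhi", "today"]);

function resolveDate(rawQuery: string, parsedDate?: string): string | undefined {
  if (parsedDate) return parsedDate; // #sys.date matched normally
  const words = rawQuery.toLowerCase().split(/\s+/);
  if (words.some((w) => TODAY_SYNONYMS.has(w))) {
    return new Date().toISOString().slice(0, 10); // treat as "today"
  }
  return undefined; // genuinely unrecognized
}
```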
If you haven't already, you may want to check whether API.AI supports the language you're trying to implement. You can check the language of your agent in its settings; if it's not the right one, you can select the language you want when creating a new API.AI agent. A list of supported languages is here: https://api.ai/docs/reference/language
I'm trying to make an agent that can give me details about movies.
For example, the user says "Tell me about (movie-name)", which sends a POST request to my API with the (movie-name), which then returns the response.
However, I don't understand how to grab the movie name from the user's speech without creating a movieName entity with a list of all the movies out there. I just want to grab the next word the user says after "tell me about" and store it as a parameter. How do I go about achieving that?
Yes, you must create a movieName entity, but you do not need to create a list of all movies. Maybe you are experienced with Alexa, which requires a list of suggested values, but in api.ai you don't need to do that.
I find that api.ai is not very good at figuring out which words are part of a free-form entity like movieName, but hopefully adding enough user expressions will help it with that.
edit: the entity I was thinking of is '#sys.any', but maybe it would be better to use a list of movie names with the 'automated expansion' feature. I haven't tried that, but it sounds like the way Alexa's custom slots work, which is actually a lot more flexible (just using the list as a guideline) than people seem to think.
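To make the free-form approach concrete, here is a sketch of a fulfillment handler that forwards the captured parameter to a movie API. The endpoint and the `summary` field are hypothetical; substitute your own service:

```typescript
// Sketch: forwarding a #sys.any movieName parameter to a movie API.
// Assumes Node 18+ (global fetch); the URL and response shape are
// placeholders, not a real service.
async function handleTellMeAbout(parameters: { movieName?: string }) {
  const title = parameters.movieName;
  if (!title) {
    return { speech: "Which movie did you mean?" };
  }
  const resp = await fetch(
    `https://example.com/movies?title=${encodeURIComponent(title)}`
  );
  const movie = await resp.json();
  return { speech: `${title}: ${movie.summary}` };
}
```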