Accepting unknown entries like passwords - nlp

I am playing a little with api.ai to find out how Google Actions work. I tried something fun like "Initialize self destruction in 5 minutes authorization code 42 pi omega.", like in sci-fi films.
However, I'm failing with the basics. I know there is a system entity for the time, but what about the password? I can't simply create an entity for it, because its values would be stored, and storing a password as a set of possible values would be a stupid idea.
Yes, this is a very basic question, but I didn't find the right resources or keywords to figure out how this works. If I could enter a regular expression, I would just check for the end of the sentence.
In the end I would like to have the entities countdown and authcode, and to pass them to a backend which then produces the actual outcome, like "The big fireworks will start in 5 minutes" or "You are not authorized to do this".

With API.AI you can use the #sys.any entity type. This is a very rough equivalent of a .* regexp (or .+ if you make it required).
So when defining a phrase, you might enter the sample phrase "Initialize self destruction in 5 minutes authorization code foo bar baz". It would pick up the "5 minutes" part as a #sys.time parameter, and you'd then select the rest and create a new parameter of type #sys.any. When the user speaks, the "authcode" part is filled in with whatever they say - it doesn't try to match "foo bar baz" exactly.
In the end, it might look something like this:
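Once Dialogflow has captured the countdown (#sys.time) and authcode (#sys.any) parameters, the backend logic the question describes can be sketched in plain Node.js. The parameter names follow the question, and the valid code below is an illustrative assumption, not anything defined by the platform:

```javascript
// Hypothetical fulfillment logic for the self-destruct example.
// The authcode string is whatever #sys.any captured; VALID_CODE is a
// made-up secret for illustration only.
const VALID_CODE = '42 pi omega';

function handleSelfDestruct(params) {
  if ((params.authcode || '').trim().toLowerCase() === VALID_CODE) {
    return `The big fireworks will start in ${params.countdown}`;
  }
  return 'You are not authorized to do this';
}
```

Calling `handleSelfDestruct({ countdown: '5 minutes', authcode: '42 pi omega' })` produces the "fireworks" reply, while any other code yields the refusal.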

Related

How to handle user names in Dialogflow?

First of all, I'm new to Dialogflow as well as new to coding in general. I'm trying to build a bot that handles subscription pauses.
I have set up some intents and entities for the following steps:
Greet the user and explain what the bot can do
Request a pause for a service subscription (from a pool of ~10 services)
Ask for start time and end time of the pause (two different values)
Sum up the request and repeat the key values
I'm (almost) happy with it but I want to implement a prompt for a username. I don't know if any of the built-in variables can help me here.
That's what I'd like the conversation to look like:
(User): Hi, I would like to pause my subscription for [SUB_NAME] from [START_DATE] to [END_DATE]
(Assistant): What is your user name for the subscription?
(User): [user_name_123 or UserName123 or USER_NAME] (alphanumeric, not following a certain pattern)
(Assistant): Done. You requested a pause for [SUB_NAME] from [START_DATE] to [END_DATE] for [user_name_123]. Please check your e-mails and confirm your request.
What (I think) I need is a very simple custom variable. In Python I would go for something like this:
user_name = input("What's your user name?")
I'd like to store this as a variable that I can reference with '$'.
Is there any way to do this with Dialogflow?
Also, is it possible to pick up the user name as shown above, i.e. without ML-compatible surrounding sentence structures?
I wouldn't want the conversation to feel forced and repetitive, like so:
(Assistant): What's your user name?
(User): My user name is [user_name_123]
If you are using Actions on Google, you can use userStorage to save the username and access it later to perform tasks (in your case, pausing subscriptions).
Assuming your intent returns a username, setting it in storage is as simple as:
app.intent('ask_username', (conv, params) => {
  conv.user.storage.username = params.username; // any property name works, e.g. conv.user.storage.$
  conv.ask(`OK, what can I help you with?`);
});
Then you can simply access the username as:
conv.user.storage.username
Hope that helps!
You can tag specific words in Dialogflow's training phrases with the type #sys.any, which will be able to grab a part of the input. Then you can grab it as a parameter.
Sys.any is really useful for these kinds of free-form inputs, but it will require more training phrases, since matching only the username becomes harder.
Instead of using usernames, which don't seem to be authenticated to your service, you may want to look at Google sign-in or OAuth instead. The recommendation above will work, but isn't the best way to do usernames.

Actions on Google won't respond to explicit invocations

I'm developing an Action, let's call it "foo". It's a grocery list, so users should be able to explicitly invoke it like so:
"ask foo to add milk" (fails)
"ask foo add milk" (works, but grammatically awful)
"tell foo add milk" (fails, even though it's basically identical to the above?)
"talk to foo" ... "add milk" (works, but awkward)
I've defined "add {item} to my foo list" and "add {item}" (as well as many others) as training phrases in Dialogflow. So it seems like everything should be configured correctly.
The explicit invocations "talk to foo" (wait) "add milk" and "ask foo add milk" work fine, but I cannot get any others to work in the Actions simulator or on an actual device. In all cases it returns "Sorry, this action is not available in simulation". When I test in Dialogflow, it works fine.
It seems like the Assistant is trying to match some other unrelated skill (I'm assuming that's what that debug error means). But why would it fail when I explicitly invoke "ask foo to add milk"?
Additionally, my action name is already pretty unique, but even if I change it to something really unique ("buffalo bananas", "painter oscar", whatever) it still doesn't match my action. Which leads me to think that I'm not understanding something, or Actions is just really broken.
Can anyone help me debug this?
Edit: I spent weeks in conversation with the Actions support team, and they determined it was a "problem with my account", but didn't know how to fix it. Unfortunately, at that point they simply punted me to GSuite support, who of course know nothing about Actions and also couldn't help. I'm all out of luck and ideas at this point.
Implicit invocation is not based directly on your training phrases. Google will try to match users to the best Action for a given query, but it may not pick yours.
To get explicit invocation with an invocation phrase working, you may need to go back to the Dialogflow integrations section and configure each intent you want to serve as an implicit invocation intent.

New to DialogFlow, proper values won't appear when referencing them

I'm working my way through the tutorial and I am pretty sure I'm following it closely but it doesn't seem to be working.
I think I've successfully connected the value with the entity, then referenced said value in the response. But it seems like the entity is not responding.
You don't show the text response, but it seems unlikely this will do what you think it does.
As you've written it, the intent will match if a user says something like "What is the February 10th?", which doesn't make much sense.
Specifying a parameter against the sample phrase means that you expect the user to say something that matches that parameter in that place. In this case, you're saying the parameter is of type #sys.date, so you're expecting them to say a date of some sort (there are a variety of possible things that will match).
If you want the user to say "What is the date?" as a phrase, then the "date" part shouldn't be associated with a parameter. You'll then need to fill in some value for the reply - likely through a webhook.
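As a minimal sketch of that webhook idea (assuming a Dialogflow-style JSON response with a fulfillmentText field; the function name is made up for illustration):

```javascript
// Hypothetical webhook handler for "What is the date?".
// The "date" word stays plain text in the training phrase; the reply
// value is computed here instead of coming from a parameter.
function dateFulfillment(request) {
  const today = new Date().toDateString();
  return { fulfillmentText: `Today is ${today}.` };
}
```

The response object would be serialized as JSON and returned to Dialogflow, which speaks the fulfillmentText back to the user.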

Constructing Date-Periods Using "Since" in Api.ai

I am building a google-assistant application with api.ai that delivers data that has been aggregated over a date-period via a webhook.
It is common for people to ask for date periods using the word "since", for instance:
"What is the data since last monday" (tuesday - now)
or the even trickier:
"What is the data since last year". (ambiguous reference to date-period)
Can api.ai parse these date-periods, or is it necessary to identify if the intent request is of a special "relative" type and then construct the date-period manually?
You will probably want to use something like the #sys.date-period pre-defined entity.
For example, if you create an Intent with a "User says" with parameters such as:
and a response:
and then enter in some queries like:
These might not be exactly what you need, so you may need to craft more of your own. If so, check out the #sys.date pre-defined entity, which may do some of the work for you, and the complete list at https://docs.api.ai/docs/concept-entities#section-date-and-time
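If the webhook does receive the period, splitting it into a start and end date is straightforward. This sketch assumes the "YYYY-MM-DD/YYYY-MM-DD" serialization commonly used for #sys.date-period; verify the exact shape your agent sends before relying on it:

```javascript
// Sketch: turn a #sys.date-period parameter value like
// "2017-01-02/2017-01-09" into Date objects the backend can aggregate over.
function parseDatePeriod(period) {
  const [startStr, endStr] = period.split('/');
  return {
    start: new Date(startStr), // parsed as UTC midnight
    end: new Date(endStr),
  };
}
```

With the range in hand, the backend can answer "since last Monday" queries by filtering its data between start and end.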

Sending specific words to webhook

I'm trying to make an agent that can give me details about movies.
For example, the user says "Tell me about (movie-name)", which sends a POST request to my API with the (movie-name), which then returns the response.
However, I don't understand how to grab the movie name from the user's speech without creating a movieName entity with a list of all the movies out there. I just want to grab the next word the user says after "tell me about" and store it as a parameter. How do I go about achieving that?
Yes, you must create a movieName entity, but you do not need to create a list of all movies. Maybe you are experienced with Alexa, which requires a list of suggested values, but in api.ai you don't need to do that.
I find that api.ai is not very good at figuring out which words are part of a free-form entity like movieName, but hopefully adding enough user expressions will help it with that.
Edit: the entity I was thinking of is '#sys.any', but it might be better to use a list of movie names with the 'automated expansion' feature. I haven't tried that, but it sounds like the way Alexa's custom slots work, which is actually a lot more flexible (the list is just a guideline) than people seem to think.
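A webhook receiving the captured movieName parameter then only has to forward it to the movie API. This is a hypothetical sketch; MOVIE_API_URL and buildMovieQuery are placeholder names, not a real endpoint or library call:

```javascript
// Placeholder endpoint for illustration only.
const MOVIE_API_URL = 'https://example.com/movies';

// Build the lookup URL from the movieName parameter captured by #sys.any,
// escaping it so spaces and punctuation survive in the query string.
function buildMovieQuery(params) {
  const name = encodeURIComponent(params.movieName);
  return `${MOVIE_API_URL}?title=${name}`;
}
```

For example, `buildMovieQuery({ movieName: 'Blade Runner' })` yields `https://example.com/movies?title=Blade%20Runner`, which the fulfillment code would then fetch and turn into a spoken response.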
