Actions on Google won't respond to explicit invocations - dialogflow-es

I'm developing an Action, let's call it "foo". It's a grocery list, so users should be able to explicitly invoke it like so:
"ask foo to add milk" (fails)
"ask foo add milk" (works, but grammatically awful)
"tell foo add milk" (fails, even though it's basically identical to the above?)
"talk to foo" ... "add milk" (works, but awkward)
I've defined "add {item} to my foo list" and "add {item}" (as well as many others) as training phrases in Dialogflow. So it seems like everything should be configured correctly.
The explicit invocations "talk to foo" (wait) "add milk" and "ask foo add milk" work fine, but I cannot get any of the others to work in the Actions simulator or on an actual device. In every failing case it returns "Sorry, this action is not available in simulation". When I test in Dialogflow, it works fine.
It seems like the Assistant is trying to match some other unrelated skill (I'm assuming that's what that debug error means). But why would it fail when I explicitly invoke "ask foo to add milk"?
Additionally, my action name is already pretty unique, but even if I change it to something really unique ("buffalo bananas", "painter oscar", whatever) it still doesn't match my action. Which leads me to think that I'm not understanding something, or Actions is just really broken.
Can anyone help me debug this?
Edit: I spent weeks in conversation with the Actions support team, and they determined it was a "problem with my account", but didn't know how to fix it. Unfortunately, at that point they simply punted me to GSuite support, who of course know nothing about Actions and also couldn't help. I'm all out of luck and ideas at this point.

Implicit invocation is not based directly on the training phrases you have defined. Google tries to match the user's query to the best Action for it, but there is no guarantee it will pick yours.
To get explicit invocation with an invocation phrase such as "ask foo to add milk", you may need to go back to the Google Assistant integration in Dialogflow and add each intent you want to deep link into as an implicit invocation intent.

Related

Actions and parameters are not capturing values properly in dialogflow

I created one parameter named "Purpose" under actions and parameters. I made it a required parameter, and in the prompt I put "What is the purpose?". The entity type I am trying is #sys.any.
After the prompt, whatever I say, such as "Child protective services" or "Child protective service", I get a reply from the simulator of "I missed that, can you say that again" or "Sorry, I couldn't understand".
This was working two weeks ago and suddenly started behaving like this in Dialogflow. I also tried another way, creating a user-defined entity, but nothing helps.
Has there been any update to Dialogflow, and do I have to change anything to make this work?
It's a bug! Since yesterday, Google Assistant has not been recognizing intents and parameters properly. Lots of people are facing this problem.
I have already opened an issue and am waiting for a solution.

Invoke specific Action when Bixby capsule launched

What would be the equivalent of a LaunchRequest handler in a Bixby capsule? It would be helpful to the user to have a "Welcome" action, along with a corresponding view, that gives a welcome message and some initial conversation drivers.
action (Welcome) {
  type (Search)
  description (Provides welcome message to user.)
  output (?)
}
What do you need to add to the action so it is matched right after the capsule is invoked? What would the type() of a "Welcome" action be?
What should the output be? The action isn't really outputting a concept, but rather just prompting the user to invoke one of the other actions.
Bixby is not designed to have a generic "Welcome" page when a capsule is launched.
When a user invokes Bixby, they do so with a goal already in mind. If your capsule has been enabled by the user and its purpose matches the user's request, your capsule will be used to fulfill the user's request.
Since your capsule will only be invoked by a user request for information or a procedure (there is no "Hi Bixby, open XYZ capsule"), you only need to address the use cases you would like to handle.
If you want to provide information regarding your capsule and the types of utterances a user can try, you should define a capsule-info.bxb file and a hints file.
The contents of these files will be shown in the Marketplace where all released capsules are presented to Bixby users to enable at their discretion.
I would recommend reading through the deployment checklist to give you a better idea of all the supporting information and metadata that you can define to help users find and understand the functionality of your capsule.
Most capsules that want this behavior use utterances like "start", "begin", or "open" (your capsule may have another word that makes more sense). In your training, simply add those utterances with the goal set to the action you want to start your capsule, as sketched below.
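A minimal sketch of what that action might look like, assuming a Greeting concept that you would define in your capsule's models (neither name is part of any standard Bixby library):
// Hypothetical Welcome action; Greeting is an assumed output
// concept with a matching view that renders the welcome message.
action (Welcome) {
  type (Search)
  description (Greets the user and suggests what to try next.)
  output (Greeting)
}
You would then add training entries such as "start", "begin", and "open" with Welcome as the goal.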
How Named Dispatch Works
The current en-US dispatch patterns are the following:
"with %dispatch-name% ..."
"in %dispatch-name% ..."
"ask %dispatch-name% ..."
"ask %dispatch-name% for ..."
"ask %dispatch-name% to ..."
The current ko-KR dispatch pattern is the following:
%dispatch-name% 에서 ...
When Bixby is processing an utterance, it uses the dispatch patterns above to identify which capsule to use, then passes the rest of the user's phrase to that capsule for interpretation.
For example, consider if the example.bank had the following code block in its capsule-info.bxb file:
dispatch-name (ACME bank)
If you ask Bixby "Ask ACME bank to open", the "Ask ACME bank" phrase is used to route to the example.bank capsule. The capsule then interprets the remaining word "open" according to its training; if "open" is trained with your welcome action as the goal, the user gets the welcome greeting.
See the "How Named Dispatch Works" section of the documentation, which covers the behavior described above.

Google Assistant - how to re-prompt the user with #sys.any when an input is determined to be invalid

I'm trying to create a custom action through Google Assistant. I have custom user data which is defined by the user, and I want the user to be able to ask me something about this data, identifying which entry they want to know about by supplying its name.
ex:
User says "Tell me about Fred"
Assistant replies with "Fred is red"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
The problem I'm having is how to add training phrases or a re-prompt for when the user supplies a name that doesn't exist.
ex:
User says "Tell me about Greg"
Assistant replies with "I couldn't find 'Greg'. Who would you like to know about?"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
I've tried adding a training phrase which contains only the "name" parameter, but then if the user says "Tell me about Fred", the "name" parameter is set to "Tell me about Fred" instead of just "Fred", which means it bypasses the other training phrases I have set up.
Anyone out there who can be my Obi-wan Kenobi?
Edit:
I've used Alexa for this same project and have sent to Alexa an elicitSlot directive. Can something similar be implemented?
There is no real equivalent to an elicitSlot directive in this case (at least not the way I usually see it used), but Dialogflow does provide several tools for accomplishing what you're trying to do.
The general approach is that, when sending your reply, you also set an Output Context with the reply. You can set as parameters for the Context any information that you want to retain (what value you're prompting for and possibly other state you've already collected).
Then you can have Intents that have this context set as an Input Context. The Intent will then only be matched if the Context is active. This Intent can match #sys.any, or whatever other Entity type might be appropriate in this case.
One advantage of this approach is that it allows users to reply more conversationally, or pivot their reply away from the prompting question you've just asked. It lets users answer within the Context, or through other Intents that you've already set up for other purposes.
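For example, a webhook response that re-prompts and sets such a Context might look roughly like this; the context name "awaiting_name" and the "prompted_for" parameter are illustrative choices, and the YOUR_PROJECT/YOUR_SESSION segments are placeholders:
{
  "fulfillmentText": "I couldn't find 'Greg'. Who would you like to know about?",
  "outputContexts": [
    {
      "name": "projects/YOUR_PROJECT/agent/sessions/YOUR_SESSION/contexts/awaiting_name",
      "lifespanCount": 2,
      "parameters": {
        "prompted_for": "name"
      }
    }
  ]
}
An Intent with awaiting_name as its Input Context and a single #sys.any parameter can then capture the user's next utterance as the name.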

Change default message when assistant misunderstands user

I have created a Google Action which takes in three parameters. I have written training phrases for many word combinations, but sometimes it will not pick them up.
I set my input parameters in Dialogflow to number1, number2, and number3.
It seems that by default, if it misses a value, it will say "what is $varName";
however, this could be misleading to users, since a bare prompt like "what is number3" may be unclear.
I'd like to edit this response to be a more descriptive message.
I hope this is clear enough - I can't really post any code, since it all concerns the Dialogflow UI...
cheers!
If you want to add prompt variations for capturing parameters in an entity, follow the "adding prompt variation" steps explained here: add variations to the prompts for each required parameter, or handle it from the webhook by enabling slot filling for the webhook, as sketched below.
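If you go the webhook route, the response can supply its own prompt text when a required parameter is missing. A rough sketch of such a response; the context name is a placeholder, and in practice you would echo back the slot-filling contexts Dialogflow sent in the request so that parameter collection continues:
{
  "fulfillmentText": "I still need one more value. What should number3 be?",
  "outputContexts": [
    {
      "name": "projects/YOUR_PROJECT/agent/sessions/YOUR_SESSION/contexts/yourintent_dialog_context",
      "lifespanCount": 2
    }
  ]
}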
If you want to ask a question when the agent did not understand the intent at all, then you can either use the Default Fallback Intent for a generic reply, or create a follow-up fallback intent for the specific intent you are targeting.

Accepting unknown entries like passwords

I am playing a little with api.ai to find out how Google Actions work. I tried something fun like "Initialize self destruction in 5 minutes authorization code 42 pi omega", as in sci-fi films.
However, I'm failing at the basics. I know there is a system entity for the time, but what about the password? I cannot simply create an entity for it, because entity values are stored, and storing a password as a set of possible values would be a stupid idea.
Yes, this is a very basic question, but I didn't find the right resources or keywords to find out how this works. If I could enter a regular expression, I would just match the end of the sentence.
In the end I would like to have the entities countdown and authcode, and to pass them to a backend which then produces the actual outcome, like "The big fireworks will start in 5 minutes" or "You are not authorized to do this".
With API.AI you can use the #sys.any entity type. This is a very rough equivalent of a .* regexp (or .+ if you make it required).
So when defining a phrase, you might enter the sample phrase "Initialize self destruction in 5 minutes authorization code foo bar baz". It would pick up the "5 minutes" part as a #sys.time parameter, and you'd then select the rest and create a new parameter of type #sys.any. When the user spoke, it would fill in the "authcode" part with what they say - it wouldn't try to match "foo bar baz" exactly.
In the end, the parameters passed to your backend might look something like this:
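Here is a rough illustration of the extracted values, using the parameter names from the question (the exact shape depends on how you read them out of the webhook request):
{
  "countdown": "5 minutes",
  "authcode": "42 pi omega"
}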
