How do I set another explicit invocation for my action? - dialogflow-es

I added an intent as an implicit invocation to my action so I could use it as a deep link.
It works just fine if I say "talk to [display name] [implicit invocation phrase]".
Then I added another sample invocation, "speak to [display name]", and this works fine as well with my deep link.
So far, the user has to say "talk to chef and list all chicken recipes" or "speak to chef and list all chicken recipes", but what I want to add is "ask chef to list all chicken recipes".
In this case, "list all chicken recipes" is the invocation phrase of the intent added as an implicit invocation, and "chef" is the display name of the action.
Unfortunately, the agent returns an empty TTS when I try to use "ask chef to list all chicken recipes".
What should I put in the sample invocation section? "talk to chef" and "speak to chef" work fine, but "ask chef", or adding the whole thing, "ask chef to list all chicken recipes", does not work.

Related

How can Named Dispatch be tested without publishing to marketplace?

I am developing a capsule and want to test if it works as desired with Named Dispatch.
However, I am unable to trigger the default-action in the Simulator or as a private capsule on my device.
Instead of the default-action, another action (one that does have Training entries) is invoked.
The problem seems to be that the capsule has to be selected before testing, which circumvents named dispatch.
How can Named Dispatch (as described here https://bixbydevelopers.com/dev/docs/reference/type/capsule-info.dispatch-name#how-named-dispatch-works) be tested without publishing to the marketplace?
Current configuration
default-action (MyDefaultAction) is set appropriately in capsule.bxb.
There are no Training entries for the default-action.
I am using the commands from this list: https://bixbydevelopers.com/dev/docs/reference/ref-topics/meta-commands.de-de
You can use one of these seven reserved utterances to test in the IDE Simulator:
"speak to %dispatch-name%"
"talk to %dispatch-name%”
"start %dispatch-name%"
"load %dispatch-name%"
"ask %dispatch-name%"
"talk with %dispatch-name%”
"use %dispatch-name%"
Remember that these are only valid for the en-US target.
You can check out the GitHub example and more details in the KB article.

How To Handle Homonyms

I am in the process of creating an agent that will handle call requests via speech. For example, here is what the flow looks like:
1) User says: "I need to call John."
2) The agent grabs "John" as the parameter and, via fulfillment, queries a database for all entries that contain John in a certain field. If there is more than one John, a follow-up intent is triggered and sends a response asking which John is the desired one:
Agent says: "There are several Johns. Who do you wish to call? John Test, John Smith, John Pleis, or John Schmidt?"
3) The user wants to get in touch with John Pleis.
User says: "John Pleis"
Here is where I'm having a problem: Dialogflow recognizes "John Please" instead of "John Pleis". How can I handle this?
Update
Here is how the intent looks:
[screenshot: initial intent]
[screenshot: follow-up intent]
You should be able to address this by using your own Entity Types for the names instead of the System Entity Type #sys.any. This lets you specify the possible names that will be accepted, and Dialogflow can work with the Assistant to better understand what the user might be saying. It isn't perfect, but it can improve phrase detection and gives you tools to make detection even better.
If your directory is relatively small (a few hundred people, perhaps), you can simply create Developer Entity Types up front for all the names. (There is even an API for managing these Entity Types, so you can automate it.)
If you have too many names, you may want to create Developer Entity Types for just the possible first names (or use the System Entity Type #sys.given-name if that is suitable enough) and then, as part of your fulfillment webhook, populate a Session Entity Type with the possible full names that match.
In either of these cases, you can also use entity aliases to help improve matching. So if you see that "John Please" is still being matched, you can set it up as an alias for "John Pleis", and Dialogflow will report it as "John Pleis" for that Entity.
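For the Session Entity Type route, here is a minimal sketch using the @google-cloud/dialogflow Node.js client. The project, session, and entity type names are placeholders, and the hard-coded name list stands in for whatever your directory query returns:

```typescript
import { SessionEntityTypesClient } from '@google-cloud/dialogflow';

const projectId = 'my-agent-project';   // placeholder
const sessionId = 'current-session-id'; // placeholder

// Push the names matching the user's request into a session entity, so
// for this session Dialogflow only tries to match against these values.
async function pushMatchingNames(names: string[]): Promise<void> {
  const client = new SessionEntityTypesClient();
  await client.createSessionEntityType({
    parent: client.projectAgentSessionPath(projectId, sessionId),
    sessionEntityType: {
      // "person" is a developer entity type assumed to exist in the agent.
      name: client.projectAgentSessionEntityTypePath(projectId, sessionId, 'person'),
      entityOverrideMode: 'ENTITY_OVERRIDE_MODE_OVERRIDE',
      // Synonyms double as aliases: listing "John Please" under the value
      // "John Pleis" makes Dialogflow report the mis-heard form correctly.
      entities: names.map(name => ({
        value: name,
        synonyms: name === 'John Pleis' ? [name, 'John Please'] : [name],
      })),
    },
  });
}

// For example, after the webhook finds everyone named John:
pushMatchingNames(['John Test', 'John Smith', 'John Pleis', 'John Schmidt'])
  .catch(console.error);
```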

Actions on Google won't respond to explicit invocations

I'm developing an Action, let's call it "foo". It's a grocery list, so users should be able to explicitly invoke it like so:
"ask foo to add milk" (fails)
"ask foo add milk" (works, but grammatically awful)
"tell foo add milk" (fails, even though it's basically identical to the above?)
"talk to foo" ... "add milk" (works, but awkward)
I've defined "add {item} to my foo list" and "add {item}" (as well as many others) as training phrases in Dialogflow. So it seems like everything should be configured correctly.
The explicit invocations "talk to foo" (wait) "add milk" and "ask foo add milk" work fine, but I cannot get any of the others to work in the Actions simulator or on an actual device. In the failing cases it returns "Sorry, this action is not available in simulation". When I test in Dialogflow, it works fine.
It seems like the Assistant is trying to match some other unrelated skill (I'm assuming that's what that debug error means). But why would it fail when I explicitly invoke "ask foo to add milk"?
Additionally, my action name is already pretty unique, but even if I change it to something really unique ("buffalo bananas", "painter oscar", whatever) it still doesn't match my action. Which leads me to think that I'm not understanding something, or Actions is just really broken.
Can anyone help me debug this?
Edit: I spent weeks in conversation with the Actions support team, and they determined it was a "problem with my account", but didn't know how to fix it. Unfortunately, at that point they simply punted me to G Suite support, who of course know nothing about Actions and also couldn't help. I'm all out of luck and ideas at this point.
Implicit invocation is not based directly on what training phrases you have. Google will try to match users to the best Action for a given query, but it might not pick yours.
To get explicit invocation with an invocation phrase, you may need to go back to the Dialogflow Integrations section and configure each intent you want to serve as an implicit invocation.

Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery

I've developed an Alexa skill and now I am in the process of porting it over to a Google Action. At the center of my Alexa skill, I use the AMAZON.SearchQuery slot type to capture free-form text messages. Is there a similar entity/parameter type for Google Actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
[screenshot: the training phrase with the selected text annotated as a #sys.any parameter]
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intents would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or from the list of Intents select the three-dot menu next to the create button and then select "Create Fallback Intent".
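If you point the Fallback Intent at a webhook, the raw utterance is available on the request. Here is a minimal sketch with the dialogflow-fulfillment Node.js library; the Express wiring is illustrative, and "Default Fallback Intent" is simply the agent's default name for it:

```typescript
import express from 'express';
import { WebhookClient } from 'dialogflow-fulfillment';

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  // agent.query contains the complete text of what the user said.
  function fallback(agent: WebhookClient) {
    agent.add(`You said: ${agent.query}`);
  }

  const intentMap = new Map<string, (agent: WebhookClient) => void>();
  intentMap.set('Default Fallback Intent', fallback);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```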
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
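For instance, with the actions-on-google Node.js library (v2), handling that TEXT intent might look like the sketch below; the Express wiring and port are illustrative:

```typescript
import { actionssdk } from 'actions-on-google';
import express from 'express';

const app = actionssdk();

// actions.intent.MAIN fires on invocation; greet and leave the mic open.
app.intent('actions.intent.MAIN', conv => {
  conv.ask('Hi! Say anything and I will echo it back.');
});

// actions.intent.TEXT delivers the raw speech-to-text result each turn.
app.intent('actions.intent.TEXT', (conv, input) => {
  // `input` is the full transcription; your own NLP/NLU takes over here.
  conv.close(`You said: ${input}`);
});

const server = express();
server.use(express.json());
server.post('/fulfillment', app);
server.listen(8080);
```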
Most platform-based voice assistant systems are built on three main concepts:
1. Intent - where the conversational logic is written
2. Entity - the data the intent operates on
3. Response - what the user hears once processing is complete
There is another important component, the webhook, which is used to interact with external APIs.
The basic functionality is the same across platforms; I have used Dialogflow (developed by Google; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that providing proper training phrases is very important for precise results, since the output depends heavily on the sample input.
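To make the webhook piece concrete, here is a sketch of a bare Dialogflow v2 webhook that calls an external API and returns what the user will hear; the URL, parameter name, and response fields are hypothetical:

```typescript
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhook', async (req, res) => {
  // Dialogflow v2 sends the matched intent and parameters in queryResult.
  const params = req.body.queryResult.parameters;

  // Hypothetical external API call keyed off an entity value
  // (uses the global fetch available in Node 18+).
  const apiRes = await fetch(`https://api.example.com/recipes?q=${encodeURIComponent(params.dish)}`);
  const data = (await apiRes.json()) as { count: number };

  // fulfillmentText is the response the user will hear.
  res.json({ fulfillmentText: `I found ${data.count} recipes for ${params.dish}.` });
});

app.listen(3000);
```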

Dialogflow required parameters

In the Dialogflow chatbot I am creating, I have a scenario where a user can ask "what are the available vacancies you have" or directly say "I want to join as a project manager". Both are handled by the same intent, called "jobs", where the desired position is a required parameter. If the user doesn't mention the position (e.g. "what are the available vacancies you have"), the bot lists all available vacancies with the minimum qualifications needed for each and asks the user to pick one (done with slot filling for the webhook). Since the intent is then waiting for the parameter, once the user enters the position they like, the bot provides the details for that position.
The problem is that when the user tries to say something else (to trigger another intent, or because they don't have enough qualifications for a vacancy, or the job they want is not in the list), the bot asks again and again for the position, because the required parameter (the job position) has not been provided.
How do I trigger another intent when the chatbot is waiting for a required parameter?
There is a separate intent for "The job I want is not here". If I type exactly the same phrase I used to train that intent, it works, but if the phrasing is slightly different, it won't.
Try this:
Make your parameter not required by unchecking the required checkbox.
Keep the webhook enabled for slot filling.
In the webhook, keep track of whether the parameter has been provided.
When the intent is triggered, check programmatically for the parameter and, if it is missing, ask the user to provide it by managing the contexts (see the sketch below).
If the user says something else, there is no "required" parameter as far as Dialogflow is concerned, so it will not ask for it repeatedly and the other intent can match.
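Here is a sketch of that webhook logic with the dialogflow-fulfillment library; the intent, parameter, and context names are hypothetical stand-ins for the ones in your agent:

```typescript
import { WebhookClient } from 'dialogflow-fulfillment';

// Handler for the "jobs" intent. The `position` parameter is left
// optional in the console so that other intents can still match.
function jobs(agent: WebhookClient) {
  const position = agent.parameters.position;

  if (!position) {
    // No position yet: list the vacancies and set a short-lived context
    // so the user's next answer is routed back into this flow.
    agent.context.set({ name: 'awaiting-position', lifespan: 2 });
    agent.add('We have openings for project manager and developer. Which position are you interested in?');
    return;
  }

  // Position supplied: clear the context and give the details.
  agent.context.delete('awaiting-position');
  agent.add(`Here are the details for the ${position} position...`);
}
```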
Let me know if this helped.
