I am developing a capsule and want to test if it works as desired with Named Dispatch.
However, I am unable to trigger the default action in Simulator or as a private capsule on my device.
Instead of the default-action, another action (that does have Training entries) is invoked.
The problem seems to be that the capsule has to be selected before testing, so Named Dispatch is circumvented.
How can Named Dispatch (as described here https://bixbydevelopers.com/dev/docs/reference/type/capsule-info.dispatch-name#how-named-dispatch-works) be tested without publishing to the marketplace?
Current configuration
default-action (MyDefaultAction) is set appropriately in capsule.bxb.
There are no Training entries for the default-action.
I am using the commands from this list: https://bixbydevelopers.com/dev/docs/reference/ref-topics/meta-commands.de-de
You can use one of the seven reserved utterances to test in the IDE Simulator:
"speak to %dispatch-name%"
"talk to %dispatch-name%”
"start %dispatch-name%"
"load %dispatch-name%"
"ask %dispatch-name%"
"talk with %dispatch-name%”
"use %dispatch-name%"
Remember that these are only valid for the en-US target.
You can check out the GitHub example and more details in this KB article.
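As a quick illustration (the capsule name here is hypothetical and other required capsule-info keys are omitted; check the capsule-info reference for the full set), suppose capsule-info.bxb declares:
capsule-info {
  display-name (ACME Bank)
  dispatch-name (ACME Bank)
}
In the Simulator you could then type "speak to ACME Bank" or "ask ACME Bank". Since nothing follows the dispatch phrase, this should route into the capsule without matching any of your trained utterances, which should exercise the default-action path described in the question.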
Related
I created a parameter named "Purpose" under Actions and parameters. I marked it as required and set its prompt to "What is the purpose?". The entity type I am trying is #sys.any.
After the prompt, whatever I say, such as "Child protective services" or "Child protective service", I get a reply from the simulator like "I missed that, can you say that again" or "Sorry, I couldn't understand".
This was working two weeks ago and suddenly started behaving like this in Dialogflow. I also tried another way by creating a user-defined entity, and nothing helps.
Has any update happened in Dialogflow, and do I have to change anything to make this work?
It's a bug! Since yesterday, Google Assistant has not been recognizing Intents and parameters properly. Lots of people are facing this problem.
I have already opened an issue and am waiting for a solution.
_DM
What would be the equivalent of a LaunchRequest handler in a Bixby capsule? It would be helpful to the user to have a "Welcome" action along with a corresponding matching view, which can give a welcome message along with some initial conversation drivers.
action (Welcome) {
  type (Search)
  description (Provides welcome message to user.)
  output (?)
}
What do you need to add to the action so it is matched right after the capsule is invoked? What would the type() of a "Welcome" action be?
What should the output be? The action isn't really outputting a concept but rather just prompting the user to invoke one of the other actions.
Bixby is not designed to have a generic "Welcome" page when a capsule is launched.
When a user invokes Bixby, they do so with a goal already in mind. If your capsule has been enabled by the user and its purpose matches the user's request, your capsule will be used to fulfill the user's request.
Since your capsule will only be invoked by a user request for information or a procedure (there is no "Hi Bixby, open XYZ capsule"), you only need to address the use cases you would like to handle.
If you want to provide information regarding your capsule and the types of utterances a user can try, you should define a capsule-info.bxb file and a hints file.
The contents of these files will be shown in the Marketplace where all released capsules are presented to Bixby users to enable at their discretion.
I would recommend reading through the deployment checklist to give you a better idea of all the supporting information and metadata that you can define to help users find and understand the functionality of your capsule.
Most capsules that want this behavior use training utterances like "start", "begin", or "open" (your capsule may have something else that makes more sense). In your training, simply add those utterances with the goal set to the action you want to start your capsule with.
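For illustration, here is a rough sketch of the discoverability files mentioned above; the names, hint text, and exact keys are assumptions to be checked against the current capsule-info and hints reference pages:
// capsule-info.bxb (sketch)
capsule-info {
  display-name (My Capsule)
  dispatch-name (My Capsule)
  description (A short summary of what the capsule does.)
}
// hints.bxb (sketch)
hints {
  uncategorized {
    hint (Start My Capsule)
    hint (Open My Capsule)
  }
}
The "start" or "open" utterance itself would then be added as a training entry in Bixby Studio with your Welcome action as its goal, as described above.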
How Named Dispatch Works
The current en-US dispatch patterns are the following:
"with %dispatch-name% ..."
"in %dispatch-name% ..."
"ask %dispatch-name% ..."
"ask %dispatch-name% for ..."
"ask %dispatch-name% to ..."
The current ko-KR dispatch pattern is the following:
%dispatch-name% 에서 ...
When Bixby is processing an utterance, it uses the above dispatch pattern to identify which capsule to use, then passes the rest of the user's phrase to the capsule for interpretation.
For example, consider if the example.bank capsule had the following code block in its capsule-info.bxb file:
dispatch-name (ACME bank)
If you ask Bixby "Ask ACME bank to open", the "Ask ACME bank" phrase is used to point to the example.bank capsule. The example.bank capsule then interprets the remaining word "open" according to the training in your model, using whatever goal you trained it against - for example, a welcome greeting.
Check the "How Named Dispatch Works" section of the documentation, which describes the same behavior as above.
I'm developing an Action, let's call it "foo". It's a grocery list, so users should be able to explicitly invoke it like so:
"ask foo to add milk" (fails)
"ask foo add milk" (works, but grammatically awful)
"tell foo add milk" (fails, even though it's basically identical to the above?)
"talk to foo" ... "add milk" (works, but awkward)
I've defined "add {item} to my foo list" and "add {item}" (as well as many others) as training phrases in Dialogflow. So it seems like everything should be configured correctly.
The explicit invocations "talk to foo" (wait) "add milk" and "ask foo add milk" work fine, but I cannot get any others to work in the Actions simulator or on an actual device. In all cases it returns "Sorry, this action is not available in simulation". When I test in Dialogflow, it works fine.
It seems like the Assistant is trying to match some other unrelated skill (I'm assuming that's what that debug error means). But why would it fail when I explicitly invoke "ask foo to add milk"?
Additionally, my action name is already pretty unique, but even if I change it to something really unique ("buffalo bananas", "painter oscar", whatever) it still doesn't match my action. Which leads me to think that I'm not understanding something, or Actions is just really broken.
Can anyone help me debug this?
Edit: I spent weeks in conversation with the Actions support team, and they determined it was a "problem with my account", but didn't know how to fix it. Unfortunately, at that point they simply punted me to GSuite support, who of course know nothing about Actions and also couldn't help. I'm all out of luck and ideas at this point.
Implicit invocation is not based directly on what training phrases you have. Google will try to match users to the best Action for a given query, but it might not pick yours.
To get explicit invocation with an additional phrase (a deep link such as "ask foo to add milk"), you may need to go back to the Integrations section in Dialogflow and configure each intent you want to serve this way as an implicit invocation intent.
I've developed an Alexa skill and now I am in the process of porting it over to a Google action. At the center of my Alexa skill, I use AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for google actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
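Once the parameter is defined, its value arrives in your fulfillment as an intent parameter. Here is a minimal sketch using the actions-on-google Dialogflow client; the intent name say-message and the parameter name message are hypothetical:
import * as express from 'express';
import {dialogflow} from 'actions-on-google';

const app = dialogflow();

// 'say-message' is a Dialogflow intent with a #sys.any parameter named 'message'
app.intent('say-message', (conv, params) => {
  const message = params.message as string; // the text captured by the #sys.any parameter
  conv.ask(`You said: ${message}`);
});

// Minimal webhook server; point the Dialogflow fulfillment URL at this endpoint
express().use(express.json(), app).listen(8080);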
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intents match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or, from the list of Intents, select the three-dot menu next to the Create button and then select "Create Fallback Intent".
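In the webhook, the raw text is then available from the conversation object. A sketch, assuming the default fallback intent name and the actions-on-google library (conv.input.raw should hold the raw query, but verify against the library version you use):
import {dialogflow} from 'actions-on-google';

const app = dialogflow();

// Fires when no other intent matches and forwards the raw utterance
app.intent('Default Fallback Intent', conv => {
  const rawText = conv.input.raw; // everything the user said
  conv.ask(`I heard: ${rawText}`);
});

// Wire app into an HTTPS endpoint (Express, Cloud Functions, etc.) as shown earlier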
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Actions SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Actions SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
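A sketch of that flow with the actionssdk client from the same actions-on-google library; the replies are placeholders, and your own NLP/NLU would go where the comment indicates:
import {actionssdk} from 'actions-on-google';

const app = actionssdk();

// MAIN fires on invocation ("talk to my test app")
app.intent('actions.intent.MAIN', conv => {
  conv.ask('Hi! What would you like me to say?');
});

// TEXT delivers the raw speech-to-text result on every following turn
app.intent('actions.intent.TEXT', (conv, input) => {
  // Hand input to your own NLP/NLU here
  conv.ask(`You said: ${input}`);
});

// Wire app into an HTTPS endpoint (Express, Cloud Functions, etc.)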
Most of these platform-based ASR/NLU systems are built around three main components:
1. Intent - where all the conversational logic is attached
2. Entity - the values the intent operates on
3. Response - what the user hears after the request has been processed
There is another important piece called the webhook, which is used to interact with an external API.
The basic functionality is the same for all the platforms; I have already used Dialogflow (developed by Google, and it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that to get precise results, providing proper training phrases is very important, since the output depends heavily on the sample input.
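To make the webhook piece concrete, here is a sketch of a Dialogflow fulfillment handler that calls an external API and turns the result into a response; the intent name get-fact and the endpoint URL are hypothetical, and the built-in fetch assumes Node 18+:
import {dialogflow} from 'actions-on-google';

const app = dialogflow();

// Intent: 'get-fact'; Response: built from the external API result (the webhook step)
app.intent('get-fact', async conv => {
  const res = await fetch('https://example.com/api/fact'); // hypothetical endpoint
  const data = (await res.json()) as {fact: string};
  conv.ask(`Here is a fact: ${data.fact}`);
});

// Wire app into an HTTPS endpoint (Express, Cloud Functions, etc.)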
I am trying to make reprompts work for my action built using the dialogflow SDK.
I have an intent 'answer-question'; however, I would like a fallback intent to trigger if the user does not reply at all (after a certain amount of time, if possible).
I have tried to implement the instructions in this guide: reprompts google action
So I created a custom fallback intent for my answer-question intent, which has an event of actions_intent_NO_INPUT and a context of answer-question-followup.
However, when testing the intent, it will wait indefinitely for a user response and never trigger this custom fallback intent.
The "no input" scenario only happens on some devices.
Speakers (such as the Google Home) will generate a no input. You can't control the time it will wait, however.
Mobile devices will not generate a "no input" - it will just turn the microphone off and the user will need to press the microphone icon again to open the mic again.
When testing using the simulator, it will not generate "no input" automatically, but you can generate a "no input" event using the button next to the text input area. Make sure you're in a supported device type (such as the speaker) and press the icon to indicate you're testing a "no input" event.
Finally, make sure your contexts make sense and remember that Intents reflect what a user says or does - not what you're replying with.
Although you've specified an Input Context for the "no input" Intent, which is good, you haven't said whether you've also set that context as an Output Context of the previous Intent. Given your description, it shouldn't be set as an Output Context of 'answer-question', because the no-input case doesn't happen after the user answers the question - it happens instead of the user answering. So the same context should be the Output Context of the Intent that asks the question, and the Input Context of both the Intent where you expect the user to answer and the Intent that handles the user saying nothing.
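Once the contexts are wired up, the fulfillment can vary its reply based on how many reprompts have happened. A sketch using the actions-on-google Dialogflow client; the intent name answer-question - noinput is hypothetical, while REPROMPT_COUNT and IS_FINAL_REPROMPT are the arguments Actions on Google sends with no-input events:
import {dialogflow} from 'actions-on-google';

const app = dialogflow();

// Dialogflow intent configured with the actions_intent_NO_INPUT event
// and the answer-question-followup input context
app.intent('answer-question - noinput', conv => {
  const repromptCount = Number(conv.arguments.get('REPROMPT_COUNT'));
  if (conv.arguments.get('IS_FINAL_REPROMPT')) {
    conv.close("Okay, let's try this another time.");
  } else if (repromptCount === 0) {
    conv.ask('Sorry, I did not hear an answer. What do you think?');
  } else {
    conv.ask('Are you still there? Please answer the question when you are ready.');
  }
});

// Wire app into an HTTPS endpoint (Express, Cloud Functions, etc.)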