Invoke specific Action when Bixby capsule launched

What would be the equivalent of a LaunchRequest handler in a Bixby capsule? It would be helpful to the user to have a "Welcome" action, with a corresponding view, which can give a welcome message along with some initial conversation drivers.
action (Welcome) {
  type (Search)
  description (Provides welcome message to user.)
  output (?)
}
What do you need to add to the action so that it is matched right after the capsule is invoked? What would the type() of a "Welcome" action be?
What should the output be? The action isn't really outputting a concept, but rather just prompting the user to invoke one of the other actions.

Bixby is not designed to have a generic "Welcome" page when a capsule is launched.
When a user invokes Bixby, they do so with a goal already in mind. If your capsule has been enabled by the user and its purpose matches the user's request, your capsule will be used to fulfill the user's request.
Since your capsule will only be invoked by a user request for information or a procedure (there is no "Hi Bixby, open XYZ capsule"), you only need to address the use cases you would like to handle.
If you want to provide information regarding your capsule and the types of utterances a user can try, you should define a capsule-info.bxb file and a hints file.
The contents of these files will be shown in the Marketplace where all released capsules are presented to Bixby users to enable at their discretion.
I would recommend reading through the deployment checklist to give you a better idea of all the supporting information and metadata that you can define to help users find and understand the functionality of your capsule.
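For illustration, a minimal capsule-info.bxb and hints file might look something like this (a sketch for a hypothetical banking capsule; check the documentation for the full set of supported keys):

capsule-info {
  display-name (ACME Bank)
  description (Check balances and move money with ACME Bank.)
  dispatch-name (ACME bank)
}

hints {
  uncategorized {
    hint (Ask ACME bank for my balance)
    hint (With ACME bank, transfer money)
  }
}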

Most capsules that want this behavior train utterances like "start", "begin", or "open" (your capsule may have something else that makes logical sense). In your training, simply add those utterances with the goal set to the action that should run when your capsule starts.
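For example, assuming the Welcome action from the question, the training entries (shown here in Aligned NL; you would normally add them through the training editor in Bixby Developer Studio) could be as simple as:

[g:Welcome] start
[g:Welcome] begin
[g:Welcome] open

Each entry marks the whole utterance with your Welcome action as the goal, so an utterance like "open" routes straight to it.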

How Named Dispatch Works
The current en-US dispatch patterns are the following:
"with %dispatch-name% ..."
"in %dispatch-name% ..."
"ask %dispatch-name% ..."
"ask %dispatch-name% for ..."
"ask %dispatch-name% to ..."
The current ko-KR dispatch pattern is the following:
%dispatch-name% 에서 ...
When Bixby is processing an utterance, it uses the dispatch patterns above to identify which capsule to use, then passes the rest of the user's phrase to the capsule for interpretation.
For example, suppose the example.bank capsule had the following line in its capsule-info.bxb file:
dispatch-name (ACME bank)
If you ask Bixby "Ask ACME bank to open", the "Ask ACME bank" phrase is used to route the request to the example.bank capsule. The remaining word, "open", is then interpreted against your trained model; if "open" is trained with your welcome action as its goal, that action (here, a welcome greeting) is invoked.
See the "How Named Dispatch Works" section of the documentation, which describes this in more detail.

Related

How to capture a negative response from the user in Bixby

I am using an input-view for selection, and I can see a "None" button at the bottom of the screen. I haven't included any conversation-drivers, yet I can still see the button. How can I avoid that? Even if it can't be avoided, how can I add an event listener to it? If the user clicks or says "None", I want to give the user a custom message and pass control to another intent. Is that possible?
Also, is it possible to give the user another option if none of the utterances match the defined ones? For example:
User: What is the temperature in Oakland?
Bixby: Today, it is 73 °F in San Francisco.
User: I want to buy land on Mars?
This kind of question is out of context. How do I handle it?
In this case, I want the user to be prompted with something like "It is not possible for me to get that information, but I can tell you the weather forecast for your current location. Would you like to know?" The user might say yes or no. Yes would redirect to the weather intent, and no would say thank you.
A "None" conversation-driver is shown when the input-view is for a Concept that is optional (min(Optional)) for your Action. Changing it to min(Required) will remove the "None" conversation-driver.
If you want to keep the concept Optional as an input for the Action, you can add a default-init to your Action to kick off another Action that would help the user provide a correct input.
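As a sketch, the two options might look like this in the action's model (the Region concept and the SuggestRegion helper action are hypothetical):

// Option 1: make the input Required, which removes the "None" driver
input (region) {
  type (Region)
  min (Required) max (One)
}

// Option 2: keep the input Optional, but kick off another Action
// via default-init to help the user supply a value
input (region) {
  type (Region)
  min (Optional) max (One)
  default-init {
    intent {
      goal: SuggestRegion
    }
  }
}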
Bixby cannot create a path for out-of-scope utterances. The idea is that, since every user's personal Bixby will contain a number of capsules, your capsule will get called only if a user's utterance matches the types of utterances you have trained Bixby to recognize via your training file.

Google Assistant - how to re-prompt the user with #sys.any when an input is determined to be invalid

I'm trying to create a custom action through Google Assistant. I have custom user data which is defined by the user, and I want the user to be able to ask me something about this data, identifying which entry they want to know about by supplying its name.
ex:
User says "Tell me about Fred"
Assistant replies with "Fred is red"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
The problem I'm having is how to add a Training Phrase or re-prompt for the case where the user supplies a name which doesn't exist.
ex:
User says "Tell me about Greg"
Assistant replies with "I couldn't find 'Greg'. Who would you like to know about?"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
I've tried adding a Training Phrase which only contains the 'name' parameter, but then if the user says "Tell me about Fred", the "name" parameter is set to "Tell me about Fred" instead of just "Fred", which means it ignores the other Training Phrases I have set up.
Anyone out there who can be my Obi-wan Kenobi?
Edit:
I've used Alexa for this same project and have sent Alexa an elicitSlot directive. Can something similar be implemented?
There is no real equivalent to the elicitSlot directive in this case (at least not the way I usually see it used), but Dialogflow does provide several tools for accomplishing what you're trying to do.
The general approach is that, when sending your reply, you also set an Output Context with the reply. You can set as parameters for the Context any information that you want to retain (what value you're prompting for and possibly other state you've already collected).
Then you can have Intents that have this context set as an Input Context. The Intent will then only be matched if the Context is active. This Intent can match #sys.any, or whatever other Entity type might be appropriate in this case.
One advantage of this approach is that it allows for users to reply more conversationally, or pivot their reply away from the prompting question you've just asked. It allows for users to answer within the Context, or through other Intents that you've already setup for other purposes.
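Putting that together, a minimal fulfillment sketch using the dialogflow-fulfillment Node.js library might look like the following (the context name "awaiting-name", the handler, and the sample data are all assumptions for illustration):

import { WebhookClient } from "dialogflow-fulfillment";

const people = [{ name: "Fred", info: "Fred is red" }];

function tellAbout(agent: WebhookClient): void {
  const name = agent.parameters.name as string;
  const person = people.find((p) => p.name === name);
  if (person) {
    agent.add(person.info);
  } else {
    // Activate an Output Context so a follow-up Intent -- one with
    // "awaiting-name" as its Input Context and a @sys.any "name"
    // parameter -- can catch whatever the user says next.
    agent.setContext({ name: "awaiting-name", lifespan: 2 });
    agent.add(`I couldn't find '${name}'. Who would you like to know about?`);
  }
}

The follow-up Intent then reads its "name" parameter as usual and answers, or re-prompts again while the context is still active.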

Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery

I've developed an Alexa skill, and now I am in the process of porting it over to a Google Action. At the center of my Alexa skill, I use the AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for Google Actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intent would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided or, from the list of Intents, select the three-dot menu next to the Create button and then select "Create Fallback Intent".
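For instance, a fallback handler can read the raw utterance from the request; a sketch with the dialogflow-fulfillment library (the handler name is hypothetical):

import { WebhookClient } from "dialogflow-fulfillment";

function fallback(agent: WebhookClient): void {
  const raw = agent.query; // the full text of what the user said
  agent.add(`I heard: ${raw}`);
}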
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
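A minimal Action SDK sketch with the actions-on-google Node.js library (assuming you pipe the raw text into your own NLU) might look like this:

import { actionssdk } from "actions-on-google";

const app = actionssdk();

// Invocation ("Talk to my test app") lands on the MAIN intent.
app.intent("actions.intent.MAIN", (conv) => {
  conv.ask("Welcome! Say anything and I'll handle it myself.");
});

// Every later utterance arrives as a TEXT intent carrying the raw
// speech-to-text result.
app.intent("actions.intent.TEXT", (conv, input) => {
  // Hand `input` to your own NLP/NLU here; this sketch just echoes it.
  conv.ask(`You said: ${input}`);
});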
Most platform-based assistant systems are built on three main concepts:
1. Intent - where all the logic is attached
2. Entity - the data on which the Intent operates
3. Response - what the user hears after all the processing is done
There is another important piece, the webhook, which is used to interact with an external API.
The basic functionality is the same across platforms; I have used Dialogflow (developed by Google, and it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that to get precise results, providing proper training phrases is very important, since the output hugely depends on the sample input.

Reprompt user if no response in Google Action?

I am trying to make reprompts work for my action built using the Dialogflow SDK.
I have an intent 'answer-question'; however, I would like a fallback intent to trigger if the user does not reply at all (after a certain amount of time, if possible).
I have tried to implement the instructions in this guide: reprompts google action
So I created a custom fallback intent for my answer-question intent, which has an event of actions_intent_NO_INPUT and a context of answer-question-followup.
However, when testing the intent, it will wait indefinitely for a user response and never trigger this custom fallback intent.
The "no input" scenario only happens on some devices.
Speakers (such as the Google Home) will generate a no input. You can't control the time it will wait, however.
Mobile devices will not generate a "no input" - it will just turn the microphone off and the user will need to press the microphone icon again to open the mic again.
When testing using the simulator, it will not generate "no input" automatically, but you can generate a "no input" event using the button next to the text input area. Make sure you're in a supported device type (such as the speaker) and press the icon to indicate you're testing a "no input" event.
Finally, make sure your contexts make sense and remember that Intents reflect what a user says or does - not what you're replying with.
Although you've specified an Input Context for the "no input" Intent, which is good, you didn't specify that you've also set that Context as an Output Context of the previous Intent. Given your description, it shouldn't be set in 'answer-question', because you're not expecting a no-input after the user answers the question; the no-input would come instead of the answer. So the same Input Context should be set both on the Intents where you expect the user to answer the question and on the Intent where the user says nothing.
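As a sketch with the dialogflow-fulfillment library (the context name comes from the question; the handler names are hypothetical):

import { WebhookClient } from "dialogflow-fulfillment";

// The Intent that asks the question outputs the context that the
// no-input Intent declares as its Input Context.
function askQuestion(agent: WebhookClient): void {
  agent.setContext({ name: "answer-question-followup", lifespan: 2 });
  agent.add("What is the answer?");
}

// Handler for the custom fallback Intent whose event is
// actions_intent_NO_INPUT and whose Input Context is
// "answer-question-followup".
function noInput(agent: WebhookClient): void {
  agent.add("Sorry, I didn't catch that. What is the answer?");
}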

Change default message when assistant misunderstands user

I have created a Google Action which takes in three parameters. I have added training phrases for many word combinations, but sometimes it will not pick them up.
I set my input parameters in Dialogflow to number1, number2, and number3.
It seems that, by default, if it misses a value it will say: "what is $varName".
However, this could be misleading to users, since it may be unclear what is meant if it just prompts the user with 'what is number3'.
I'd like to edit this response to be a more descriptive message.
I hope this is clear enough - I can't really post any code, since it all concerns the Dialogflow UI...
cheers!
If you want to add prompt variations for capturing parameters, follow the "adding prompt variations" steps explained in the Dialogflow documentation: mark the parameter as required and add variations to its prompts, or handle it from your webhook by enabling slot filling for the webhook.
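If you handle slot filling in the webhook, you can supply your own wording when a parameter is missing. A sketch with the dialogflow-fulfillment library (the parameter names come from the question; the handler is hypothetical):

import { WebhookClient } from "dialogflow-fulfillment";

function calculate(agent: WebhookClient): void {
  const { number1, number2, number3 } = agent.parameters;
  if (number3 === undefined || number3 === "") {
    // Replace the default "what is $varName" prompt with clearer wording.
    agent.add("I still need the third number. For example, say 'the third number is 7'.");
    return;
  }
  agent.add(`Got ${number1}, ${number2} and ${number3}.`);
}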
If you want to ask a question when the agent does not understand the intent, then you can either use the Default Fallback Intent for a generic reply or create a follow-up fallback intent for the intent you are targeting.
