I want the conversation driver's template text to show [item], but I want the value actually sent when the driver is tapped to be a different value.
For example:
conversation-drivers {
  conversation-driver {
    template (text : itemName, ClickRealValue : itemVal)   <--- what I want
  }
}
Any help would be appreciated
Unfortunately, this cannot be done in Bixby (and it is by design).
Tapping a conversation driver results in the exact same text (a natural-language utterance) being passed as a continuation.
This makes sense because:
The user must know what they are selecting.
A conversation driver acts as if the user held the Bixby button and spoke the utterance.
It is natural to use shorter utterances, since longer conversation drivers would be pushed off the screen. All the developer needs to do is add training examples so that Bixby supports the shorter utterance as a continuation.
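For instance, a minimal sketch of a driver whose visible text is exactly the (short) utterance it sends, which you then cover with a continuation training example (the text here is a made-up placeholder):

```
conversation-drivers {
  conversation-driver {
    // The tapped text is sent verbatim as the continuation utterance,
    // so keep it short and add a matching training example for it.
    template ("Sports news")
  }
}
```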
Related
I matched the concept in selection-of in NL training so that it can now accept the input and display the result when I voice-input an option. The downside is that it shows the result directly even when I am not on the selection page. Is there any way to prevent this? My approach is to match the goal of the NL training with the concept of the selection option, with a prompt for the concept.
[Update] I would like to show the menu to the user first, before they make their selection. The menu can be shown by running an action,
where playNews > getNews > getMenu (a selection-of input view):
action (getNews) {
  type (Constructor)
  description (__DESCRIPTION__)
  collect {
    input (whatuserwant) {
      type (userWantToHear)
      min (Required) max (One)
      default-init {
        intent {
          goal: getMenu
        }
      }
      default-select {
        with-rule {
          select-first
        }
      }
    }
  }
  output (newsAudio)
}
To allow voice-input selection, I added training for the concept.
It can now select an item from the menu, but it also accepts the selection and runs even when I am not on the menu yet. Is it possible to get rid of this, or is this just Bixby's behavior?
Update: I would remove default-select and add prompt-behavior (AlwaysElicitation) to the input. Read more in our DOC.
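As a sketch, the revised input from the action above would look something like this (names unchanged from the question; the placement of prompt-behavior is my assumption):

```
input (whatuserwant) {
  type (userWantToHear)
  min (Required) max (One)
  // AlwaysElicitation: always prompt with the menu, even when a value
  // could be inferred from the user's utterance.
  prompt-behavior (AlwaysElicitation)
  default-init {
    intent {
      goal: getMenu
    }
  }
}
```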
You may also want to check this example on GitHub to see how to construct an input-view selection from another input of the action. This example is a simplified version of how QuizIt handles the selection part. You may also want to check the training to see how Bixby takes a different action with/without the top-level "A" training example.
The input prompt should be as easy and simple as you expect: present a list with a message; the user can then either tap or voice-select, and can continue the action.
Here are some additional info you might find useful:
The Bixby platform tries to match every property of a struct when an input is missing, so mark a property as visibility (Private) to prevent that. You can also use prompt-behavior (AlwaysSelection) to force a selection for an input.
In the case of prompt/continuation training, Bixby treats it as top-level training if no other training fits. For example, in a simple quiz capsule that constructs a default quiz, the top-level utterance "A" would be treated as if answering the first question with A. To prevent this, just add a training example for "A" and match it to the action you want; Bixby will then use this top-level training instead of the prompt training.
What would be the equivalent of a LaunchRequest handler in a Bixby capsule? It would be helpful to the user to have a "Welcome" action, along with a matching view, which can give a welcome message and some initial conversation drivers.
action (Welcome) {
  type (Search)
  description (Provides welcome message to user.)
  output (?)
}
What do you need to add to the action so it is matched right after the capsule is invoked? What would the type() of a "Welcome" action be?
What should the output be? The action isn't really outputting a concept, but rather just prompting the user to invoke one of the other actions.
Bixby is not designed to have a generic "Welcome" page when a capsule is launched.
When a user invokes Bixby, they do so with a goal already in mind. If your capsule has been enabled by the user and its purpose matches the user's request, your capsule will be used to fulfill the user's request.
Since your capsule will only be invoked by a user request for information or a procedure (there is no "Hi Bixby, open XYZ capsule"), you only need to address the use cases you would like to handle.
If you want to provide information regarding your capsule and the types of utterances a user can try, you should define a capsule-info.bxb file and a hints file.
The contents of these files will be shown in the Marketplace where all released capsules are presented to Bixby users to enable at their discretion.
I would recommend reading through the deployment checklist to give you a better idea of all the supporting information and metadata that you can define to help users find and understand the functionality of your capsule.
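As a rough sketch, a capsule-info.bxb might look like the following (all field values are placeholders, and this is not the complete set of keys; check the deployment checklist for what is required):

```
capsule-info {
  display-name (ACME Weather)
  developer-name (ACME Corp)
  description (Hourly and weekly forecasts for your saved cities.)
  dispatch-name (ACME weather)
}
```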
Most capsules desiring this feature use "start", "begin", "open", and the like (your capsule may have something else that makes logical sense). In your training, simply add those utterances with the goal set to the action you want to start your capsule.
How Named Dispatch Works
The current en-US dispatch patterns are the following:
"with %dispatch-name% ..."
"in %dispatch-name% ..."
"ask %dispatch-name% ..."
"ask %dispatch-name% for ..."
"ask %dispatch-name% to ..."
The current ko-KR dispatch pattern is the following:
%dispatch-name% 에서 ...
When Bixby is processing an utterance, it uses the above dispatch pattern to identify which capsule to use, then passes the rest of the user's phrase to the capsule for interpretation.
For example, consider if the example.bank had the following code block in its capsule-info.bxb file:
dispatch-name (ACME bank)
If you ask Bixby "Ask ACME bank to open", the "Ask ACME bank" phrase is used to point to the example.bank capsule. The example.bank capsule then interprets the remaining word "open" according to the training in your model, with the goal here being, say, a welcome greeting.
Check the "How Named Dispatch Works" section of the documentation, which is similar to the description above.
I am using an input-view for selection and I can see a "None" button at the bottom of the screen. I haven't included any conversation-driver, yet I can see the button. How can I avoid that? Even if it can't be avoided, how can I add an event listener to it? If the user taps or says "None", I want to give the user a custom message and pass it to another intent. Is that possible?
Also, is it possible to give the user another option if none of the utterances match the defined ones? For example:
User: What is the temperature in Oakland?
Bixby: Today, it is 73 F in San Francisco.
User: I want to buy land on Mars?
These kinds of questions are out of context. How do I handle them?
In this case I want the user to be prompted with something like "It is not possible for me to get that information, but I can tell you the weather forecast for your current location. Would you like to know?" The user might say yes or no. Yes would redirect to the weather intent, and no would say thank you.
A "None" conversation-driver is shown when the input-view is for a Concept that is optional (min(Optional)) for your Action. Changing it to min(Required) will remove the "None" conversation-driver.
If you want to keep the concept Optional as an input for the Action, you can add a default-init (link to relevant docs) to your Action to kick off another Action that would help the user provide you a correct input.
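For instance, a sketch combining both suggestions (the concept and goal names here are hypothetical):

```
input (selectedItem) {
  type (MenuItem)
  // Required: no "None" conversation-driver is offered.
  min (Required) max (One)
  // If no usable value was given, run another action to build the choices.
  default-init {
    intent {
      goal: GetMenu
    }
  }
}
```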
Bixby cannot create a path for out-of-scope utterances. The idea is that, since every user's personal Bixby will contain a number of capsules, your capsule will get called only if a user's utterance matches the types of utterances you have trained Bixby to recognize via your training file.
I added suggestion chips to the Dialogflow Chatbot, but I want them to continue with an existing flow based on the button selected. How can I achieve that?
For example: "How else can I help you?"
[Locate store] [Choose an item] [About Us]
I would like the user to go to these flows, which already exist.
How can I achieve that?
You would make each of these choices a new Intent, and take action based on that Intent being triggered. Intents represent a specific activity by the user - typically something said or entered, or a selection chip being selected. Suggestion chips are handled just like the person entered what was on the chip.
However, keep in mind that these are just suggestions. Users can enter whatever they want. You need to be prepared for them to take the conversation in a different direction or skip ahead in the conversation. For example, in response to your prompt above, they may try to send feedback, or they may enter something like "find a store near me" and ignore the suggestion chip. You need to account for this when designing your conversations and Intents.
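If you build responses in a webhook, the chips can ride along as a suggestions message. A sketch in Python, assuming the Dialogflow v2 fulfillmentMessages response shape (the function name is mine):

```python
def chips_response(prompt: str, chips: list[str]) -> dict:
    """Build a Dialogflow v2 fulfillment response with suggestion chips.

    Each chip title should exactly match a training phrase of the intent
    it is meant to trigger, since tapping a chip is handled like typed text.
    """
    return {
        "fulfillmentMessages": [
            {"text": {"text": [prompt]}},
            {
                "platform": "ACTIONS_ON_GOOGLE",
                "suggestions": {
                    "suggestions": [{"title": chip} for chip in chips]
                },
            },
        ]
    }

resp = chips_response(
    "How else can I help you?",
    ["Locate store", "Choose an item", "About Us"],
)
```

Because chips are matched like typed text, no extra dispatch logic is needed: each chip simply triggers whichever intent its title matches.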
In the Dialogflow chatbot I am creating, I have a scenario where a user can ask "what are the available vacancies you have", or they can directly say "I want to join as a project manager" or similar. Both are in the same intent, called "jobs", and the position they want is a required parameter. If the user doesn't mention the position (e.g. "what are the available vacancies you have"), it lists all available vacancies with the minimum qualifications needed for each and asks the user to pick one (done with slot filling via the webhook). Since the intent is waiting for the parameter, when the user enters the position they like, it provides the details for that position. But even when the user tries to ask for something else (triggering another intent, lacking the qualifications for a vacancy, or wanting a job that is not in the list), the bot asks again and again what position they want, because the required parameter (the job position) has not been provided.
How do I call another intent while the chatbot is waiting for a required parameter?
There is a separate intent for "The job I want is not here". If I type exactly the same phrase I used to train that intent, it works; but if the phrasing is slightly different, it won't.
Try this:
1. Make your parameter "not" required by unchecking the required checkbox.
2. Keep the webhook enabled for slot filling.
3. In the webhook, keep track of whether the parameter has been provided.
4. When the intent is triggered, check programmatically for the parameter and, if it is missing, ask the user to provide it by playing with contexts.
5. If the user says something else, there is no "required" parameter as far as Dialogflow is concerned, so it will not repeatedly ask the user to provide it.
Let me know if this helped.
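The steps above can be sketched as a plain webhook handler in Python (the intent, parameter, and context names are hypothetical; the request/response dicts follow the Dialogflow v2 webhook format):

```python
def handle_webhook(req: dict) -> dict:
    """Do our own slot filling for a 'jobs'-style intent (Dialogflow v2)."""
    query = req["queryResult"]
    session = req.get("session", "projects/demo/agent/sessions/demo")
    position = query.get("parameters", {}).get("position", "")

    if not position:
        # Parameter missing: ask for it ourselves and set a short-lived
        # context so a follow-up answer can be routed back to this intent.
        return {
            "fulfillmentText": "Which position are you interested in?",
            "outputContexts": [{
                "name": session + "/contexts/awaiting-position",
                "lifespanCount": 2,
            }],
        }

    # Parameter present: answer, and clear the context so other intents
    # (e.g. "the job I want is not here") can match freely afterwards.
    return {
        "fulfillmentText": "Here are the requirements for " + position + ".",
        "outputContexts": [{
            "name": session + "/contexts/awaiting-position",
            "lifespanCount": 0,
        }],
    }
```

Because the parameter is no longer marked required in the console, an unrelated utterance ("the job I want is not here") is free to match its own intent instead of being swallowed by Dialogflow's built-in reprompt loop.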