I matched the concept used in the selection-of in NL training so that it now accepts the input and displays the result when I voice-input the option. The downside is that it shows the result directly even when I am not on the selection page. Is there any way to prevent this? My approach is to match the NL goal with the concept of the selection option, with at-prompt training for the concept.
[Update] I would like to show the menu to the user first, before they make their selection. The menu can be shown when running an action,
where playNews > getNews > getMenu (a selection-of input view):
action (getNews) {
  type (Constructor)
  description (__DESCRIPTION__)
  collect {
    input (whatuserwant) {
      type (userWantToHear)
      min (Required) max (One)
      default-init {
        intent {
          goal: getMenu
        }
      }
      default-select {
        with-rule {
          select-first
        }
      }
    }
  }
  output (newsAudio)
}
To allow voice-input selection, I added training for the concept.
It is now able to select an item from the menu, but it will also accept the selection and run even though I am not at the menu yet. Is it possible to get rid of this, or is this just how Bixby behaves?
Update: I would remove default-select and add prompt-behavior (AlwaysElicitation) to the input. Read more in our docs.
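A minimal sketch of what that could look like, based on the snippet above (same names; adjust to your capsule, the exact details may differ):

action (getNews) {
  type (Constructor)
  collect {
    input (whatuserwant) {
      type (userWantToHear)
      min (Required) max (One)
      prompt-behavior (AlwaysElicitation)
      default-init {
        intent {
          goal: getMenu
        }
      }
    }
  }
  output (newsAudio)
}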
You may also want to check this example on GitHub to see how to construct an input-view selection from another input of the action. The example is a simplified version of how QuizIt handles the selection part. Also check the training to see how Bixby takes different actions with and without the top-level "A" training example.
The input prompt should be as easy and simple as you expect: present a list with a message, and the user can either tap or voice-select and then continue the action.
Here is some additional info you might find useful:
The Bixby platform will try to match every property of a structure when an input is missing, so mark a property as visibility (Private) to prevent that. You can also use prompt-behavior (AlwaysSelection) to force a selection for an input.
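For instance, an internal property could be hidden from NL matching roughly like this (a sketch with made-up structure and property names, assuming the viv.core library is imported):

structure (newsItem) {
  property (title) {
    type (viv.core.Text)
    min (Required) max (One)
  }
  property (internalId) {
    // internal-only value; Private keeps it out of NL matching
    type (viv.core.Text)
    min (Optional) max (One)
    visibility (Private)
  }
}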
With prompt/continuation training, Bixby will treat it as top-level training if no other training fits. For example, in a simple quiz capsule that constructs a default quiz, the top-level utterance "A" would be treated as answering the first question with A. To prevent this, add a training example for "A" and match it to the action you want; Bixby will then use that top-level training instead of the prompt training.
Related
I want the template text to be [item], but I want the value that is actually sent when it is tapped to be a different value.
For example:
conversation-drivers {
  conversation-driver {
    template (text : itemName, ClickRealValue : itemVal) // <-- what I want
  }
}
Any help would be appreciated
Unfortunately, this cannot be done in Bixby (and it is by design).
Tapping a conversation-driver results in the exact same text (natural-language utterance) being passed as a continuation.
This makes sense because:
The user must know what he or she is selecting.
A conversation driver acts as if the user were holding the Bixby button and saying the utterance.
It is natural to use shorter utterances, since longer conversation drivers would be pushed off the screen. All the developer needs to do is add training examples so Bixby supports the shorter utterance as a continuation.
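In practice, then, the driver text stays short and literal, something like this (a sketch; the phrase is just an example and needs a matching continuation training example):

conversation-drivers {
  conversation-driver {
    // short, speakable text; the tapped text is exactly what gets passed as the utterance
    template ("Show details")
  }
}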
I have my code in the following structure:
action (app) {
  // two inputs in this action:
  //   1. InvocationName
  //   2. MenuOptionValue (Action1, Action2, Action3)
  // output:
  //   the selected menu option's operation
}
I am new to Bixby, and I have the following two questions:
1. When I give only the menu option (2nd input) directly, it prompts me for the invocation name (1st input), which is trained in NL. Then I give the invocation name and it starts the output operation, as it should. But here I want it to forget the previous menu option (2nd input) and prompt me for it again. Is this possible with this structure, or can you suggest another structure that makes it possible?
2. MenuOption has 3 options (Action1, Action2, Action3), which should redirect to 3 different operations depending on the input.
I am currently just printing in the JS (endpoint) for the different inputs, but how will I perform another follow-up action (user interaction with Bixby) for those operations while saving the previous data? Is that possible with this structure, or do you have any suggestions?
For question #1, please give a concrete example; I don't understand what you want to do here, and I will update my answer when you provide more context. As with question #2, remodeling your capsule into three actions, each with its own input, may solve this issue as well: actions are isolated from each other, so there is no remember/forget issue.
For question #2, if the 3 operations require different inputs, for example one takes an integer, one a string, and the third an integer plus a string, you should consider making them separate actions and linking each to its own JS file in endpoints. Then you can treat each action differently by adding different follow-ups. Make sure you add training utterances for each of the actions. It is recommended that one action model in Bixby handle, well, one action only.
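For instance, the endpoints file could map each action to its own JS implementation, roughly like this (a sketch; the action names, input name, and file names are placeholders):

endpoints {
  action-endpoints {
    action-endpoint (Action1) {
      accepted-inputs (invocationName)
      local-endpoint (Action1.js)
    }
    action-endpoint (Action2) {
      accepted-inputs (invocationName)
      local-endpoint (Action2.js)
    }
    action-endpoint (Action3) {
      accepted-inputs (invocationName)
      local-endpoint (Action3.js)
    }
  }
}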
I received help on getting Bixby to read the list in the view, but now I am trying to make it useful for Hands-Free List Navigation. Is there a way to use indexing in Spoken-Summary? Currently it just reads each item in the list, but it will be difficult to use ordinal selection without indexing.
Indeed, where-each does not have the child key index-var. As a workaround, you can add an index property to the concept structure and set the index value in your JS file.
However, I would think the speech itself should be sufficient for selecting content. With read-one, Bixby pauses after each item, waiting for "next" or "yes". With read-many, the developer can set the page size.
You can (and maybe should) implement your capsule so that it takes the content rather than a number.
For example: "watch news" --> "Which of the following channels would you like?" --> "BBC, NBC, CNN, FOX". At the input prompt, rather than an automated answering machine ("press 1 for BBC, press 2 for NBC..."), the user should be able to say "CNN" or "CNN News" and have it match as an input.
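One way to make the spoken channel names match is vocab for the channel concept, roughly like this (a sketch; newsChannel is a made-up concept name):

vocab (newsChannel) {
  "BBC" { "BBC" "BBC news" }
  "NBC" { "NBC" "NBC news" }
  "CNN" { "CNN" "CNN news" }
  "FOX" { "FOX" "FOX news" }
}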
In case you really need an index, the current workaround is to add the index as part of the structure in the JS function that returns the list. It should not be hard. The voice commands "first", "second"... "last" are built-in features and should work.
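The JS side of that workaround could look roughly like this (a sketch assuming the module.exports-style endpoint and a made-up getMenu function returning a channel list):

// getMenu.js -- return menu items with an explicit index property on each one
module.exports.function = function getMenu() {
  var channels = ["BBC", "NBC", "CNN", "FOX"];
  return channels.map(function (name, i) {
    return {
      name: name,
      index: i + 1  // 1-based, so it lines up with "first", "second", ...
    };
  });
};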
You can also go to the developer center and file a feature request to add index-var to where-each, but whether and when to implement such a feature will be a PM decision.
I am using an input-view for selection, and I can see a "None" button at the bottom of the screen. I haven't included any conversation-driver, yet I can see the button. How can I avoid that? Even if we cannot avoid it, how can I add an event listener to it? If the user taps or says "None", I want to give the user a custom message and pass it to another intent. Is that possible?
Also, is it possible to give the user another option if none of the utterances match the defined ones? For example:
User: What is the temperature in Oakland?
Bixby: Today, it is 73 F in San Francisco.
User: I want to buy land on Mars.
This kind of question is out of context. How do I handle it?
Now, in this case, I want the user to be prompted with something like "It is not possible for me to get that information, but I can tell you the weather forecast for your current location. Would you like to know it?" The user might say yes or no; yes would redirect to the weather intent, and no would say thank you.
A "None" conversation-driver is shown when the input-view is for a Concept that is optional (min(Optional)) for your Action. Changing it to min(Required) will remove the "None" conversation-driver.
If you want to keep the concept Optional as an input to the Action, you can add a default-init (see the relevant docs) to your Action to kick off another Action that helps the user provide a correct input.
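In model terms, the two options look roughly like this (a sketch; searchItem, SearchItem, and elicitSearchItem are placeholder names):

// Option 1: Required input, so the "None" conversation-driver is not added
input (searchItem) {
  type (SearchItem)
  min (Required) max (One)
}

// Option 2: keep it Optional and use default-init to route to a helper action
input (searchItem) {
  type (SearchItem)
  min (Optional) max (One)
  default-init {
    intent {
      goal: elicitSearchItem
    }
  }
}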
Bixby cannot create a path for out-of-scope utterances. The idea is that, since every user's personal Bixby will contain a number of capsules, your capsule will get called only if a user's utterance matches the types of utterances you have trained Bixby to recognize via your training file.
I am trying to understand how to structure intents when entities contain the same strings as values.
I imagine that, as I add functionality, this will become a mess.
What is the correct approach to handle this "mixed" content?
example:
Entity 1: content
word document(s)
html page(s)
video(s)
Entity 2: content-specifier
video(s)
image(s)
car(s)
Example 1: show me all [html pages] with [videos]
the expectation is to have
#content => "html page"
#content-specifier => "video"
Example 2: show me all [videos] with [cars]
the expectation is to have
#content => "video"
#content-specifier => "car"
I believe that at the beginning you're going to get a lot of false-positive matches. After adding a lot of training phrases, though, it should be OK. To help it match the options better, also use a template (point 3 below).
3 things to note here:
1. Make sure you correct, in your intent, any values it misclassified.
2. Go to the training option frequently and make sure the entities are correctly recognized; make any necessary changes.
3. Create a template for the user input. To do so, in your intent's training phrases, click on the quotes icon; it will change to an "at" symbol (@). Then add the expected format of your user's input.