How does Bixby retain data from a previous NL input?

I don't understand the way Bixby retains data from a previous NL input. The following example uses the capsule capsule-sample-shirt.
I first use the NL input find 2 medium shirts to get a list of shirts.
I click on one of those (here I use the Collar Dress Shirt), and Bixby asks if I want to buy it.
I click No, and Bixby responds with Okay, I won't do that.
I now immediately run the same NL input find 2 medium shirts again and expect Bixby to present me with the list of shirts, just like the first time. Instead of the expected list, Bixby asks me again Are you sure you want to buy this? with the Collar Dress Shirt that I previously selected.
Why is Bixby not showing the list of shirts the second time find 2 medium shirts is given as NL input? What would need to happen to make Bixby show the list with this NL input after the first time?
(Screenshots: the list of shirts after the NL input; Bixby waiting for confirmation; Bixby saying it canceled the prompt; Bixby skipping the list and immediately asking for confirmation.)

This is the AI part of Bixby at work.
Each utterance given without pressing the reset key is considered a continuation of the previous one (if any). So when you choose the Collar Dress Shirt and cancel it, but later ask to find 2 medium shirts again, Bixby tries to fill in the blanks with the last choice you made.
One clear issue is that the user now has no way to change the type of shirt without resetting, but the fix is easy: make the shirt image clickable and link an action to it in the view model Confirmation.view.bxb (a sketch is marked inside the block below).
image-card {
  aspect-ratio (4:3)
  image-url ("[#{value(item.shirt.images[0].url)}]")
  title-area {
    halign (Start)
    slot1 {
      text {
        value ("")
        style (Title_M)
      }
    }
  }
  // Add on-click here, for example:
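  // (a sketch only -- FindShirt is assumed to be the goal that produces
  // the shirt list in this capsule; adapt it to your own models)
  on-click {
    intent {
      goal { FindShirt }
    }
  }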
}
You can add a similar on-click to change the size and quantity:
input-cell {
  label ("Quantity")
  value ("#{value(item.quantity)}")
  on-click {
    // This intent relies on SearchTerm matching the item, which is not good
    // practice; a better approach would be to allow an ID as input to
    // SelectItem and use `this.id` in the intent.
    intent {
      goal {
        UpdateOrder
        #context (Continuation) { Order }
      }
      value { SearchTerm$expr(item.shirt.title) }
      route { GetQuantity }
    }
  }
}
You may need to add other models to properly prompt the user.
Hope this helps, and have fun with Bixby!

Related

Directline choice prompt not displaying correctly

Hi, we have a chatbot developed using Bot Framework and integrated in Web Chat. The choice prompt display is not correct: sometimes it displays as buttons, sometimes not. What may be the issue?
This is by design, defaulting to ListStyle.auto, as can be seen in the ChoicePrompt class here. The ChoicePrompt class extends the Prompt class which, if no prompt style (inline, list, suggested action, hero card, or none) is supplied, defaults to calling ChoiceFactory.forChannel(). This method runs an algorithm that checks a variety of factors to determine the best style for the given channel.
The forChannel() method checks, among other things, the number of choices included and the length of each choice title. If a title exceeds the 20-character limit (ref here), or the number of choices is over 3 (ref here), it defaults to a list.
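As a rough sketch of that decision (simplified; the real forChannel() also inspects the channel's capabilities, and the action limit varies per channel):

// Simplified sketch of the forChannel() style heuristic -- not the SDK source.
function pickStyle(choiceTitles, maxTitleLength = 20, maxSuggestedActions = 3) {
    const titlesFit = choiceTitles.every(title => title.length <= maxTitleLength);
    if (choiceTitles.length <= maxSuggestedActions && titlesFit) {
        return 'suggestedAction'; // rendered as buttons
    }
    return 'list'; // rendered as a numbered list
}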
This is what is happening to you. However, you can override this by simply passing the style property in the prompt, like so:
async choiceStep(stepContext) {
    const choices = ['Hello', 'No soup for you!', 'Execute Order 66', 'You shall not pass!', 'Make it so, number 1', "You can't handle the truth!"];
    return await stepContext.prompt(CHOICE_DIALOG_SUB_PROMPT, {
        prompt: "Choose an option, eh?",
        choices: ChoiceFactory.toChoices(choices),
        style: ListStyle.suggestedAction
    });
}

Hands free navigation for input-view form element

I am using the form element number-input to enter a quantity, and a submit button ("Go") to go to the next page when it is clicked.
But on giving the voice command "Go", the page doesn't advance. Why?
Is there any solution to this?
input-view {
  match {
    Quantity (Quantity) {
      to-input: TicInfo
    }
  }
  message {
    template ("Enter Quantity")
  }
  render {
    form {
      elements {
        number-input {
          id (Quantity)
          type (Quantity)
          label (Quantity)
        }
      }
      on-submit {
        goal: TicInfo
        value: viv.core.FormElement(Quantity)
      }
      submit-button (Go)
    }
  }
}
Go is the label text of the submit button and is not part of the voice command in either HEF (hands-eyes-free) or non-HEF mode. In both cases, proper training examples should be added.
Hands-free mode on mobile should be triggered by "Hi Bixby", without touching the screen at any time during the conversation: "Hi Bixby" --> "ask my capsule to do input prompt" --> Bixby responds "Enter Quantity" and opens the mic waiting for input --> "five" --> the flow continues with 5 taken as the input.
In normal non-HEF mode: hold the button and say "ask my capsule to do input prompt" --> Bixby responds "Enter Quantity" --> the user can either type 5 and tap Go, or hold the button again, say "5", and release it --> the flow continues with 5 taken as the input.
Customized input in HEF and non-HEF mode can be supported with proper training examples; a developer can support utterances like "go ahead with 5", "quantity is 5", or "I want 5". It is possible to train "go" with a default value for the input prompt; however, it is not possible to fill the form by typing text and then train a "go" utterance to take that input value.
The training examples should be marked as "at prompt for [input type]"; please read more at https://bixbydevelopers.com/dev/docs/dev-guide/developers/training.training-for-nl
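For instance (a sketch only, reusing the Quantity type and the TicInfo goal from the view above), a training entry marked "at prompt for Quantity" could look like:

[g:TicInfo:continue:Quantity] go ahead with (5)[v:Quantity]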

Issue of getting more items in List (Actions on Google)

I am developing a shopping bot in which the user asks for a product, and I fetch the results from the database; the results can be more than 10 items. I know that the default number of items for a list is 10. My question is how to add a "more" button at the end of the list so that I can load more items into it.
for (var p = 0; p <= countforchunk; p++) {
    items[p] = {
        optionInfo: {
            key: (p + 1).toString(),
            synonyms: temparray[p],
        },
        title: temparray[p],
        url: "https://www.google.com/imgres?imgurl=https%3A%2F%2Fcdn.pixabay.com%2Fphoto%2F2015%2F04%2F23%2F22%2F00%2Ftree-736885__340.jpg&imgrefurl=https%3A%2F%2Fpixabay.com%2Fimages%2Fsearch%2Fnature%2F&tbnid=_2JirDBiGzi3lM&vet=12ahUKEwi71YPNxdrnAhVJGbcAHVi_BdEQMygAegUIARCFAg..i&docid=Ba_eiczVaD9-zM&w=546&h=340&q=images&ved=2ahUKEwi71YPNxdrnAhVJGbcAHVi_BdEQMygAegUIARCFAg",
        image: new Image({
            url: imgarray1[p],
            alt: imgarray1[p]
        }),
    };
}
// Ask once, after the items array has been built:
conv.ask(new List({
    title: 'Search Results',
    items: items
}));
resolve();
Please help me out,
Thanks.
As far as I can tell, there is no technical limit of 10 items. If you put 12 items in a list, for example, it will show 12 items.
This is not, however, a very good idea. (Even 10 items is a lot, and you should be thinking about voice interaction, where you might not want to read back more than 2 or 3). So at some point you will want to think about paging anyway.
If you do, you need to implement this as another Intent and Intent Handler. You can do this by offering a suggestion chip that says "Show me more" and accepting training phrases such as "more", "what else", and "show me more" in the Intent. You can use a Context to keep track of where you are in the result list.
You have to keep track of the loaded items. There is a limitation of loading 30 items at a time.
When the user wants more items, you handle that voice intent; you can store the current page index in a context and, based on it, add another 30 items by replacing the existing ones:
items 1-30 = page 1
items 31-60 = page 2, and so on.
Call your API accordingly.
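A minimal sketch of that context-based paging with the actions-on-google Node.js client (the Show More intent, the search_paging context, fetchResults(), and lastQuery are placeholders for your own names and data access):

const { dialogflow, List, Image } = require('actions-on-google');

const app = dialogflow();
const PAGE_SIZE = 10; // one page of list items

// Hypothetical intent trained on phrases like "more", "what else", "show me more".
app.intent('Show More', async (conv) => {
    // Read the page index stored on the previous turn (the search intent sets page 0).
    const ctx = conv.contexts.get('search_paging');
    const page = ctx ? ctx.parameters.page + 1 : 0;

    // Placeholder for your own database query returning one page of products.
    const results = await fetchResults(conv.user.storage.lastQuery, page, PAGE_SIZE);

    const items = {};
    results.forEach((r, i) => {
        items[`item_${page * PAGE_SIZE + i}`] = {
            title: r.title,
            image: new Image({ url: r.imageUrl, alt: r.title }),
        };
    });

    // Remember where we are for the next "show me more" (lifespan of 5 turns).
    conv.contexts.set('search_paging', 5, { page });

    conv.ask('Here are more results.');
    conv.ask(new List({ title: 'Search Results', items }));
});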

Capturing Street Address Through Voice for Bixby

I am writing a Bixby capsule, and one of the inputs is a street address.
One method that I have tried is creating the following structure:
structure (FullAddress) {
  description (Address of a house)
  property (addressNumber) {
    type (geo.StreetNumber)
    min (Required)
    description (Address Number)
  }
  property (addressStreet) {
    type (geo.StreetName)
    min (Required)
    description (Street Name)
  }
  property (addressSuffix) {
    type (geo.StreetSuffix)
    min (Required)
    description (Street Suffix)
  }
}
with a constructor action to put the 3 inputs together.
I have seen that, given an address 19 Fake Fields Street, the geo.StreetName-typed input sometimes understands Fake Fields and sometimes just Fake, dropping Fields.
Also, Bixby's speech-to-text sometimes hears app or have instead of ave for the geo.StreetSuffix value, which makes it prompt the user for a suffix.
Is there a way to get Bixby to understand a street address with a little more accuracy?
Basically you need more training examples that include 2- or 3-word street names. Try to have at least 3 examples with xxx fakexxx fields street, and test the utterance yyy fakeyyy fields street in the Simulator to see whether Bixby can capture fields as part of the street name. The goal here is to train Bixby to learn that there might be 2 or even 3 words ahead of addressSuffix. After that, try the utterance zzz fakezzz creek street, without ever using creek in training, to confirm Bixby learned the pattern rather than just the word fields. Please read more in this article.
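For instance, a training entry could look like the following sketch (CreateFullAddress stands in for whatever constructor action builds FullAddress in your capsule):

[g:CreateFullAddress] my address is (19)[v:geo.StreetNumber] (fake fields)[v:geo.StreetName] (street)[v:geo.StreetSuffix]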
There is no easy way when it comes to speech recognition. You could include a vocab model to force "app" to be "ave", but what if the user truly wants to say the word app or have? I would expect the user to be able to type ave or blvd, but to have to say the word avenue instead of ave, and boulevard instead of blvd.
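If you do try the vocab route, a minimal sketch might look like this (assuming a concept your capsule owns, for example one extending geo.StreetSuffix, since vocab attaches to your own concepts):

vocab (StreetSuffix) {
  "Ave" { "ave" "app" "have" }
}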
Another alternative is to use viv.geo.SearchTerm in training and viv.geo.NamedPoint in your action. This lets a user say something incomplete like "1 Market Street, California", and Bixby will use a HERE maps search to find it in San Francisco.
To use it, set up a NamedPoint concept (after importing viv.geo):
structure (InputAddress) {
  role-of (geo.NamedPoint)
}
Then in your action, you can do something like:
input (namedPoint) {
  type (InputAddress)
  min (Required) max (One)
  default-select {
    with-learning
    with-rule {
      select-first
    }
  }
}
In this example, using learning and select-first will automatically select the first address. Without this, Bixby will auto-suggest addresses.
namedPoint will then be passed to your endpoint, and you can parse it as needed.
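For instance, a minimal JavaScript endpoint could look like this sketch (the file name and return value are illustrative; namedPoint arrives as a plain object mirroring geo.NamedPoint):

// code/GetAddress.js
module.exports.function = function getAddress(namedPoint) {
  // Use whatever fields you need, e.g. the place ID that
  // ResolveAddressByPlaceID consumes below.
  console.log('placeID:', namedPoint.placeID);
  return namedPoint;
};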
In training, use geo.SearchTerm - for example:
[g:GetAddressAction] My address is {[g:InputAddress] (665 Clyde Ave Mountain View California)[v:geo.SearchTerm]}
or for a prompt, you could use:
[g:GetAddressAction:continue:InputAddress] {[g:InputAddress] (60 S Market)[v:geo.SearchTerm]}
You can get a more fully formatted address by letting Bixby handle it, using the viv.geo.ResolveAddressByPlaceID goal. Here is a complete action using NamedPoint and ResolveAddressByPlaceID. Note the links to the relevant docs in the comments:
action (GetAddressAction) {
  type (Search)
  description (Get Address)
  collect {
    // See https://bixbydevelopers.com/dev/docs/dev-guide/developers/library.geo#using-searchterm - used in training
    // and https://bixbydevelopers.com/dev/docs/dev-guide/developers/library.geo#namedpoint - used below and for computed-input
    input (namedPoint) {
      type (InputAddress)
      min (Required) max (One)
      default-select {
        with-learning
        with-rule {
          select-first
        }
      }
      // hidden - hide this input if all you need is the address
    }
    computed-input (address) {
      type (geo.Address)
      min (Optional) max (One)
      compute {
        intent {
          goal: viv.geo.ResolveAddressByPlaceID
          value: $expr(namedPoint.placeID)
        }
      }
    }
  }
  output (geo.Address)
}

Show more possibility in cell area in Bixby

The user says: "Show me the Chinese menu".
I have used a cell area to show food items, because this is the layout I wanted in my card (vertical, with a small image).
However, because I have more than 80 items to list, is it possible to first show the user ten items and then a "Show More..." button?
If the user clicks the "Show More..." button, I should either open the menu in a different result-view or list the rest of the menu on the same page, hiding the "Show More..." button.
I know image-list has this option, but its image is a little big compared to the cell-area image, and it lays items out horizontally, where I wanted them vertical.
Scenario 2:
User: "Where is Pizzahut in my area?"
On my detail page, Bixby shows one store with an image; below the image is the location with a map, and below that is the list of menu items. So the result has three blocks: the first is a compound-card, the second a map-card, and the third a cell-area. The third block, the menu, has more than 20 items, and I am listing them all. I want it to show about 5 items and one "show more" link; as soon as the user clicks the link, the rest of the menu should drop in there, or I could redirect to a new page with the full menu... whatever is possible.
For a large set of results, the best way would be to use navigation-mode:
navigation-mode {
  read-many-and-next {
    underflow-statement (This is the first page of results)
    list-summary ("I have #{size(this)} results")
    overflow-statement (That's all I have)
    overflow-question (What would you like to do?)
    next-page-question (Do you want the next page?)
    page-size (10)
  }
}
This ensures that you give the user a paginated view of your results, so they can consume them piecemeal rather than being overwhelmed by 2 sets of results (the first 10 and then the next 70).
Update, to account for the modified question:
The highlights functionality would have worked in a simpler view, but for your more complex view I would recommend manually adding 5 iterations of menu items to display the first 5 items, followed by a card with an on-click that pulls the whole menu and displays it in a separate result-view. I've added a code sample below with some placeholder concepts and actions.
Code sample:
single-line {
  text {
    style (Title_S)
    value ("#{value(restaurant.menu[0])}")
  }
}
divider
single-line {
  text {
    style (Title_S)
    value ("#{value(restaurant.menu[1])}")
  }
}
divider
single-line {
  text {
    style (Title_S)
    value ("#{value(restaurant.menu[2])}")
  }
}
divider
single-line {
  text {
    style (Title_S)
    value ("#{value(restaurant.menu[3])}")
  }
}
divider
single-line {
  text {
    style (Title_S)
    value ("#{value(restaurant.menu[4])}")
  }
}
divider
cell-card {
  slot2 {
    content {
      order (PrimarySecondary)
      primary {
        template ("See Full Menu")
      }
    }
  }
  on-click {
    intent {
      goal: GetFullMenu
      value-set: RestaurantName ("#{value(restaurant.name)}")
    }
  }
}
