I submitted my capsule as a private submission and it succeeded, so I went on to try it on a real device.
But when I use up all the hints noted in the capsule, and also try the commands in the 'capsule interpreter training summary' and NL training, the capsule is not loaded. Bixby responds with "I couldn't understand that", and I am not seeing it under "My Capsules" either. What command should I use, other than those, to activate the capsule?
On the capsule side, I had set up the dispatch name, hints, and the target (bixby-mobile-en-US).
On the phone side, the Bixby version on the phone is 2.0+, and my Samsung account is in the dev team.
The docs say, "To see your previous submissions, you can also check within Teams & Capsules". Does that refer to public or private submissions, or both? I couldn't see my private submission there, but there is a record in my submission history in the IDE. Did I miss any step in submitting and testing my private submission?
There are a few points in your post, so I'll answer them in a nested list:
Bixby responds with "I couldn't understand that"
This is most likely due to inadequate training. You should add more training examples in the training file, as this helps Bixby understand the types of utterances you expect to handle. Additionally, it is a good idea to train your hint utterances, as that will drastically increase the chance that Bixby will be able to handle those particular utterances.
And I am not seeing it under "My Capsules" either. What command should I use, other than those, to activate the capsule?
You do not need to use any other command. When you load a private (or public) revision in the on-device testing section on your device, the revision is loaded and the only capsule available will be the one contained in that revision. You will not see this capsule in the "My Capsules" section because that section contains the capsules that have been enabled via the Marketplace. A private revision is basically a development "Marketplace" that contains only your capsule, so that you can test its functionality.
"To see your previous submissions, you can also check within Teams & Capsules". Does it refer to public or private submission or both?
This refers only to public submissions. Private submissions do not show up on the Developer Center in the "Teams & Capsules" section. You can view your private and public submissions in the submissions section of Bixby Studio itself.
Related
So, I was thinking of making a capsule for launching a specific app on the phone. Now, apps can be called by their commonly spoken names: for example, Instagram can be called "insta" and Facebook can be called "fb". So the launcher capsule should properly map each commonly spoken name to the correct app. The catch here is that there can be many apps starting with the same prefix. For example, Instagram, Instamart, and Instagram Lite all start with "insta", so our capsule must define certain conditions such that it opens a specific app, maybe based on previous user search history. Otherwise it should display a list of apps to the user.
I have done a similar project. For now I have launched an app via its URI; check it out.
Part of it is for debugging purposes.
result-view {
  match {
    RetrievedItem (this) {
      from-output: SearchSong (output)
    }
  }
  message("") // override with an empty string to avoid displaying the default result dialog. If you want to debug, you can print `#{value(this.uri)}` here instead
  app-launch {
    payload-uri ("#{value(this)}")
  }
  render {
    layout {
      section {
        content {
          paragraph {
            value("URI: #{value(this)}")
          }
        }
      }
    }
  }
}
app-launch can launch an app installed on the device via an Android deep-link URI. Please see more at https://bixbydevelopers.com/dev/docs/reference/type/result-view.app-launch
However, I'd like to share a few thoughts:
A capsule is voice-assistant focused. I might be wrong, but to me Facebook and Instagram are usually referred to by their full names when people are speaking. The shortened versions are more common in texting.
A capsule is also focused on assisting the user with a specific task. By design (for security and privacy), a capsule is isolated from the device. Currently a capsule cannot detect or check the list of apps installed on the device. In the question's example, if the utterance is "launch insta", the capsule must decide between Instagram, Instamart, InstaXYZ, etc. before filling in the URI. A capsule is unable to search for and launch an app starting with "Insta", or to prompt with a list of installed apps starting with "Insta".
For common apps, such as Google Maps, Facebook, and Netflix, the utterance "launch [app name]" is already supported. For example, the utterance "Launch Facebook" already works. Assuming this capsule is called "My launcher", the new utterance would be "Ask my launcher to launch fb", which would be difficult for users to adopt.
In practice, app-launch is used with a specific app in mind. For example, a workout app developer created a capsule so that users can add an entry to the workout app and launch the app in one utterance.
Hmm, a final note: it seems Bixby has already learned "fb". The utterance "launch fb" works... LOL
I am completely new to Bixby development, so I apologize in advance if this is a newbie question that doesn't make sense. I'm trying to understand the best way to store value sets returned from external APIs for use throughout Bixby Voice experiences. An example might be an API that gets all the menu items at a restaurant, or an API that gets all the clothing catalog items from a store. When users interact with the data to search or transact, I don't want to have to go back to the external API to get the value set again. For example: "Find vegan menu options" followed by "Okay, how about pescatarian options". Or: "Find dress pants" followed by "Okay, how about dress shirts". I'd like to come back to a menu object in the first case, or a catalog object in the second, without having to re-load the value sets from the API.
In the sample code I've seen, all of the value sets appear to be read in each time an action/endpoint/JavaScript call is made.
There is no local storage in the current version of Bixby.
The easiest solution is to fetch the data through API calls each time. However, http.getUrl() responses are cached by default, and Bixby runs on Samsung's servers, so in practice no actual API call is made when the same URL is requested again within a short session.
You can read more about the http API options, and how to disable the cache feature, here.
I've set up a simple QnA bot which is linked to a QnA service. Within the QnA service I have set up some questions which have follow-up prompts (dependents), e.g. how do I get to a campus: via bus, train, etc. (see image in link). Within the QnA Maker testing function you can just click a button called "enable multi-turn", which provides functional buttons to inform you of what can/should be asked next via the dependents of the answer (see image in link).
However, when used within a channel/in the emulator, nothing of the like appears (see image), which is a bit odd. And obviously I want to implement such functionality into the bot, as it makes life so much easier for the users.
I am new to the whole bot thing (I started last month), so I have browsed the internet to see what I could find, but I could not see anything outside of writing the questions within the bot itself (see Microsoft's documentation), which makes using QnA Maker pretty much pointless.
What I think I need to do is intercept the message from QnA Maker as it replies to the user, look at the JSON received to find whether it has any dependents, then run a different dialog which gets the contextual dependents' names and runs a simple for loop generating cards for each dependent, then send the message to the user with the generated cards. However, I'm not sure how to intercept the JSON and look for any dependents, or whether there is a button I need to click within Azure which just does it.
There is this experimental sample that has been released by the Bot Framework team which demonstrates how to handle follow-up prompts.
You can download it (you will have to download the whole repo) then plug in your details to the appsettings.json file and you should be able to test it using the Bot Framework Emulator - these were the only steps that I had to perform.
The key part is this method which checks to see if the result contains any prompts and returns the response accordingly - it is called inside the FuctionDialog.
If you're only ever going to implement a single level of prompts (i.e. you have a question which shows prompts, and clicking one of those prompts displays an answer rather than taking you to another prompt), then it is possible to take the guts of the logic from the ProcessAsync method (checking for prompts), along with the required classes from the Models folder and the CardHelper class, and get this to work in your existing application. You won't have to worry about the QnABotState, because you'll only be going a single level deep, so you don't need to track where you are in the series of prompts. For example:
var query = inputActivity.Text;

// Query the QnA Maker service with the user's message.
var qnaResult = await _qnaService.QueryQnAServiceAsync(query, new QnABotState());
var qnaAnswer = qnaResult[0].Answer;
var prompts = qnaResult[0].Context?.Prompts;

if (prompts == null || prompts.Length < 1)
{
    // No follow-up prompts: send the answer as plain text.
    outputActivity = MessageFactory.Text(qnaAnswer);
}
else
{
    // Follow-up prompts exist: render the answer as a hero card with one button per prompt.
    outputActivity = CardHelper.GetHeroCard(qnaAnswer, prompts);
}

await turnContext.SendActivityAsync(outputActivity);
Could someone please advise where we should add the code mentioned above? I am a rookie and have very basic knowledge of programming. I am using Visual Studio with C# for this. How and where do I add this code to make it work? I am also not diving too deep; I'm just trying to make some simple logic where a user clicks on a few follow-up prompts and is taken to the required information. Would really appreciate it if someone could help. Thanks
The first picture shows the starting follow-up prompt.
The second picture shows what follows the first follow-up prompt.
During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app’s description. The reviewer interacted with the app by saying: “how many iphones were sold in the UK?” and app replied “I didn't get that. Can you try with other question?" and left conversation.
How can I resolve the above points so that my Google Assistant Action is approved?
Without seeing the code in question, or the intent you think should be handling this in Dialogflow, it is pretty difficult to say - but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has close conversation set to true (see the sketch after this list).
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you just don't list anything like that as a sample phrase, or the two parameters (the one for object type and the one for location) aren't using entity types that would match.
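To make the first point concrete, here is a minimal sketch of a fulfillment webhook using the newer v2 actions-on-google Node.js client (written in TypeScript), where conv.ask() and conv.close() correspond to app.ask() and app.tell() in the v1 client mentioned above. The intent and parameter names below are assumptions for illustration, not taken from the app under review.

import { dialogflow } from 'actions-on-google';

const app = dialogflow();

// Default Fallback Intent: reply without ending the conversation.
// conv.ask() keeps the mic open so the user can rephrase;
// conv.close() would end the conversation, which is the behavior the reviewer hit.
app.intent('Default Fallback Intent', (conv) => {
  conv.ask("Sorry, I didn't get that. You can ask me about sales figures, " +
           'for example "how many iPhones were sold in the UK?"');
});

// Hypothetical intent covering the reviewer's question; 'product' and 'country'
// are assumed parameter names defined in the Dialogflow intent.
app.intent('Get Sales Figures', (conv, params) => {
  conv.ask(`Looking up sales of ${params.product} in ${params.country}. Anything else?`);
});

// Export the app as your webhook handler, e.g. with Cloud Functions:
// exports.fulfillment = functions.https.onRequest(app);
export { app };

The same distinction applies if you build the response JSON yourself: the Actions on Google payload needs expectUserResponse set to true to keep the conversation open.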
It means that somewhere, either in your app description or in a Dialogflow intent (they have full access to see what's in your intents), you hinted that "how many iphones were sold in the UK?" would be a valid question. Try changing the description/intents to properly match the restrictions of your app.
We have an application in which we will be collecting addresses from users. In the current implementation, we are using a live agent to do this. Some users, when prompted for a final billing address, will say things like "Just use my billing address" or "Same as my current address". The new implementation will be a chatbot to try and fulfill some of these requests before they get to an agent.
We do have this information available via API lookup; I am asking more from a design perspective how to let our handler app (usually an AWS Lambda) know that we need to do the lookup before we prompt to confirm fulfillment.
A few things I thought of:
Train the NLP to detect strings "current address" and "billing address" as Address entities
Create a new intent for utterances like these and handle them separately
Create a new entity type in the current intent (e.g. not postalAddress) for utterances like these and handle them as part of the same fulfillment
Simply re-prompting the user, or asking them to state what their address is
I am just looking for the most pragmatic approach here, as this problem is different from most others we've solved.
I had a similar use case, and after investigation found that option 3 is the easiest way to handle this.
You can add a validation hook that fires when the new slot is populated. This hook can populate the value of the postalAddress slot with the associated address. This way you can keep postalAddress as a required slot without having the user manually state the address (see the sketch at the end of this answer).
You could also have this validation hook fire on the population of postalAddress itself and add some manual checks for "billing" and "current", but this felt to me like a manual workaround for something that should be automated by Lex.
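For illustration, here is a rough sketch of what that dialog/validation code hook could look like as a Node.js Lambda written in TypeScript, using the Lex V1 event and response format. The addressReference slot name and the lookupAddress() helper are assumptions standing in for your own custom slot and your existing address API.

// Lex V1 dialog code hook: if the user said something like "use my billing address",
// resolve it via an API lookup and fill postalAddress so Lex doesn't re-prompt for it.
export const handler = async (event: any) => {
  const slots = event.currentIntent.slots;

  // 'addressReference' is an assumed custom slot that captures phrases such as
  // "billing address" or "current address".
  if (slots.addressReference && !slots.postalAddress) {
    slots.postalAddress = await lookupAddress(event.userId, slots.addressReference);
  }

  // Delegate back to Lex: with postalAddress now filled, Lex moves on to
  // confirmation/fulfillment instead of eliciting the slot again.
  return {
    sessionAttributes: event.sessionAttributes,
    dialogAction: {
      type: 'Delegate',
      slots,
    },
  };
};

// Hypothetical helper standing in for your real CRM/billing API lookup.
async function lookupAddress(userId: string, reference: string): Promise<string> {
  // e.g. GET /customers/{userId}/addresses?type=billing
  return '123 Example Street, Springfield';
}

This keeps postalAddress as the single source of truth for downstream fulfillment, which is why option 3 tends to be the most pragmatic of the four approaches listed in the question.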