Historical-Input Condition
When, where, how, and why does it occur in a Bixby capsule?
I tried to fetch historical-input in an action, i.e., fetch a Bixby concept from context.
I would like to know the conditions under which historical-input is fetched.
Since you did not provide more context, I will answer in general terms.
If historical-input means between Bixby sessions, that is, the user exits Bixby and re-enters by pressing the button or saying "Hi Bixby" again, fetching historical-input is not possible.
If historical-input means within one Bixby session, a newer utterance is always treated as a continuation of the previous one. Bixby will try to fill missing action inputs with historical-input (matched by concept type). The capsule developer must use training utterances to teach Bixby how to use historical-input; without proper training, the same utterance may take a different planning route and use different historical-input.
Here are two topics that might help you:
Add the transient feature to force Bixby not to use historical-input. Read more here.
Use role assignment and route signals in training examples to help planning. Read more here.
At a high level, I would like to set up an action that has some required inputs and some optional ones. After the user begins, he or she will be prompted for any required inputs that are missing. If/when the required inputs are collected, I would like to ask whether the user wants to specify more optional inputs.
The specific use case is a voice-based real estate search. I have some required inputs set up, like zip code, price, and number of bedrooms. Then I would like Bixby to ask "Would you like to refine your search even further?", and if the user says yes, we can ask about the number of bathrooms, parking arrangements, and other more niche parameters. I do not want to make all of these required and have to prompt everyone asking if they want to filter by "pools" or some parameter that is not widely used. And being voice-based, I do not want to just have it as a checkbox on the screen, because then someone on a speaker won't be able to use that parameter.
I have thought of two potential solutions, but I do not know if they will work (at least without relying on on-screen controls for a voice-based capsule):
1) Make the search into a transaction and then, instead of a normal confirmation, try to shoehorn the confirmation into asking whether the user wants to add more refinements. Maybe something like the bank transfer sample, but where a negative confirmation causes Bixby to ask for information that she didn't ask for before: https://bixbydevelopers.com/dev/docs/sample-capsules/walkthroughs/simple-transactional#sample-capsule-walkthrough
2) Make two more required inputs: a boolean called "WantsOptionalParameters" and another called "OptionalParameters" that is a structure containing all of the optional parameters. Bixby would prompt for WantsOptionalParameters like a normal required input, and if it is true, a sub-action would ask for each of the parameters to construct an OptionalParameters object. Then we could feed that output into the search. Alternatively, if WantsOptionalParameters is false, we could automatically construct OptionalParameters with all negative responses and feed that into the original action.
Both of these solutions will take a bunch more research and testing, and I don't even know if they will work, so I was hoping to call on the wisdom of you guys!
Here is my take on it, for what it's worth. Every domain has key inputs that are typically used to start the conversation and optional inputs that can refine the conversation.
Some general ways to start the conversation for the real estate example (totally driven by my own experience; perhaps there are more):
How's the real estate market in 90210?
Show me homes under $250K in Los Angeles?
Show me homes with 4 bedrooms (near me?)
You can group such inputs into an input-group called RequiredInputs that requires OneOrMoreOf these parameters to get the capsule started.
You can also collect the optional/niche inputs in another input-group called OptionalInputs that requires ZeroOrMoreOf them, and feed them into your capsule logic (see the sketch below).
It's also possible that all inputs are equally important and all optional! It depends entirely on the domain and how the capsule developer wants to handle such inputs.
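For illustration, here is a minimal sketch of what such an action model might look like in the Bixby modeling language. All of the action and concept names (FindHomes, ZipCode, MaxPrice, HomeListing, and so on) are hypothetical placeholders, not names from an actual capsule:

    // Hypothetical action model: the starter inputs are grouped with
    // OneOrMoreOf, the niche refinements with ZeroOrMoreOf.
    action (FindHomes) {
      type (Search)
      collect {
        input-group (requiredInputs) {
          requires (OneOrMoreOf)
          collect {
            input (zipCode) {
              type (ZipCode)
              min (Optional)
            }
            input (maxPrice) {
              type (MaxPrice)
              min (Optional)
            }
            input (bedrooms) {
              type (BedroomCount)
              min (Optional)
            }
          }
        }
        input-group (optionalInputs) {
          requires (ZeroOrMoreOf)
          collect {
            input (bathrooms) {
              type (BathroomCount)
              min (Optional)
            }
            input (hasPool) {
              type (HasPool)
              min (Optional)
            }
          }
        }
      }
      output (HomeListing)
    }

With a model along these lines, Bixby should only prompt when none of the required-group inputs were given, and should never prompt for the optional group on its own.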
But in a general sense, once the set of inputs is in and the initial results are shown to the user, the capsule developer then has a great amount of control to:
Shape the future conversation AND
Highlight capsule capabilities.
So, rather than presenting the user with a set of options, you can control the conversation and offer the options that provide the most value to the user (and to the capsule developer!).
E.g., your capsule is capable of deeply analyzing and refining results in a way that no other capsule on the market can. So you want to highlight this capability as the first choice via a follow-up.
Or you may have a conversation path based on prior experience and your knowledge of the domain. So you could say: "I can refine the results further by X, Y, or Z."
This scenario is more likely to be useful and less likely to overwhelm the end user with options.
Hope this helps!
I am working on a capsule that accepts an address and a zip code and estimates the value of the property. At the results view, I would like to add a conversation driver to see if the user wants an estimate for another property. I would want to train an utterance that initiates the same action but also causes Bixby to forget the address and zip from the previous action.
From the testing I've done, it looks like continuations hold on to values from the previous action. Is there a way to make a continuation forget those values? Or is there another way to accomplish this sort of action repeat?
You might want to look into the transient feature (documentation link) that you can define for concepts (zip code, address, etc.).
Concepts marked transient will not be preserved in context, which should give you what you need.
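As a rough sketch (the concept name and description are made up for the example), marking a concept transient looks like this in its model file:

    // Hypothetical concept model: the transient feature keeps this value
    // from being carried over into follow-up requests.
    text (ZipCode) {
      description (Zip code of the property to be estimated)
      features {
        transient
      }
    }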
I am trying to build a search with Dialogflow that can take any combination of first name, address, phone number, zip code, or city as input to a search algorithm. The user does not need to provide all of them, but we will refine our search with each additional answer until we have only one result. Basically, we are trying to identify which customer we are talking to.
How should this type of intent (or set of intents) be structured? We have tried one intent with multiple parameters, but we do not need all of them to be required. We have also written a JavaScript function for fulfillment, but how can we communicate back to Dialogflow whether we need more information?
Thank you very much for your help.
Slot filling is designed for this purpose.
Hope that helps.
Please post more code/details so that answers can be more specific.
First, keep in mind that Intents reflect what the user is saying, and not typically what you're replying with or what other information you need. Slot filling sometimes bends this rule, but only if you have required slots.
Since you don't, you need a different approach.
This can be done with a single Intent, although you may find that multiple Intents make it easier in some ways. The approach is broadly the same either way (a fulfillment sketch follows these steps):
When you ask the question, make sure you set an Outgoing Context with a relatively short lifespan (2-3 is good) to indicate you are collecting user info.
Create an Intent (or Intents) that have sample phrases that capture the information you need.
Some of these will have obvious entity types (phone number and zip code) while others will be more difficult (First name has a system entity type, but it doesn't include all possible first names).
You will need to create sample phrases that collect the parameters by themselves, along with phrases that make sense. You're the best judge of this, and you should probably write some sample conversations before you write the phrases.
In your fulfillment, you'll figure out if you have enough information.
If you do, you can reply and clear the Context that was set. (Clearing it is important so Dialogflow doesn't match the information-collecting Intent again.)
If you do not, you can add the information you have as parameters to the Context so you can save it for later processing, make sure you reset the Context lifespan (so it doesn't expire), and prompt the user for additional information. Again, having a conversation mocked out ahead of time will help here.
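To make the loop above concrete, here is a hedged Node.js sketch using the dialogflow-fulfillment library. The intent name, the context name, and the "enough information" rule are all assumptions for the example, not something Dialogflow prescribes:

    // Sketch of a webhook that accumulates search parameters in a Context
    // until it has enough to identify the customer. Names are hypothetical.
    const { WebhookClient } = require('dialogflow-fulfillment');

    exports.dialogflowWebhook = (request, response) => {
      const agent = new WebhookClient({ request, response });

      function collectCustomerInfo(agent) {
        // Merge newly captured parameters with those saved on the Context.
        const saved = agent.context.get('collecting-customer-info');
        const params = Object.assign(
          {},
          saved ? saved.parameters : {},
          agent.parameters
        );

        // Assumed rule: a phone number alone, or name plus zip, is enough.
        const haveEnough =
          params.phoneNumber || (params.firstName && params.zipCode);

        if (haveEnough) {
          // Clear the Context so this Intent stops matching.
          agent.context.set({ name: 'collecting-customer-info', lifespan: 0 });
          agent.add('Thanks, I found your record.');
        } else {
          // Save what we have, reset the lifespan, and ask for more.
          agent.context.set({
            name: 'collecting-customer-info',
            lifespan: 3,
            parameters: params,
          });
          agent.add('Could you also give me your zip code or phone number?');
        }
      }

      const intentMap = new Map();
      intentMap.set('CollectCustomerInfo', collectCustomerInfo);
      agent.handleRequest(intentMap);
    };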
I have created a pizza bot in Dialogflow. The scenario is like this:
Bot says: Hi What do you want.
User says : I want pizza.
If the user says "I want watermelon" or "I love pizza", then Dialogflow should respond with an error message and ask the same question again. After getting a valid response from the user, the bot should move on to the second prompt:
Bot says: What kind of pizza do you want.
User says: I want mushroom(any) pizza.
If the user gives some garbage data like "I want ice cream" or "I want good pizza", then again the bot has to respond with an error and ask the same question. I have trained the bot with the intents, but the problem is validating the user input.
How can I make this possible in Dialogflow?
A glimpse of training data & output
If you have already created different training phrases, then invalid phrases will typically trigger the Fallback Intent. If you're just using @sys.any as a parameter type, it will be filled with anything, so you should define narrower Entity Types.
In the example Intent you provided, you have a number of training phrases, but Dialogflow uses these training phrases as guidance, not as absolute strings that must be matched. From what you've trained it, it appears that phrases such as "I want .+ pizza" should be matched, so the NLU model might read it that way.
To narrow exactly what you're looking for, you might wish to create an Entity Type to handle pizza flavors. This will help narrow how the NLU model will interpret what the user will say. It also makes it easier for you to understand what type of pizza they're asking for, since you can examine just the parameters, and not have to parse the entire string again.
How you handle this in the Fallback Intent depends on how the rest of your system works. The most straightforward is to use your Fulfillment webhook to determine what state of your questioning you're in and either repeat the question or provide additional guidance.
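As a hedged sketch of that idea, assuming you track the current step with a Context (the context names and prompt text below are invented for the example), the Fallback Intent's handler can simply re-ask the question for whichever step is active:

    // Sketch: re-ask the question for the current step. Register it with
    // intentMap.set('Default Fallback Intent', fallbackHandler);
    const QUESTIONS = {
      'awaiting-order': "Sorry, I didn't get that. What would you like?",
      'awaiting-pizza-type': "We don't have that. What kind of pizza do you want?",
    };

    function fallbackHandler(agent) {
      // Whichever 'awaiting-*' Context is active tells us where we are.
      const step = Object.keys(QUESTIONS).find((name) => agent.context.get(name));
      agent.add(step ? QUESTIONS[step] : 'Sorry, could you rephrase that?');
    }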
Remember, also, that the conversation could go something like this:
Bot says: Hi What do you want.
User says : I want a mushroom pizza.
They've skipped over one of your questions (which wasn't necessary in this case). This is normal for a conversational UI, so you need to be prepared for it.
The types of pizza (e.g., mushroom, chicken, etc.) should be a custom entity.
Then, in your intent, you should define the training phrases as you have, but make sure that the entity is marked and that you also add a template for the user's response:
There are 3 main things you need to note here:
The entities are marked.
A template is used. To create a template, click on the quote symbol in the training phrases, as the image below shows. Make sure that your entity is used here again.
Make your pizza type a required parameter. That way the bot won't advance to the next question unless a valid answer is provided.
One final piece of advice: put some more effort into designing the interaction and the responses. Greeting your users with "what do you want" isn't the best experience. Also, with your approach you're trying to force them down one specific path, but that is not how a conversational app should work. You can find more about this here.
A better experience would be to greet the users, explain what they can do with your app and let them know about their options. Example:
- Hi, welcome to the Pizza App! I'm here to help you find the perfect pizza for you [note: here you need to add any other actions your bot can perform, like tracking an order, for instance]! Our most popular pizzas are mushroom, chicken, and margherita. Do you know what you want already, or do you need help?
Unless I've done something majorly stupid, it appears I only have one entry point into my Action on Google using Actions SDK and Node.js.
Consequently, I have to work out what the user has said by using some keywords with .indexOf() and then calling the appropriate function.
I thought it would be simpler than that, and that there would be a way to define an action with several phrases so Google would be intelligent enough to work it all out, even if the user said something slightly different.
I guess one of the things I'm doing wrong/differently is having a welcome intent that essentially starts a conversation and asks "What would you like to do?"; the user then responds, and I have to work out what was said and follow up with the appropriate action.
That seems quite long-winded. Any better ways?
The "better way" is to use a tool that is designed for that and has a powerful and flexible Natural Language Processing engine associated with it. Actions directly support both Dialogflow and Converse.AI, and most other NLP engines should be able to provide information about how they work with Actions.
Dialogflow, for example, lets you specify some sample phrases that should trigger an Intent, and then supplements those with phrases "similar" to the ones you've specified. Your Node.js webhook gets told which Intent was matched, along with the parameters you've defined for that Intent, and you can take action based on that information directly.
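For a taste of what that looks like, here is a minimal sketch using the actions-on-google Node.js client library; the intent name and parameter are invented for the example:

    // Sketch: Dialogflow does the NLU; the webhook only receives the matched
    // intent and its extracted parameters. Names here are hypothetical.
    const { dialogflow } = require('actions-on-google');

    const app = dialogflow();

    // Runs when Dialogflow matches the user's phrasing to the
    // 'CheckAccountBalance' intent, however they happened to word it.
    app.intent('CheckAccountBalance', (conv, params) => {
      conv.ask(`Looking up your ${params.accountType} account balance now.`);
    });

    // Export as an HTTP handler (e.g., for Cloud Functions).
    exports.fulfillment = app;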
At this point, the Actions SDK is mostly intended to be used as the base that these and other NLP engines build on top of.