User: Show me my recommendation (added to training)
Bixby: shows some results
User: Show me my recommendations (not added to training)
Bixby: custom error message
Both utterances are almost the same, but one gives me results and the other does not. Please let me know the solution.
Plurals are not something that Bixby training will handle for you in general. You should add training for the plural forms if you expect the user will say such things.
Related
I want there to be a way to force these two items to become learned, or at least to reveal to me why they are Not Learned. But I can't figure out what, if anything, I am supposed to do.
I have run into this a couple of times where I was "sure" my training was correct, yet I got the "not learned" status.
I suggest you check whether it runs properly using the "Aligned NL" feature in the V2 training view when you choose Run in the Simulator.
If that works, the fix I'm using might work for you.
I found that if I add vocabulary to work with my enum, then my training gets learned.
When training utterances aren't being learned, odds are that there are conflicting examples that Bixby cannot reconcile.
In this case, I would recommend first attempting to teach Bixby a few "follow" utterances until you can confirm Bixby is able to learn that. Then, attempt a few "unfollow" utterances. If the previously learned "follow" utterances become unlearned, the "follow" and "unfollow" utterances are conflicting.
I would also recommend that you look through the training documentation, as it explains the nuances of training Bixby.
I have created a pizza bot in Dialogflow. The scenario goes like this:
Bot says: Hi! What do you want?
User says: I want pizza.
If the user says "I want watermelon" or "I love pizza", Dialogflow should respond with an error message and ask the same question again. After getting a valid response from the user, the bot should move on to the second prompt:
Bot says: What kind of pizza do you want?
User says: I want mushroom (or any other kind of) pizza.
If the user gives some garbage input like "I want ice cream" or "I want good pizza", the bot again has to respond with an error and ask the same question. I have trained the bot with the intents, but the problem is validating the user input.
How can I make this possible in Dialogflow?
A glimpse of training data & output
If you have already created different training phrases, then invalid phrases will typically trigger the Fallback Intent. If you're just using #sys.any as a parameter type, then it will fill it with anything, so you should define more narrow Entity Types.
In the example Intent you provided, you have a number of training phrases, but Dialogflow uses these training phrases as guidance, not as absolute strings that must be matched. From what you've trained it, it appears that phrases such as "I want .+ pizza" should be matched, so the NLU model might read it that way.
To narrow exactly what you're looking for, you might wish to create an Entity Type to handle pizza flavors. This will help narrow how the NLU model will interpret what the user will say. It also makes it easier for you to understand what type of pizza they're asking for, since you can examine just the parameters, and not have to parse the entire string again.
How you handle this in the Fallback Intent depends on how the rest of your system works. The most straightforward is to use your Fulfillment webhook to determine what state of your questioning you're in and either repeat the question or provide additional guidance.
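To make the "repeat the question" idea concrete, here is a minimal sketch of a fulfillment handler written as a plain function (no web framework). It assumes the Dialogflow v2 webhook request shape (`queryResult.intent.displayName`, `queryResult.outputContexts`); the intent names, context names, and question texts are made up for this pizza example.

```python
# Hypothetical mapping from a conversation-state context name to the
# question the bot asked while in that state.
QUESTIONS = {
    "ask-what": "What do you want?",
    "ask-pizza-type": "What kind of pizza do you want?",
}

def handle_webhook(request_body: dict) -> dict:
    """Re-ask the pending question when the Fallback Intent fires."""
    query = request_body.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")

    if intent == "Default Fallback Intent":
        # Recover the conversation state from an output context whose
        # short name matches one of our question states.
        for ctx in query.get("outputContexts", []):
            state = ctx.get("name", "").rsplit("/", 1)[-1]
            if state in QUESTIONS:
                return {"fulfillmentText":
                        "Sorry, I didn't get that. " + QUESTIONS[state]}
        return {"fulfillmentText": "Sorry, I didn't get that."}

    # Non-fallback intents would return their normal responses here.
    return {"fulfillmentText": "OK!"}
```

The point is only the control flow: the fallback handler looks up which question was pending and repeats it, instead of giving a generic "I didn't understand" dead end.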
Remember, also, that the conversation could go something like this:
Bot says: Hi! What do you want?
User says: I want a mushroom pizza.
They've skipped over one of your questions (which wasn't necessary in this case). This is normal for a conversational UI, so you need to be prepared for it.
The types of pizza (e.g. mushroom, chicken, etc.) should be a custom entity.
Then, in your intent, define the training phrases as you have, but make sure that the entity is marked and that you also add a template for the user's response:
There are 3 main things you need to note here:
The entities are marked
A template is used. To create a template, click on the quote symbol in the training phrases, as the image below shows. Make sure that your entity is used here as well.
Make your pizza type a required parameter. That way it won't advance to the next question unless a valid answer is provided.
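To illustrate the first point, here is a sketch of what a custom "pizza-type" entity looks like, modeled on the JSON shape Dialogflow uses when you export an entity (a canonical value plus synonyms per entry), together with a toy extraction function. The entity name, values, and synonyms are illustrative, and the matcher is a stand-in for what Dialogflow's NLU does for you.

```python
# Illustrative custom entity in the exported-entity shape:
# each entry has a canonical value plus accepted synonyms.
PIZZA_TYPE_ENTITY = {
    "name": "pizza-type",
    "entries": [
        {"value": "mushroom", "synonyms": ["mushroom", "mushrooms", "funghi"]},
        {"value": "chicken", "synonyms": ["chicken", "bbq chicken"]},
        {"value": "margherita", "synonyms": ["margherita", "margarita", "plain"]},
    ],
}

def extract_pizza_type(utterance: str):
    """Return the canonical pizza type mentioned in the utterance,
    or None so the bot can re-prompt (the required-parameter behavior)."""
    text = utterance.lower()
    for entry in PIZZA_TYPE_ENTITY["entries"]:
        if any(syn in text for syn in entry["synonyms"]):
            return entry["value"]
    return None
```

Returning `None` for "I want ice cream" is exactly the situation the required parameter handles: Dialogflow keeps re-prompting until a value from the entity is found.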
One final piece of advice: put some more effort into designing the interaction and the responses. Greeting your users with "What do you want?" isn't the best experience. Also, with your approach you're trying to force them down one specific path, but this is not how a conversational app should work. You can find more about this here.
A better experience would be to greet the users, explain what they can do with your app and let them know about their options. Example:
- Hi, welcome to the Pizza App! I'm here to help you find the perfect pizza for you [note: here you need to add any other actions your bot can perform, like tracking an order, for instance]! Our most popular pizzas are mushroom, chicken, and margherita. Do you know what you want already, or do you need help?
Not able to identify simple phrases like "my name is not Harry, it's Sam".
It is giving me the name as Harry and the company name as Sam, since name and company name were required in the same sentence.
It should have taken the name as Sam and prompted the user again for the company name, OR it should have fallen back completely.
Hi and welcome to Stack Overflow.
Dude, this is not a simple phrase.
Negative sentences are always very difficult for Dialogflow to catch.
Suppose I have a question like,
I want to check *google* revenue for the year *2017*
As you can see, google and 2017 are the entities.
But now in the same way if you say,
I don't want to check *google* revenue for the year *2017*
The chances of hitting that old intent are very high, as Dialogflow matches almost 90% of this sentence with your old sentence. So it might fail.
I hope this is similar to what you are trying to ask.
Anyhow, coming to your point: if company name and name are different entities, then:
Two things you should avoid:
As everyone mentioned, check your entities. The same values should not be present in both entities. This will fail because Dialogflow will not know whether it should treat "Sam" as your name or your company name.
If you are not using values from an entity but are instead using @sys.any, it has a very high chance of failing. And if you are using a Dialogflow system entity like @sys.given-name, that is also not preferred, as it does not catch all names. So avoid these entities.
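The overlap problem in the first point is easy to check mechanically: if any value appears in both entities, Dialogflow cannot decide which one a word like "Sam" belongs to. A minimal sketch, with made-up value lists:

```python
def overlapping_values(entity_a: set, entity_b: set) -> set:
    """Values present in both entities; a non-empty result means the
    two entities are ambiguous and should be disentangled."""
    return entity_a & entity_b

# Illustrative value sets: "sam" appearing in both is the ambiguity
# described above.
names = {"sam", "harry", "alice"}
companies = {"acme", "sam", "globex"}
```

Running this kind of check over your exported entity files before training can catch these conflicts early.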
Things you can try:
Train, train, and train. As you may be aware, the training section in Dialogflow is pretty good. Train it a few times and it will automatically learn and master it.
But please note: wrong training will result in wrong results. It should be 100% accurate. Always check before you approve a training example.
Also try using webhooks, actions, and/or events to figure your way out with an external API.
I've made an intent to detect the user's answer when they say, for example, "when shop is closed", where "closed" is an entity.
When I give input exactly the same as my training phrase, "when shop is closed", everything works as expected, and Dialogflow correctly detects the intent and the entity value (as shown in the second screenshot).
However, when I input a slight variant of the training phrase by adding the extra words "I think" in front of the sentence, Dialogflow still correctly detects the intent, but this time the parameter value is empty (as shown in the first screenshot).
I need the value to be detected in both cases, and I can't figure out why this is happening.
Screenshot 1
Screenshot 2
Google has published best practices for conversation design here, which should help:
https://developers.google.com/actions/assistant/basics
In this case, have you tried adding, "When is the shop closed?" as a training phrase? Clarifying verb tenses and sentence structure might help Dialogflow correctly identify the parameters you're hoping to extract from a user's given intent.
I know that I can make a None intent to cover some of these, but we cannot just create every nonsense question a person could ask.
Or even if someone types in a 50-word statement. The bigger problem is that when we send a query to LUIS, it assigns an intent that is not correct, without even having identified any entities.
What should we do?
To handle these cases, it would be better to add more labeled utterances to your other intents and occasionally add the stray utterances to the None intent. When the model gets better at predicting your non-None intents, prediction of the None intent improves along with it (LUIS attempts to match an utterance to an intent rather than ruling intents out).
If intents are triggering without any entities being recognized (and thus you believe the wrong intent has been triggered), this should be handled at an application level, where you would then disambiguate the intents back to your users. If you've set the verbose flag to true, then you could take the top three scoring intents and present those back as options to your user. Then you can move back into the proper dialog.
After you've moved into the intent/dialog they meant to access, you can make a programmatic API call to add that utterance to the intent. Individually adding labeled utterances can be problematic (the programmatic API key has a limit of 100,000 transactions per month and a rate of 10 transactions per second), so you can instead aggregate the utterances and conduct batch labeling. One additional bit of info: there is a limit of 100 labeled utterances per batch upload.
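The disambiguation step above can be sketched as a small function over the prediction response. With the verbose flag set, a LUIS response includes an `intents` array scored for every intent; the sample response below is illustrative, not real data, and the function simply takes the top three non-None intents to present back to the user as options.

```python
def top_intents(luis_response: dict, n: int = 3):
    """Return the names of the n highest-scoring intents, skipping
    "None", for presenting disambiguation options to the user."""
    ranked = sorted(luis_response.get("intents", []),
                    key=lambda i: i["score"], reverse=True)
    return [i["intent"] for i in ranked if i["intent"] != "None"][:n]

# Illustrative verbose-style response with two close-scoring intents,
# the kind of situation where you would disambiguate.
sample = {
    "query": "book me something",
    "topScoringIntent": {"intent": "BookFlight", "score": 0.42},
    "intents": [
        {"intent": "BookFlight", "score": 0.42},
        {"intent": "BookHotel", "score": 0.40},
        {"intent": "None", "score": 0.35},
        {"intent": "CancelBooking", "score": 0.20},
    ],
}
```

In the application you would render the returned names as buttons or a numbered prompt, then route the user's choice back into the matching dialog.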
Adding to Steven's answer: in the intent window, you have the Suggested Utterances tab. This is also a hint for the algorithm, a kind of reinforcement learning approach.