I'm using Wit.ai to recognize different intents in a retail context. Some of them trigger (successfully) FAQ answers, others initiate business logic.
Surprisingly, I'm having a lot of trouble with the most basic conversational intents, like answering a hi or hello. Especially if they come as a single word (it doesn't get "hi" or "hello", but it successfully returns the correct intent for "hi buddy" or "hey dude"). Obviously there's a high chance that the first thing a user would say is just a simple hello. Have any of you found the same issue? Any guidance on that?
This is actually the first time I've experienced this issue, and I haven't heard about it before. Could it be related to the increasing number of intents created (now 15+)? I'm using trait as the search strategy.
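For reference, this is roughly how I check what the API returns for those single-word greetings: a minimal sketch against the /message endpoint (the token and version date are placeholders):

    import requests

    WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # placeholder
    API_VERSION = "20230215"                # any recent version date

    def query_wit(utterance):
        """Send one utterance to Wit.ai's /message endpoint and return the JSON."""
        resp = requests.get(
            "https://api.wit.ai/message",
            params={"v": API_VERSION, "q": utterance},
            headers={"Authorization": f"Bearer {WIT_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    for text in ["hi", "hello", "hi buddy", "hey dude"]:
        data = query_wit(text)
        # Recent API versions return a ranked "intents" list with confidences;
        # if the intent is modelled as a trait, look under data["traits"] instead.
        intents = data.get("intents", [])
        if intents:
            print(f"{text!r} -> {intents[0]['name']} ({intents[0]['confidence']:.2f})")
        else:
            print(f"{text!r} -> no intent detected")

That's where I see "hi" and "hello" come back with no intent at all, while "hi buddy" and "hey dude" match correctly.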
Greetings intent
Thank you very much for your help,
I want there to be a way to force these two items to become learned, or at least to reveal to me why they are Not Learned. But I can't figure out what, if anything, I am supposed to do.
I have run into this a couple of times where I was "sure" my training was correct, yet I got the "not learned" status.
I suggest you check to see if it runs properly using the "Aligned NL" feature in the V2 training when you choose Run in the Simulator.
If that works, the fix I'm using might work for you.
I found that if I add a vocab to work with my enum, then my training gets learned.
When training utterances aren't being learned, odds are that there are conflicting examples that Bixby cannot reconcile.
In this case, I would recommend first attempting to teach Bixby a few "follow" utterances until you can confirm Bixby is able to learn that. Then, attempt a few "unfollow" utterances. If the previously learned "follow" utterances become unlearned, the "follow" and "unfollow" utterances are conflicting.
I would also recommend that you look through the training documentation as it explains the nuance of training Bixby.
I'm new to using Dialogflow, and I want to create a simple Dialogflow bot that can answer basic addition, subtraction, multiplication and division questions. How would I code it so that it responds to the specific question asked by the user? For example, if I made a math intent, I used the training phrase "What's 2 x 3", and I made the response "6". Now, I want to add more training phrases and I need the bot to use the correct response. Also, another problem is that it would take an impossible amount of time to teach it every possible math question, so is there code I could use to change that?
The easiest way to be able to answer every combination of math question would be by using a fulfillment webhook. Here you can use code to do the calculations based on the user input. You could create an addition intent where you train the bot to recognize addition input, and you would connect it to code in your webhook which can do additions and return the response. You can then also add intents for subtraction, multiplication and division and connect each of those intents to code which can do the math.
For the setup, you have two options. You can write code in the inline editor in Dialogflow, or host your code on your own server and connect Dialogflow to the URL of that server. More info on that can be found here.
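To make this concrete, here is a minimal sketch of what such a webhook could look like in Python with Flask, assuming the standard Dialogflow v2 webhook request format. The intent names ("addition", "subtraction", "multiplication", "division") and the parameter names ("number1", "number2") are placeholders; use whatever names you gave your intents and @sys.number parameters in the console:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Hypothetical intent names mapped to the matching calculation.
    OPERATIONS = {
        "addition": lambda a, b: a + b,
        "subtraction": lambda a, b: a - b,
        "multiplication": lambda a, b: a * b,
        "division": lambda a, b: a / b if b != 0 else None,
    }

    @app.route("/webhook", methods=["POST"])
    def webhook():
        req = request.get_json(force=True)
        query_result = req.get("queryResult", {})
        intent_name = query_result.get("intent", {}).get("displayName", "").lower()
        params = query_result.get("parameters", {})

        # Assumes two @sys.number parameters called "number1" and "number2".
        a = float(params.get("number1", 0))
        b = float(params.get("number2", 0))

        op = OPERATIONS.get(intent_name)
        if op is None:
            answer = "Sorry, I don't know that operation."
        else:
            result = op(a, b)
            answer = "You can't divide by zero." if result is None else f"The answer is {result}."

        # Dialogflow reads the reply from the "fulfillmentText" field.
        return jsonify({"fulfillmentText": answer})

    if __name__ == "__main__":
        app.run(port=5000)

With this in place, one addition intent with a handful of training phrases like "What's 5 plus 2" covers every addition question, because the actual arithmetic happens in the webhook rather than in canned responses.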
While using Knowledge Base in Dialogflow, $Knowledge.Answer[1] is returning a response whereas $Knowledge.Answer[2] or $Knowledge.Answer[3] are not working. Any idea?
According to Google's Dialogflow documentation.
You can see from the documentation that multiple responses are triggered only when you have multiple answers for the same question. And how do you provide multiple answers?
Well, this bugged me a lot too. But it is simple. Just give the same question twice with different answers, or give a slightly different question with a different answer.
Example (FAQ.txt / UTF-8):
How big is google?,Google is the universe.
How big is google?, It is the biggest in the world.
Now go to the Dialogflow console and type in this question. And tada, you get two responses.
This is a sample response which I am getting for the question I asked.
So $Knowledge.Answer[1], $Knowledge.Answer[2] and so on are only populated when there are multiple answers to that question.
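If you want to inspect those answers programmatically rather than in the console, here is a rough sketch using the google-cloud-dialogflow Python client (v2beta1, since Knowledge Connectors are a beta feature). The project, session and knowledge base IDs are placeholders:

    from google.cloud import dialogflow_v2beta1 as dialogflow

    def ask_knowledge_base(project_id, session_id, knowledge_base_id, text):
        """Send a text query and print every FAQ answer that matched."""
        session_client = dialogflow.SessionsClient()
        session = session_client.session_path(project_id, session_id)

        text_input = dialogflow.TextInput(text=text, language_code="en-US")
        query_input = dialogflow.QueryInput(text=text_input)

        kb_path = dialogflow.KnowledgeBasesClient.knowledge_base_path(
            project_id, knowledge_base_id
        )
        query_params = dialogflow.QueryParameters(knowledge_base_names=[kb_path])

        response = session_client.detect_intent(
            request={
                "session": session,
                "query_input": query_input,
                "query_params": query_params,
            }
        )

        # Each matching FAQ row comes back as a separate entry; this is the
        # list that $Knowledge.Answer[1], $Knowledge.Answer[2], ... index into.
        answers = response.query_result.knowledge_answers.answers
        for i, answer in enumerate(answers, start=1):
            print(f"Answer[{i}]: {answer.answer} (confidence {answer.match_confidence:.2f})")

With the two-row FAQ.txt above, asking "How big is google?" should print two entries, which is exactly when $Knowledge.Answer[2] starts returning something.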
Hope this helps.
There's a Bike Shop sample on GitHub that gives a walkthrough of how to get knowledge connectors set up for your agent by uploading a .csv file as the data source.
I am making a chatbot to answer questions on a particular subject (for example, physics). How would you structure all the possible questions as intents in Dialogflow?
I am considering the following 2 methods.
Methods:
1. Make each question a unique intent.
2. Group all the questions into one "asking questions" intent and use an entity to identify the specific question being asked.
Pros:
1. Dialogflow can easily match user input to the specific question using a low confidence score threshold, and each question can have multiple training phrases.
2. Only one "asking questions" intent is needed, which is neater and easier to maintain.
Cons:
1. There will be tons of intents, and maintaining them might be a nightmare. It might also reach the max number of intents.
2. Entity detection might be stricter and less robust.
I would suggest you try the Knowledge Base feature of Dialogflow.
You can give it multiple web page links from which it can gather all the questions, or you can manually prepare a list and upload it to Dialogflow.
That way you don't need to split everything into separate intents; it will try to match questions automatically.
Let me know if you have any confusion.
This looks like an FAQ type chatbot. You can develop the chatbot in 2 ways:
1. Use Prebuilt Agents - go to the prebuilt agents section, select and import the FAQ agent, and add your intents.
2. Use the Knowledge Base approach - this is in beta right now, but super easy to build.
   a. You need to enable Beta Features from the agent settings.
   b. Go to Knowledge Base in the left menu, create a new document and upload a CSV file (questions and answers). You can also provide a link to a Q/A page if you have one.
Check out the documentation for more details.
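If you end up automating this (for instance, regenerating the FAQ regularly), the same CSV can also be uploaded through the v2beta1 API instead of the console. Here is a rough sketch with the google-cloud-dialogflow Python client; the knowledge base ID and the gs:// URI are placeholders:

    from google.cloud import dialogflow_v2beta1 as dialogflow

    def create_faq_document(project_id, knowledge_base_id, display_name, gcs_uri):
        """Attach a question,answer CSV to an existing knowledge base."""
        client = dialogflow.DocumentsClient()
        parent = dialogflow.KnowledgeBasesClient.knowledge_base_path(
            project_id, knowledge_base_id
        )

        document = dialogflow.Document(
            display_name=display_name,
            mime_type="text/csv",  # FAQ documents are plain question,answer rows
            knowledge_types=[dialogflow.Document.KnowledgeType.FAQ],
            content_uri=gcs_uri,   # e.g. "gs://your-bucket/faq.csv"
        )

        # create_document is a long-running operation; wait for it to finish.
        operation = client.create_document(parent=parent, document=document)
        return operation.result(timeout=120)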
Knowledge Base seems to be the best way, but it only supports English content
My bot reads and replies in a simple mail conversation. It is more like a chat, only one or two sentences at a time, done through email. My backend takes care of reading emails, interpreting api.ai responses, storing useful data locally and sending the next questions. Before sending to api.ai, messages are split into sentences.
What I've seen from example conversations already handled by humans is that end users quite often send several significant pieces of information in one sentence. That means that out of, e.g., 8 possible pieces of information I can collect in total (mostly not required), any 2 of them can arrive in one sentence.
How should I organize that?
I started with one intent for each field I require. But to handle the case of any two fields in one sentence, I am extending the "user says" examples with the other fields too. In the end I will have 8 intents which are actually filled with similar examples.
Now I am thinking of having just one intent and covering everything in it. That might work, but the real question is: is that really the way to do it?
Here are example conversations to describe the issue better:
v1 - simple way like in api.ai examples
- u: Hi. I need a notebook below $700.
- b: Great. What size should it be?
- u: 17"
- b: I have a gaming one at $590 and a professional one for $650.
- u: I'm more into the gaming one.
v2 - what I can expect from real life examples
- u: Hi, I would like to buy a 15 inch gaming laptop.
- b: Great, what price range?
- ...
Api.ai has a feature called slot filling that allows you to collect parameter values within a single intent. It's great for building conversational interfaces. You can see if it's compatible with your use case.
Here's how such an intent could look for the examples you provided:
See the "book_notebook" intent:
and how it would work in conversation:
See a test for the "book_notebook" intent:
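To illustrate the idea behind that intent: it defines one parameter per piece of information (size, price range, purpose, and so on), marks them as required, and gives each one a prompt. Api.ai then asks only for whatever is still missing, no matter which combination arrived in the first sentence. The sketch below just mimics that behaviour in Python to show the logic; the slot names and prompts are made up for this notebook example:

    # Hypothetical slot names for a single "book_notebook"-style intent.
    REQUIRED_SLOTS = ["size", "price", "purpose"]

    PROMPTS = {
        "size": "Great. What size should it be?",
        "price": "Great, what price range?",
        "purpose": "Should it be a gaming or a professional notebook?",
    }

    def next_reply(parameters):
        """Given the slot values collected so far, ask for the first missing
        one, or confirm once everything is filled."""
        for slot in REQUIRED_SLOTS:
            if not parameters.get(slot):
                return PROMPTS[slot]
        return (f"Looking for a {parameters['size']} {parameters['purpose']} "
                f"notebook around {parameters['price']}.")

    # One sentence can fill any subset of the slots:
    print(next_reply({"size": "15 inch", "purpose": "gaming"}))  # asks for the price
    print(next_reply({"price": "below $700"}))                   # asks for the size

In api.ai itself you get this behaviour for free by marking the parameters as required and defining the prompts on the intent, so one intent can replace the 8 near-duplicate ones.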