I'm testing my LUIS app. I have an Intent called "Services" and another called "Insults" (for filtering insults). In the "Services" Intent I have Utterances like "I want to see all the services" or "services", but when I test the word "servi" or "serv" it returns the "Insults" Intent instead of the "None" or "Services" Intent.
It's as if LUIS is being so strict that it only returns an Intent when I test an utterance that's EXACTLY the same as one I wrote in the Utterances of that Intent.
What could be causing this?
LUIS learns through active learning: the algorithm picks the unlabeled data that should be labeled based on how confident the system is in its predictions. In the active learning process, LUIS examines all the endpoint utterances and selects the ones it is unsure of. If you label these utterances, train, and publish, then LUIS identifies utterances more accurately. Have you trained any examples under the "None" intent? If not, please provide more training examples under both the "Insults" and "None" intents. Hope this helps.
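As a rough sketch of the idea (this is not LUIS's actual implementation, just the general shape of uncertainty-based selection; `predict_intents` is a hypothetical scoring function):

```python
# Illustrative only: NOT LUIS's internal algorithm, just the general idea
# of uncertainty-based selection in active learning. `predict_intents` is
# a hypothetical function returning {intent_name: confidence_score}.
def select_uncertain(utterances, predict_intents, threshold=0.5):
    """Return the endpoint utterances whose top intent score is low,
    i.e. the ones most worth labeling by hand."""
    uncertain = []
    for text in utterances:
        scores = predict_intents(text)  # e.g. {"Services": 0.42, "Insults": 0.31}
        top_intent, top_score = max(scores.items(), key=lambda kv: kv[1])
        if top_score < threshold:
            uncertain.append((text, top_intent, top_score))
    return uncertain  # label these, retrain, and publish
```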
Related
My Training Phrases do not seem to work in Dialogflow
What I did was add training phrases for the chatbot. After adding the phrases, I saved the intent and then tried it out in the chatbot. When I type any of the phrases, it just goes to the fallback intent. Does anyone know how to fix this?
How do I fix this error?
I have an agent on Dialogflow with a fixed number of intents. Currently, whenever a phrase does not trigger an intent (i.e., it triggers the default fallback intent), I manually add the phrase to an existing intent or create a new intent and add the phrase to it.
I want to automate this task of assigning a new phrase to a particular intent using a machine learning/NLP classifier.
I have trained an intent classifier to classify the intent based on the phrase, but I am not really sure what my training data should be for this task (a rough sketch follows the diagram link below).
Please refer to the following diagram, which shows what I want to achieve.
https://drive.google.com/file/d/1-6VRHuxM7E5-7iBVu1uo5SmYZpf6y71l/view?usp=sharing
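As a rough illustration, the training data can simply be (phrase, intent) pairs. Here is a minimal sketch assuming scikit-learn; the phrases and intent names are made-up placeholders, not anything taken from Dialogflow:

```python
# Minimal sketch of a phrase -> intent classifier, assuming scikit-learn.
# The training data is just (phrase, intent) pairs; everything below is a
# made-up placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = ["what's the weather today", "book a table for two",
           "will it rain tomorrow", "reserve a restaurant"]
intents = ["weather", "booking", "weather", "booking"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(phrases, intents)

# For a new phrase that fell through to the fallback intent, either assign
# it to the highest-probability existing intent...
probs = clf.predict_proba(["is it going to snow"])[0]
best = clf.classes_[probs.argmax()]
# ...or, if the top probability is low, queue it as a candidate new intent.
print(best, probs.max())
```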
When we focus on only one domain (say, weather) and use an LSTM model with a softmax classifier (which picks the sub-intent with the highest score) to identify sub-intents within weather, what is the way to handle non-weather queries, for which we want to say we don't have an answer? The problem is that there are too many outside domains, and I don't know whether it is feasible to generate data for all of them.
There is no really good way to do this.
In practice these are common approaches:
Build a class of examples of stuff you want to ignore. For a chatbot this might be greetings ("hello", "hi!", "how are you") or obscenities.
Create a confidence threshold and give an uncertain reply if all intents score below the threshold, as in the sketch below.
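A minimal sketch of the threshold approach, assuming you have the raw logits from the model's final layer; the threshold value and intent names are placeholders you would tune on held-out data:

```python
import numpy as np

def reply(logits, intent_names, threshold=0.7):
    """Pick the top intent only if its softmax score clears the threshold;
    otherwise fall back to an 'uncertain' reply. The threshold here is a
    made-up example value; tune it on held-out data."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    top = probs.argmax()
    if probs[top] < threshold:
        return "Sorry, I can only answer weather questions."
    return intent_names[top]

# e.g. logits from the LSTM's final layer for one out-of-domain query:
# the scores are close together, so no intent clears the threshold.
print(reply(np.array([2.1, 1.9, 2.0]), ["forecast", "temperature", "rain"]))
```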
For intent detection, does Dialogflow's machine learning algorithm implement negative sampling?
If so, does Dialogflow take those negative samples from the other intents?
If that is the case, then I will be eager to create new intents to support negative sampling.
If you add a sys.ignore prefix to an intent name, then all matches to that intent will result in no match/default fallback intent.
I am developing an app that uses Wit.ai as a service. Right now, I am having problems training it. In my app I have 3 intents:
to call
to text
to send picture
Here are my training examples:
Call this number 072839485 and text this number 0623744758 and send picture to this number 0834952849.
Call this number 072839485, 0834952849 and 0623744758
In my first training I labeled that sentence with all 3 intents, and labeled 072839485 as phone_number with the role to_call_phone_number, 0623744758 as phone_number with the role to_text_phone_number, and 0834952849 as phone_number with the role to_send_pic_phone_number.
In my second training I labeled all three numbers as phone_number with the to_call_phone_number role.
After many rounds of training, Wit still outputs the wrong labels. For a sentence like this:
Call this number 072637464, 07263485 and 0273847584
Wit says 072637464 is to_call_phone_number, but 07263485 and 0273847584 are to_send_pic_phone_number.
Am I not training it correctly? Can someone give me suggestions on best practices for training Wit?
There aren't many best practices out there for wit.ai training at the moment, but with this particular example in mind I would recommend the following:
Pay attention to the type of the entity in addition to just its value. If you choose free-text or keyword, you'll get different responses from the Wit engine. For example, if in your training the number is a keyword, Wit will associate that particular number with the intent/role rather than with its position in the sentence. This is probably why your training isn't working correctly.
One good practice is to train your bot with specific examples first, which give the bot more information (such as the user providing the keyword 'photograph' along with a number), and then with general examples that apply to more cases (such as your second example).
Think about the user's perspective and what would seem natural to them, and work with those training examples first. Generate a list of possible training examples, labeling them from general to specific, and then train intents/roles/entities based on those examples rather than deciding on intents and roles first.