My Training Phrases do not seem to work in DialogFlow
I added training phrases to an intent, saved the intent, and then tried it out in the chatbot. When I type in any of the phrases, it just falls through to the fallback intent. Does anyone know how to fix this?
How do I fix this error?
Related
I have an agent on Dialogflow with a fixed number of intents. Currently, whenever a phrase does not trigger an intent (i.e. it triggers the default fallback intent), I manually add the phrase to an existing intent, or create a new intent and add the phrase to it.
I want to automate this task of adding the new phrase to a particular intent using some machine learning/NLP classifier.
I have trained an intent classifier to classify the intent based on the phrase, but I am not really sure what my training data should be for this task.
Please refer to the following diagram, which shows what I want to achieve.
https://drive.google.com/file/d/1-6VRHuxM7E5-7iBVu1uo5SmYZpf6y71l/view?usp=sharing
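One way to frame this, as a sketch: treat each existing intent's training phrases as labeled examples, train a text classifier on them, and use it to suggest which intent a fallback phrase should be added to. The intent names and phrases below are made up for illustration; this uses scikit-learn rather than anything Dialogflow-specific.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each existing intent's training phrases,
# labeled with that intent's name.
phrases = [
    "book a flight to Paris", "I need a plane ticket",
    "what's the weather today", "is it going to rain",
]
labels = ["book_flight", "book_flight", "get_weather", "get_weather"]

# TF-IDF features + logistic regression as a simple intent classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(phrases, labels)

# A phrase that fell through to the fallback intent: predict which
# existing intent it most likely belongs to, then add it there
# (e.g. via the Dialogflow API or manually).
fallback_phrase = "reserve a seat on a plane"
predicted_intent = clf.predict([fallback_phrase])[0]
print(predicted_intent)
```

With real data you would export the agent's intents and their training phrases to build `phrases` and `labels`, and you would likely want a confidence threshold below which the phrase is flagged for manual review instead of being added automatically.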
I just started with NLP in bots, where a user asks a question that is classified by LUIS and then forwarded to QnAMaker to get an answer, and I have noticed that it behaves strangely with Spanish since we have accented characters and inverted question marks (¿?). For example:
[1] ¿qué es NLP?
[2] que es NLP
If I train my model with the first one and test it with the second one, the model won't identify both of them as the same intent. This is a very common way to communicate in Spanish, since some people save time by skipping accented characters and punctuation.
My questions are:
Should I normalize every utterance in my model (removing accents, punctuation, etc.)? Or should I train it with every different example?
Are there any guidelines for training NLP models that I can base my work on?
Should I normalize every utterance in my model (removing accents, punctuation, etc.)? Or should I train it with every different example?
That really depends on what you want, but to avoid duplicating a bunch of work, it is probably better to just normalize every utterance in your model.
Then, at the bot level, strip away or replace accented and "special" characters with their normalized equivalents before sending the utterance to LUIS for intent prediction.
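A minimal sketch of that normalization step, using only the Python standard library (the function name is just illustrative):

```python
import string
import unicodedata

def normalize_utterance(text: str) -> str:
    """Lowercase the text and strip accents and punctuation, so
    '¿qué es NLP?' and 'que es NLP' normalize to the same string."""
    # Decompose accented characters (e.g. 'é' -> 'e' + combining accent)
    decomposed = unicodedata.normalize("NFKD", text)
    # Drop the combining marks left over from decomposition
    no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Remove punctuation, including the Spanish '¿' and '¡'
    no_punct = "".join(c for c in no_accents if c not in string.punctuation + "¿¡")
    return no_punct.lower().strip()

print(normalize_utterance("¿qué es NLP?"))  # -> "que es nlp"
print(normalize_utterance("que es NLP"))    # -> "que es nlp"
```

If you apply the same function both to the training utterances and to user input before it reaches LUIS, both variants in the example above map to one canonical form, so you only need to label each phrase once.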
Training phrases are appearing as Events in the Training tab of Dialogflow.
"get me extra water bottles" is a training phrase in one of my intents.
I'm testing my LUIS app. I have an intent called "Services" and another called "Insults" (for filtering insults). In the "Services" intent I have utterances like "I want to see all the services" or "services", but when I test the word "servi" or "serv" it returns the "Insults" intent instead of the "None" or "Services" intent.
It seems LUIS is being so strict that it only returns an intent when I test an utterance that is EXACTLY the same as one I wrote in that intent's utterances.
What could be causing this?
LUIS learns through active learning. The algorithm picks the unlabeled data that should be labelled based on how confident the system is in its predictions: LUIS examines all the endpoint utterances and selects the ones it is unsure of. If you label these utterances, train, and publish, then LUIS identifies utterances more accurately. Have you trained any examples under the "None" intent? If not, please provide more training examples under both the "Insults" and "None" intents. Hope this helps.
For intent detection, does Dialogflow's machine learning algorithm have a negative sampling implementation?
If yes, does Dialogflow get those negative samples from other intents?
If that is the case, then I will be eager to create new intents to support negative sampling.
If you add a sys.ignore prefix to an intent name, then all matches to that intent will result in no match, i.e. the default fallback intent.