Dialogflow default fallback - dialogflow-es

Hello. I am creating a chatbot with two languages: A is supported and B is not (I use A and B to make this easier to read). The issue is that I want a default fallback for both, but when I type a random word in B it always triggers the default fallback of A. I also tried creating a normal intent named "fallback" for B and adding some phrases, and that works, but then when I type a random word in A it triggers the fallback of B. Is there any workaround for this issue?
Would appreciate any answer :)

I'm afraid there won't be a good workaround for this, as Dialogflow works with one NLP model per language. By trying to fit two languages into one model, you are creating a difficult scenario. The fallback intents are meant as a safety net for unrecognized input in the language of the NLP model, so your second language will always end up in the fallback intent, since it is unrecognized input for the first language.
Yes, you could create a custom fallback intent by entering words manually, but this isn't a valid solution, since you can't fit every word of a language into an intent. You will end up with some words of the second language going into the custom fallback and some not.
In general it isn't recommended to fit two languages into one NLP model, so my recommendation would be to drop the unsupported language and wait for it to become supported; this will give you the best bot and experience.
If you really need the second language, one thing you could try is adding another supported language that you won't otherwise be using and training it on words of your unsupported language. Note: this NLP model will be very restricted in its features, as it will only respond to the words you trained it on, and built-in entities won't work because your language is still unsupported. It does allow you to do some work with an unsupported language, but again, it will be very limited.
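For what it's worth, a minimal sketch of how that routing could look from client code using the google-cloud-dialogflow Python library. The project ID, session ID and the choice of "nl" as the stand-in supported language are placeholders and assumptions for illustration, not something your agent has to use:

    from google.cloud import dialogflow_v2 as dialogflow

    # Placeholders: replace with your own project and a per-user session id.
    PROJECT_ID = "my-agent-project"
    SESSION_ID = "user-123"

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(PROJECT_ID, SESSION_ID)

    def detect(text, language_code):
        # Query the agent's model for one specific language; each language
        # has its own NLP model and therefore its own fallback intent.
        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=text, language_code=language_code)
        )
        response = session_client.detect_intent(
            request={"session": session, "query_input": query_input}
        )
        return response.query_result

    # Language A uses its real model; language B is routed to the stand-in
    # supported language ("nl" here, as an example) trained on B's phrases.
    result_a = detect("some user input", "en")
    result_b = detect("some user input", "nl")
    print(result_a.intent.display_name, "|", result_b.intent.display_name)

The language detection itself (deciding whether the user typed A or B) still has to happen on your side before you pick the language_code.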

Related

LUIS: Identify "normalized" value based on synonyms automatically

I am currently developing a chatbot that recommends theater plays to the user.
It is working pretty well, but now I want to enable the user to get recommendations based on the type of theater plays (like funny, dramatic, sad).
Nevertheless, as I do not know exactly how the user will phrase the request, synonyms might also be used (funny: witty, humorous, ...).
What is a good solution to get these types from the user's request in a normalized way?
Typically I would use the List entity, but then I have to insert all synonyms for each possible value myself. Is there a way I can define my "normalized" values so that synonyms are automatically matched by LUIS (and improved by further training of the model)?
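For context, the manual bookkeeping that the List-entity approach involves boils down to a hand-maintained synonym map like the following Python sketch (all values invented for illustration); anything not listed is simply not normalized, which is exactly the limitation described above:

    # Hand-maintained mapping: normalized play type -> synonyms (illustrative only).
    PLAY_TYPES = {
        "funny": ["funny", "witty", "humorous", "comedic"],
        "dramatic": ["dramatic", "serious", "intense"],
        "sad": ["sad", "tragic", "melancholic"],
    }

    # Inverted lookup used to normalize whatever surface form the user typed.
    SYNONYM_TO_TYPE = {syn: norm for norm, syns in PLAY_TYPES.items() for syn in syns}

    def normalize(word):
        return SYNONYM_TO_TYPE.get(word.lower())

    print(normalize("witty"))   # -> "funny"
    print(normalize("gloomy"))  # -> None: unseen synonyms are not matched automatically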

Difference between DialogFlow and Google Cloud Natural Language product

Both DialogFlow and Google Cloud NL (Natural Language) are under Google, and to me they are very similar. Does anyone know any specifics about their differences and whether Google will consolidate them into one product? If I am a new developer wanting to use these features, which one should I pick?
I searched around and cannot find any satisfactory answers.
Thanks!
While they are vaguely similar, since they both take text inputs, the results from each are somewhat different.
By default, GCNL doesn't require you to provide any training phrases at all. It takes any sort of textual input and lets you do things such as sentiment analysis, parts-of-speech analysis, and sentence-structure analysis on the phrase.
If you are expecting very free-form inputs, then GCNL is very appropriate for what you want.
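As a rough illustration, a free-form GCNL call with the google-cloud-language Python client might look like this (the input sentence is just an example):

    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="The new exhibition was surprisingly moving.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    # Sentiment analysis: no training phrases or intents are needed.
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    print("sentiment score:", sentiment.score, "magnitude:", sentiment.magnitude)

    # Parts-of-speech / sentence-structure analysis on the same free-form input.
    syntax = client.analyze_syntax(request={"document": document})
    for token in syntax.tokens:
        print(token.text.content, token.part_of_speech.tag)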
On the other hand, Dialogflow requires that you provide training phrases that are associated with each Intent and possible parameters for some of the words in those phrases. It then tries to take the input and determine which Intent matches that input and how the parameters match.
If you have a more narrow set of commands, and just want a way to more flexibly have people issue those commands in a conversation, Dialogflow is more appropriate.
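By contrast, a Dialogflow query with the google-cloud-dialogflow Python client returns a matched intent plus structured parameters rather than a linguistic analysis. In this sketch the project ID, session ID and input text are placeholders:

    from google.cloud import dialogflow_v2 as dialogflow

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path("my-project-id", "session-1")  # placeholders

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text="book a table for two at 7pm", language_code="en")
    )
    result = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    ).query_result

    # Which intent the training phrases matched, and the extracted parameters.
    print("intent:", result.intent.display_name,
          "confidence:", result.intent_detection_confidence)
    print("parameters:", result.parameters)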
It is unlikely the two will ever be merged. Dialogflow is well tuned to make conversational interfaces easier to develop, while GCNL is more open-ended, and thus more complex.

How to structure many question intents in Dialogflow

I am making a chat bot to answer questions on a particular subject (for example, physics). How would you structure all the possible questions as intents in Dialogflow?
I am considering the following 2 methods.
Methods:
1. Make each question a unique intent.
2. Group all the questions into one "asking questions" intent and use an entity to identify the specific question being asked.
Pros:
1. Dialogflow can easily match user input to the specific question using a low confidence score threshold, and each question can have multiple training phrases.
2. Only one "asking questions" intent is needed, which is neater and easier to maintain.
Cons:
1. There will be tons of intents, and maintaining them might be a nightmare. The maximum number of intents might also be reached.
2. Detecting the entity might be more strict and less robust.
I would suggest you try the Knowledge Base feature of Dialogflow.
You can give it multiple web-page links from which it can gather all the questions, or you can manually prepare a list and upload it to Dialogflow.
That way you don't need to create separate intents; it will try to match the questions automatically.
Let me know if you have any confusion.
This looks like an FAQ type chatbot. You can develop the chatbot in 2 ways:
Use Prebuilt Agents - Go to the prebuilt agents section, select and import the FAQ agent, and add your intents.
Use the Knowledge Base approach - This is in beta right now, but super easy to build.
a. You need to enable Beta Features from the agent settings.
b. Go to Knowledge Base in the left menu, create a new document, and upload a CSV file (Q and A). You can also provide a link to Q/A content if you have one. A rough sketch of such a file and upload follows below.
Check out the documentation for more details.
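As a rough idea of what step b could look like, here is a sketch of an FAQ CSV and a programmatic upload with the dialogflow_v2beta1 Python client. The knowledge base resource name, bucket path and the Q/A content are all placeholders, so treat this as an outline rather than a drop-in script:

    # physics_faq.csv - one question/answer pair per row, e.g.:
    #   "What is Newton's first law?","An object stays at rest or in uniform motion unless acted on by a force."
    #   "What is the unit of force?","The newton (N)."

    from google.cloud import dialogflow_v2beta1 as dialogflow

    client = dialogflow.DocumentsClient()
    knowledge_base = "projects/my-project/knowledgeBases/MY_KB_ID"  # placeholder resource name

    document = dialogflow.Document(
        display_name="physics-faq",
        mime_type="text/csv",
        knowledge_types=[dialogflow.Document.KnowledgeType.FAQ],
        content_uri="gs://my-bucket/physics_faq.csv",  # placeholder
    )
    operation = client.create_document(parent=knowledge_base, document=document)
    print(operation.result().name)  # resource name of the created FAQ document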
Knowledge Base seems to be the best way, but it only supports English content.

Is it possible to use DialogFlow simply to parse text?

Is it possible to use DialogFlow to simply parse some text and return the entities within that text?
I'm not interested in a conversation or bot-like behaviour, simply text in and list of entities out.
The entity recognition seems to be better with DialogFlow than Google Natural Language Processing and the ability to train might be useful also.
Cheers.
I've never considered this... but yeah, it should be possible. You would upload the entities with synonyms. Then, remove the "Default Fallback Intent", and make a new intent, called "catchall". Procedurally generate sentences with examples of every entity being mentioned, alone or in combination (in whatever way you expect to need to extract them). In "Settings", change the "ML Settings" so the "ML Classification Threshold" is 0.
In theory, it should now classify every input as "catchall", and return all the entities it finds...
If you play around with tagging things as sys.any, this could be pretty effective...
However, you may want to look into something that is built for this. I have made cool stuff with Aylien's NLP API. They have entity extraction, and the free tier gives you 1,000 hits per day.
EDIT: If you can run some code, instead of relying on SaaS, you could check out Rasa NLU for Entity Extraction. With a SpaCy backend it would do well on recognizing pre-trained entities, and with a different backend you can use custom entities.
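If the spaCy route sounds interesting, a minimal sketch of pre-trained entity extraction looks like this (the model name and sentence are just examples; Rasa NLU adds a trainable layer for custom entities on top of this kind of pipeline):

    import spacy

    # Requires: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("Book two tickets to Hamlet at the Globe Theatre in London on Friday")
    for ent in doc.ents:
        # Pre-trained entity types such as PERSON, ORG, GPE, DATE, CARDINAL, ...
        print(ent.text, ent.label_)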

Natural Language Generation - how to test if it sounds natural

I just have a set of sentences, which I have generated based on painting analysis. However, I need to test how natural they sound. Is there any API or application which does this?
I am using the Stanford Parser to give me a breakdown, but this doesn't exactly do the job I want!
Also, can one test how similar sentences are? I am randomly generating parts of sentences and want to check the variety of the sentences produced.
A lot of NLP stuff works using things called 'Language Models'.
A language model is something that can take in some text and return a probability. This probability should typically be indicative of how "likely" the given text is.
You typically build a language model by taking a large chunk of text (which we call the "training corpus") and computing some statistics out of it (which represent your "model"), and then using those statistics to take in new, previously unseen sentences and returning probabilities for them.
You should probably google for "language models", "unigram models" and "n-gram models" and click on some of the results to find an article or presentation that helps you understand the previous sentence. (It's hard for me to recommend an appropriate tutorial because I don't know your existing background.)
Anyway, one way to think about language models is that they are systems that take in new text and tell you how similar the new text is to the training corpus the language model was made out of. So if you build two language models, one out of all the plays written by Shakespeare and another out of a large number of legal documents, then the second one should give a much higher probability to sentences from some newly released legal document (as compared to the first model), while the first model should give a much higher probability to some other old English play (written by some other author), because that play is probably more similar to Shakespeare (in terms of the kind of words used, sentence lengths, grammar, etc.) than it is to modern legal language.
All the things you see the Stanford parser give back for a sentence are generated using language models. One way to think about how those features are built is to pretend that the computer tried every possible combination of tags and every possible parse tree for the sentence you gave it, used some clever language model to identify the most probable sequence of tags and the most probable parse tree, and returned those back to you.
Getting back to your problem, you need to build a language model out of what you consider natural sounding text and then use that language model to evaluate the sentences you want to measure the naturalness of. To do this, you will have to identify a good training corpus and decide on what type of language model you want to build.
If you can't think of anything better, a collection of wikipedia articles might serve to be a good training corpus representing what natural sounding english looks like.
As for model type, an "n-gram model" would probably be good enough for your task. More complicated models like Hidden Markov Models and PCFGs (the stuff powering the Stanford page you linked to) would definitely make things even better, but n-grams are the simplest thing you could start with.
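To make the n-gram idea concrete, here is a toy add-one-smoothed bigram model in Python. The tiny corpus is only a stand-in for whatever "natural sounding" training text you choose (e.g. Wikipedia articles), and a higher log-probability means a sentence looks more like that corpus:

    import math
    from collections import Counter

    # Toy training corpus (assumed already tokenized and lowercased).
    corpus = [
        ["the", "painting", "shows", "a", "quiet", "harbour"],
        ["the", "artist", "uses", "warm", "colours"],
        ["a", "quiet", "scene", "with", "warm", "light"],
    ]

    unigrams = Counter()
    bigrams = Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens[:-1])            # history counts for the bigram denominators
        bigrams.update(zip(tokens, tokens[1:]))

    vocab_size = len(set(unigrams) | {"</s>"})

    def sentence_logprob(sentence):
        # Add-one (Laplace) smoothed bigram log-probability of a tokenized sentence.
        tokens = ["<s>"] + sentence + ["</s>"]
        logp = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
            logp += math.log(p)
        return logp

    print(sentence_logprob(["the", "painting", "uses", "warm", "colours"]))  # relatively high
    print(sentence_logprob(["colours", "harbour", "the", "the", "with"]))    # relatively low

For measuring how similar two of your generated sentences are to each other, simple overlap measures on their n-grams (e.g. Jaccard similarity) are a common starting point before reaching for anything heavier.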
