If I have an intent like
'How much have I spent on',
I'd like it to basically match 'How much have I spent on ___________________',
where ____ could be any word or phrase.
I've been doing this sort of thing for some intents that do fuzzy matching to determine what the user is speaking of, and it works OK. But is it possible to do this reliably, without requiring that a particularly specific phrase be uttered (which defeats the purpose of NLU to a degree)?
I have been looking for a keyword and assuming the "topic" is the remainder of the phrase. It works, but it seems like it will be prone to problems when the actual user doesn't say more or less what I intended.
I imagine I could reorganize this with a follow-up intent, like "What category?", and then treat the entire response as the thing I was trying to parse out; I was just hoping to avoid that if there is some sort of built-in support for this concept.
Thanks!
I think you are on the right path.
You can use the @sys.any entity to capture any word or phrase. And depending on your use case and what the intent is, you can add a few variations of the sentence, such as How much have I spent on @sys.any, in the utterances.
You can also make use of slot-filling or some other fallback mechanism to validate the user input.
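For example, a minimal fulfillment sketch along these lines (assuming a Dialogflow ES webhook, a @sys.any parameter named topic, and a hypothetical list of known spending categories) could validate whatever the entity captured:

```javascript
// Minimal Express webhook sketch (assumes a Dialogflow ES agent with an intent
// whose training phrases contain a @sys.any parameter named "topic").
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical list of spending categories your backend actually knows about.
const KNOWN_CATEGORIES = ['groceries', 'fuel', 'rent', 'entertainment'];

app.post('/webhook', (req, res) => {
  const topic = (req.body.queryResult.parameters.topic || '').toLowerCase();

  if (KNOWN_CATEGORIES.includes(topic)) {
    // Look up the real amount here; a fixed string keeps the sketch short.
    res.json({ fulfillmentText: `You have spent $123.45 on ${topic}.` });
  } else {
    // Fallback when @sys.any captured something we can't handle.
    res.json({
      fulfillmentText:
        `I don't have spending data for "${topic}". ` +
        `Try one of: ${KNOWN_CATEGORIES.join(', ')}.`
    });
  }
});

app.listen(3000);
```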
Related
I am trying to build a search algorithm with Dialogflow that can take any combination of first name, address, phone number, zip code, or city as input. The user does not need to provide all of them; we will refine our search with each additional answer until we only have one result. Basically, we are trying to identify which customer we are talking to.
How should this type of intent (or set of intents) be structured? We have tried one intent with multiple parameters, but we do not need all of them to be required. We have also written a JavaScript function for fulfillment, but how can we communicate back to Dialogflow whether we need more information?
Thank you very much for your help.
Slot filling is designed for this purpose.
Hope that helps.
Please post more code/details to help answers be more specific.
First, keep in mind that Intents reflect what the user is saying, and not typically what you're replying with or what other information you need. Slot filling sometimes bends this rule, but only if you have required slots.
Since you don't - you need a different approach.
This can be done with a single intent, although you may find that multiple intents make it easier in some ways. The approach is broadly the same:
When you ask the question, make sure you set an Outgoing Context with a relatively short lifespan (2-3 is good) to indicate you are collecting user info.
Create an Intent (or Intents) that have sample phrases that capture the information you need.
Some of these will have obvious entity types (phone number and zip code) while others will be more difficult (First name has a system entity type, but it doesn't include all possible first names).
You will need to create sample phrases that collect the parameters by themselves, along with phrases that make sense. You're the best judge of this, and you should probably write some sample conversations before you write the phrases.
In your fulfillment, you'll figure out if you have enough information.
If you do, you can reply and clear the Context that was set. (Clearing it is important so Dialogflow doesn't match the information collecting Intent again.)
If you do not, add the information you have as parameters to the Context so you can save it for later processing, make sure you reset the Context lifespan (so it doesn't expire), and prompt the user for additional information. Again, having a conversation mocked out ahead of time will help here.
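A rough sketch of that flow, using the raw Dialogflow ES webhook request/response JSON (the context name collecting-customer-info, the field list, and the searchCustomers() helper are made up for illustration):

```javascript
// Sketch of the context-accumulation approach described above.
const FIELDS = ['firstName', 'address', 'phoneNumber', 'zipCode', 'city'];

function handleCustomerInfo(req, res) {
  const qr = req.body.queryResult;
  const ctxName = `${req.body.session}/contexts/collecting-customer-info`;
  const previous =
    (qr.outputContexts || []).find(c => c.name === ctxName) || { parameters: {} };

  // Merge whatever was said this turn with what was collected on earlier turns.
  const collected = { ...previous.parameters };
  for (const key of FIELDS) {
    if (qr.parameters[key]) collected[key] = qr.parameters[key];
  }

  const matches = searchCustomers(collected); // hypothetical backend lookup

  if (matches.length === 1) {
    // Enough info: answer and clear the context so this intent stops matching.
    return res.json({
      fulfillmentText: `Found you, ${matches[0].firstName}.`,
      outputContexts: [{ name: ctxName, lifespanCount: 0 }]
    });
  }

  // Still ambiguous: save progress in the context, reset its lifespan,
  // and prompt for one of the fields we don't have yet.
  const missing = FIELDS.filter(k => !collected[k]);
  res.json({
    fulfillmentText: `I found ${matches.length} matches. What is your ${missing[0]}?`,
    outputContexts: [{ name: ctxName, lifespanCount: 3, parameters: collected }]
  });
}
```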
Say I have a sentence like 'I refuse to fly' or 'I'd like to fly'. I also have a sentence like 'I don't want to sit'. When training custom intents in one of the available NLU engines (rasa/wit/luis), what's the best way to go for modeling:
Naively, I could have RefuseFlyIntent, WantFlyIntent, RefuseSitIntent, and WantSitIntent.
More sophisticated: have a set of intents FlyIntent, SitIntent, WantIntent, RefuseIntent, and have my code process the combinations.
The same question applies to other cases, like how to model the difference between 'You like to fly' and 'I like to fly'.
I'm sure there are known methodologies for that, wanted to understand what they are. If you could give me links to literature about it, would be great.
many thanks,
Lior
This is a common mistake people make when designing conversations. Intents point to a specific action; in your example, the action is whether or not to fly. As a rule of thumb, if more than one statement looks alike with only a few words differing, make those differences entities of a single intent.
Intent = Action Yes/No
- "I refuse to fly" -> {"refuse": deny, "action": fly}
- "I'd like to fly" -> {"like": accept, "action": fly}
- "I don't want to sit" -> {"don't want": deny, "action": sit}
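A small sketch of the code side of that modeling, assuming your NLU tool returns the accept/deny value as a sentiment entity and the activity as an action entity (those names are assumptions, not something the tools require):

```javascript
// Combine the two extracted entities into a yes/no answer for a given action.
function willDo(entities, actionOfInterest) {
  if (entities.action !== actionOfInterest) return null; // statement is about something else
  return entities.sentiment === 'accept';                // accept -> yes, deny -> no
}

console.log(willDo({ sentiment: 'deny', action: 'fly' }, 'fly'));   // false ("I refuse to fly")
console.log(willDo({ sentiment: 'accept', action: 'fly' }, 'fly')); // true  ("I'd like to fly")
```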
Unless I've done something majorly stupid, it appears I only have one entry point into my Action on Google using Actions SDK and Node.js.
Consequently, I have to work out what the user has said by using some keywords with .indexOf() and then calling the appropriate function.
I thought this would be simpler, and that there would be a way to define an action with several phrases so Google would be intelligent enough to work it all out, even if the user said something slightly different.
I guess one of the things I'm doing wrong/differently is that I just have a welcome intent that essentially holds a conversation and asks "What would you like to do?"; then the user responds, and I have to work out what was said and follow up with an appropriate action.
That seems quite long-winded. Any better ways?
The "better way" is to use a tool that is designed for that and has a powerful and flexible Natural Language Processing engine associated with it. Actions directly support both Dialogflow and Converse.AI, and most other NLP engines should be able to provide information about how they work with Actions.
Dialogflow, for example, lets you specify some sample phrases that will match an Intent, and then supplements those with "similar" phrases to the ones you've specified. Your Node.js webhook gets told which Intent was matched, along with the parameters you've specified for that Intent, and you can take action based on that information directly.
At this point, the Actions SDK is mostly intended to be used as the base that these and other NLP engines build on top of.
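As a rough illustration, a Dialogflow webhook can route on the intent name it reports instead of scanning the raw utterance with .indexOf(); the intent names and handler functions below are made up:

```javascript
// Hypothetical intent-name -> handler map; each handler returns a reply string.
const handlers = {
  'spending.query': params => reportSpending(params.topic),   // hypothetical helpers
  'order.track':    params => trackOrder(params.orderId),
};

function onWebhook(req, res) {
  const { intent, parameters } = req.body.queryResult;
  const handler = handlers[intent.displayName];
  const reply = handler ? handler(parameters) : "Sorry, I didn't catch that.";
  res.json({ fulfillmentText: reply });
}
```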
I'm trying to build a chat assistant for my website, and it should answer queries like "Can you track my order?" or "How's the performance of XXX?". The majority of the work lies in understanding the user's query.
I'm using named entity recognizers and text parsers to process the queries. Before that, I pass each query through a spell checker to reduce errors like
Can you track my ordr?
to
Can you track my order?
It's working in most of the cases but failing in cases like,
Can you track my water?
In this case, the spelling corrector doesn't correct the word 'water', and NER is not able to identify the entity as 'order'.
The problem is 'Can you track my water?' may be a correct sentence in some other context but it's definitely a mistake in my context (domain). So I should be able to correct this sentence.
I'm stuck here.
Is there anyway I can correct these sentences using predefined queries and/or statistical data of user entered queries?
I don't know of a way you can change "water" to "order".
But if you have a predefined set of questions then you may give the user suggestions to select from, just before he submits the question.
NER can only recognize/classify entities; it shouldn't be used to replace parts of a sentence, because the user may have intended what they said.
What you can do is suggest the most probable word based on your set.
References:
What is the best way to find the most similar sentence?
Find semantically similar word
You could use n-gram models to find the most probable word and then substitute it. In your case, you would substitute the word ordr with the word order. And if you want to go deeper, you could use a machine learning model to handle the issue.
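As a toy illustration of that idea, a tiny bigram model built from your own query log can propose the most probable in-domain word for a given context word (the query log and the single-word context are simplifying assumptions):

```javascript
// Toy bigram sketch of "suggest the most probable word given its left context".
const domainQueries = [
  'can you track my order',
  'track my order please',
  'where is my order',
  'can you cancel my order',
];

// Count how often each word follows a given context word in the domain log.
const bigrams = {};
for (const q of domainQueries) {
  const words = q.split(' ');
  for (let i = 1; i < words.length; i++) {
    const prev = words[i - 1];
    bigrams[prev] = bigrams[prev] || {};
    bigrams[prev][words[i]] = (bigrams[prev][words[i]] || 0) + 1;
  }
}

// If the word the user typed never follows its context word in our domain,
// suggest the most frequent in-domain continuation instead.
function suggest(prevWord, word) {
  const counts = bigrams[prevWord] || {};
  if (counts[word]) return word; // plausible in this domain, keep it
  const best = Object.entries(counts).sort((a, b) => b[1] - a[1])[0];
  return best ? best[0] : word;
}

console.log(suggest('my', 'water')); // -> "order" (given the toy log above)
```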
I'm working on a problem that at the very least seems to require named entity recognition, but I'm not sure how to go farther than the NER parse. What I'm trying to do is parse information (likely from tweets) regarding scheduling of events. So, for example, I'd like to be able to automatically resolve the yes/no answer to the question of "Are The Beatles playing tomorrow?" from short messages like:
"The Beatles cancelled their show tomorrow" or
"The Beatles' show is still on tomorrow"
I know NER will get me close as it will identify the band of interest and the time (if it's indicated), but there are many ways to express the concepts I'm interested in, for example:
"The Beatles are on for tomorrow" or
"The Beatles won't be playing tomorrow."
How can I go from an NER parsed representation to extracting the information of interest? Any suggestions would be much appreciated.
I'd suggest looking into event detection (optionally, in Twitter), and maybe also into question answering systems if your example with yes/no questions wasn't just an illustration: if you know the user's needs in advance, that information can increase the quality of the system.
For a start, there are some papers about event detection in Twitter: here and here.
As a baseline, you can create a list of positive verbs for your domain (to be, to schedule) and negative verbs (to cancel, to delay) - just start from a manual list and expand it with synonyms from some dictionary, e.g. WordNet. Also check for negations - again, by the presence of pre-specified words ('not' in different forms) in a tweet. Then, if there is a negation, you just invert the meaning.
Since you are working with Twitter, and most likely there will be just one event mentioned in a tweet, this can work pretty well.
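A rough sketch of that baseline, with tiny stand-in word lists you would expand (e.g. with WordNet synonyms):

```javascript
// Keyword baseline: positive/negative event words plus a negation check.
const POSITIVE = ['playing', 'play', ' on ', 'scheduled', 'confirmed'];
const NEGATIVE = ['cancelled', 'canceled', 'postponed', 'delayed', 'called off'];
const NEGATIONS = ["n't", ' not ', 'never'];

function eventIsOn(tweet) {
  const text = ` ${tweet.toLowerCase()} `;
  const hasPositive = POSITIVE.some(w => text.includes(w));
  const hasNegative = NEGATIVE.some(w => text.includes(w));
  const negated = NEGATIONS.some(w => text.includes(w));

  let verdict;
  if (hasNegative) verdict = false;
  else if (hasPositive) verdict = true;
  else return null; // no verdict possible from keywords alone

  return negated ? !verdict : verdict; // a negation flips the meaning
}

console.log(eventIsOn("The Beatles cancelled their show tomorrow")); // false
console.log(eventIsOn("The Beatles' show is still on tomorrow"));    // true
console.log(eventIsOn("The Beatles won't be playing tomorrow"));     // false
```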