We built an intent to detect the last four digits of a user's Social Security number. The training phrases capture the #sys.number-sequence and #sys.number entities, and we match the intent using voice (audio). When the digits are read out separately, #sys.number-sequence is matched. When we say "forty five sixty seven" (4567) or "four thousand five hundred and sixty seven" (4567), #sys.number is matched. This works fine for most numbers, but we ran into the following two issues:
When we read "one one one one", neither of the two entities is matched. The voice is transcribed correctly as "one one one one", but it is not matched to a number sequence of 1111.
When we say "eighty two seventy five", #sys.number is matched, but only 82 is captured. The parameter value is 82 as opposed to 8275.
We'd appreciate it if someone could shed some light on these issues.
Well, this could also be an issue with the Speech-to-Text engine that you are using. But to check Dialogflow itself, I built the following Entity and Intent and was able to capture 4 digits easily. I tested using Dialogflow's mic option for voice commands.
Also, check out the other system entities that you could use to capture numbers even though they are designed for something else, such as zip code.
Hope the following example helps.
Entity
Intent
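The Entity and Intent above were shown as screenshots in the original answer. As a rough stand-in, here is a minimal sketch of how an equivalent intent could be created programmatically, assuming the google-cloud-dialogflow (v2) Python client; the display names, the alias, and the choice of @sys.number-sequence are assumptions, not a reproduction of those screenshots.

```python
from google.cloud import dialogflow

def create_last_four_intent(project_id: str):
    intents_client = dialogflow.IntentsClient()
    parent = dialogflow.AgentsClient.agent_path(project_id)

    # One annotated training phrase: "1 2 3 4" is tagged as @sys.number-sequence.
    training_phrase = dialogflow.Intent.TrainingPhrase(parts=[
        dialogflow.Intent.TrainingPhrase.Part(text="the last four digits are "),
        dialogflow.Intent.TrainingPhrase.Part(
            text="1 2 3 4",
            entity_type="@sys.number-sequence",
            alias="last-four",
        ),
    ])

    # The parameter that will hold the captured digits.
    parameter = dialogflow.Intent.Parameter(
        display_name="last-four",
        entity_type_display_name="@sys.number-sequence",
        value="$last-four",
    )

    intent = dialogflow.Intent(
        display_name="capture.last.four",
        training_phrases=[training_phrase],
        parameters=[parameter],
    )
    return intents_client.create_intent(request={"parent": parent, "intent": intent})
```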
Related
Using the telephony integration in DialogFlow, when trying to capture an intent like (for example)
I'm looking for the number six
Where six is defined as #sys.cardinal or #sys.number
I could get it to recognize any single digit except 2 and 4.
For those, the transcription would almost consistently read as "to" and "for" respectively.
This would happen both on the phone and when testing in the Dialogflow console, pressing the little microphone icon and recording the input.
Why is it missing these numbers when it knows I'm expecting a number in that position?
What can I do to give it better hints?
If the exact phrase the user speaks is "I'm looking for the number two", I believe the agent will detect it as a number based on the context of the phrase.
If they just say "two" it may detect as "to" instead.
Will users only be able to provide a single digit here? If so, perhaps you can create an example for every number (given there are only 10 digits, that wouldn't be too onerous).
However, if you're expecting the user to provide a string of numbers, perhaps try a different data type for the parameter. The number-sequence type might be more suitable.
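If you are sending the audio to the detect intent API yourself, one way to give the recognizer better hints is to bias it toward digit words. The sketch below assumes the google-cloud-dialogflow (v2) Python client and its speech_contexts field; the session id, encoding, and boost value are placeholders, not values from this thread.

```python
from google.cloud import dialogflow

DIGIT_WORDS = ["zero", "one", "two", "three", "four",
               "five", "six", "seven", "eight", "nine"]

def detect_digit(project_id: str, session_id: str, audio_bytes: bytes):
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)

    audio_config = dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
        # Nudge the recognizer toward digit words so "two"/"four"
        # are less likely to be transcribed as "to"/"for".
        speech_contexts=[dialogflow.SpeechContext(phrases=DIGIT_WORDS, boost=20.0)],
    )
    query_input = dialogflow.QueryInput(audio_config=audio_config)

    response = client.detect_intent(request={
        "session": session,
        "query_input": query_input,
        "input_audio": audio_bytes,
    })
    return response.query_result
```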
Okay, I am trying to figure out why Dialogflow keeps adding dashes or extra numbers when I call my bot through the telephony integration. I can say a 6-digit number and it either adds a dash or an extra number. I have used all the sys entities and a custom entity, and it does this every time. It acts as if it wants a phone number. Is there a fix? And yes, I have added definitions of how many digits I want back. The number I was asking for is 813637, but it adds numbers and/or dashes. I can add a screenshot to show you what I get back.
Thanks
As per the Google documentation: https://dialogflow.com/docs/reference/system-entities
The description of #sys.number says the input is represented as ordinal and cardinal numbers.
However, as I understand it, you require a nominal number, and in your case it is a sequence of digits. In that case you should try using #sys.number-sequence.
Hope this works for you.
Do let us know how it goes in the comments.
I have created an Intent which outputs a context with a given parameter name, let's say $myParam. The goal of this Intent is to catch a long sequence of numbers. I know there is a #sys.number-sequence entity, but I'm using the Italian language and that entity is not available for it. There is only #sys.number, but the numbers I'm expecting from the user are out of its range.
Under these restrictions, I chose #sys.any as the entity for my parameter $myParam.
Problem
When the user speaks the digits on a real device, the Assistant might add some white space between them.
When the Assistant gets the sequence 111 222, the Intent is triggered and everything goes OK.
But when the Assistant gets the sequence 111222 (note the missing white space), it doesn't work.
I was expecting the #sys.any entity to catch all inputs, but that doesn't seem to be the case.
Do you know how to deal with this case?
My goal is to trigger the intent even when the Assistant catches the sequence of digits without spaces between, before, or after them.
Image:
https://ibb.co/ngBzGtx
I faced this problem recently and it was really annoying. Suddenly, for some reason I don't know, the Assistant's #sys.any entity was no longer catching numbers.
My use case is pretty much the same as yours: I have a parent intent where I ask the user to enter a code (10-15 digits), and I have created a follow-up intent to handle the user's input. I'm using a language other than English, and the only entity the system offers for catching long numbers is #sys.any.
But it stopped working! I looked for a way to somehow force the Assistant to enter a specific intent, because now neither the follow-up intent nor the fallback intent is triggered. The Assistant just stays stuck in the parent intent and eventually crashes.
After spending some hours finding nothing useful, I tried this trick, which worked for me.
When an intent is created, it has Normal priority by default. Changing the priority of the follow-up intent (the one I want triggered, with a parameter of entity type #sys.any holding the user's input) to High solved my issue. Now it works correctly, as it did before.
The #sys.any entity generally shouldn't be used to cover everything in the phrase. For cases like this, you should be able to use a Fallback Intent and then process the entire input from the user.
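For example, a fulfillment webhook behind that Fallback Intent could read the raw query text and normalize it to a digit string, so "111 222" and "111222" end up identical. This is just a sketch assuming a Flask webhook and the standard Dialogflow (v2) webhook request shape (queryResult.queryText); the route name and the length check are made up for illustration.

```python
import re
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    raw_text = body.get("queryResult", {}).get("queryText", "")

    # Keep only digits, dropping any spaces the Assistant inserted.
    code = re.sub(r"\D", "", raw_text)

    if 10 <= len(code) <= 15:
        reply = f"Got your code: {code}"
    else:
        reply = "Sorry, I didn't catch a valid code. Could you repeat it?"

    return jsonify({"fulfillmentText": reply})
```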
I am using a "list" entity. However, I am not getting my expected result.
Here is what I have for LUIS intent:
getAnimal
I want to get a cat [animal].
Here is what I have with LUIS entities:
List Entities [animal]
cat: russian blue, persian cat, british shorthair
dog: bulldog, german shepard, beagle
rabbit: holland lop, american fuzzy lop, florida white
Here is what I have with LUIS Phrase lists:
Phrase lists [animal_phrase]
cat, russian blue, persian cat, british shorthair, dog, bulldog, german shepard, beagle, etc
Desired:
When the user enters "I want to get a beagle.", it should match the "getAnimal" intent.
Actual:
When the user enters "I want to get a beagle.", it matches the "None" intent.
Any help would be appreciated.
Using a phrase list is a good way to start; however, you need to make sure you provide enough data for LUIS to learn the intents and the entities separately from the phrase list. Most likely you need to add more utterances.
Additionally, if your end goal is to have LUIS recognize the getAnimal intent, I would do away with the list entity and instead use a simple entity to take advantage of LUIS's machine learning, in combination with a phrase list to boost the signal of what an animal may look like.
As the documentation on phrase lists states,
Features help LUIS recognize both intents and entities, but features are not intents or entities themselves. Instead, features might provide examples of related terms.
(A feature, in machine learning, is a distinguishing trait or attribute of the data that your system observes; it's what you add to a group/class when using a phrase list.)
Start by doing the following:
1. Create a simple entity called Animal.
2. Add more utterances to your getAnimal intent.
Following the best practices outlined here, you should include at least 15 utterances per intent. Make sure to include plenty of examples of the Animal entity.
3. Be mindful to include variation in your utterances that is valuable to LUIS's learning (different word order, tense, grammatical correctness, length of the utterances and of the entities themselves). I highly recommend reading this StackOverflow answer I wrote on how to build your app properly and get accurate entity detection if you want more elaboration.
(In the original screenshot here, the blue-highlighted words are tokens labeled with the simple Animal entity.)
4. Use a phrase list.
Be sure to include values that are not just one word long, but also two, three, and four words long, as different animal names may be that long (e.g. cavalier king charles spaniel, irish setter, english springer spaniel, etc.). I also included 40 animal breed names. Don't be shy about adding the Related Values suggested to you into your phrase list.
After training your app to update it with your changes, prosper!
Below "I want a beagle" reaches the proper intent. LUIS will even be able to detect animals that were not entered in the app in entity extraction.
My bot reads and replies in a simple mail conversation, more like a chat: only one or two sentences, done through email. My backend takes care of reading emails, interpreting api.ai responses, storing useful data locally, and sending the next questions. Before sending to api.ai, messages are split into sentences.
What I've seen from example conversations already handled by humans is that end users quite often send several significant pieces of information in one sentence. That means that, out of e.g. 8 possible pieces of information I can have in total (mostly not required), any 2 of them can arrive in one sentence.
How should I organize that?
I started with one intent for each field I require. But to handle the case of any two fields in one sentence, I am extending the "user says" examples with the other fields too. In the end I will have 8 intents which are all filled with similar examples.
Now I am thinking of having just one intent and covering everything in it. That might work, but the real question is: is that really the way to do it?
Here are example conversations to describe the issue better.
v1 - simple way like in api.ai examples
- u: Hi. I need a notebook below $700.
- b: Great. What size should it be?
- u: 17"
- b: I have a gaming one at $590 and a professional one for $650.
- u: I lean more toward the gaming one.
v2 - what I can expect from real life examples
- u: Hi, I would like to buy a 15 inch gaming laptop.
- b: Great, what price range?
- ...
Api.ai has a feature called slot filling that allows you to collect parameter values within a single intent. It's great for building conversational interfaces. You can check whether it's compatible with your use case.
Here's how such an intent could look for the examples you provided:
See the "book_notebook" intent:
and how it would work in conversation:
See a test for the "book_notebook" intent:
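Since the "book_notebook" intent itself is only shown as screenshots, here is a hedged sketch of how a single slot-filling intent with required parameters and prompts might be defined programmatically, assuming the google-cloud-dialogflow (v2) Python client (api.ai's successor); the parameter names, the @laptop-type custom entity, and the prompts are made up for illustration.

```python
from google.cloud import dialogflow

def create_book_notebook_intent(project_id: str):
    client = dialogflow.IntentsClient()
    parent = dialogflow.AgentsClient.agent_path(project_id)

    # Required slots are prompted for if the user doesn't mention them up front.
    parameters = [
        dialogflow.Intent.Parameter(
            display_name="price",
            entity_type_display_name="@sys.unit-currency",
            value="$price",
            mandatory=True,
            prompts=["Great. What price range?"],
        ),
        dialogflow.Intent.Parameter(
            display_name="size",
            entity_type_display_name="@sys.number",
            value="$size",
            mandatory=True,
            prompts=["What size should it be?"],
        ),
        dialogflow.Intent.Parameter(
            display_name="type",
            entity_type_display_name="@laptop-type",  # hypothetical custom entity
            value="$type",
            mandatory=False,
        ),
    ]

    # Annotated training phrase, so slots can also be filled from the first sentence.
    training_phrase = dialogflow.Intent.TrainingPhrase(parts=[
        dialogflow.Intent.TrainingPhrase.Part(text="Hi I would like to buy a "),
        dialogflow.Intent.TrainingPhrase.Part(
            text="15", entity_type="@sys.number", alias="size"),
        dialogflow.Intent.TrainingPhrase.Part(text=" inch "),
        dialogflow.Intent.TrainingPhrase.Part(
            text="gaming", entity_type="@laptop-type", alias="type"),
        dialogflow.Intent.TrainingPhrase.Part(text=" laptop"),
    ])

    intent = dialogflow.Intent(
        display_name="book_notebook",
        training_phrases=[training_phrase],
        parameters=parameters,
    )
    return client.create_intent(request={"parent": parent, "intent": intent})
```

With mandatory parameters and prompts, the agent asks for whatever the user didn't supply in their first sentence, which is exactly what the v2 conversation above relies on.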