Actions on Google text-to-speech errors for numbers - dialogflow-es

I am making an Action for the Assistant. Part of its functionality is setting the channel of my TV. For example, I can say "favorite 1" to set my TV to favorite channel number 1.
My problem is that when I utter "favorite 2", Google Assistant's speech recognition seems to convert this input to "favorite to". The same happens with other numbers, e.g. "3" is transcribed as "tree", "4" as "for", and "8" as "ate".
How do I go about this? Should I make an entity for numbers and add those erroneous conversions as synonyms? Is there a more appropriate solution for this?

Try defining the channel number as a parameter and use @sys.number as the entity type for that parameter. Then you can use this parameter in your training phrases.
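
On top of that, a webhook can guard against any remaining mis-transcriptions by mapping the misheard words back to digits. Below is a minimal sketch of such a handler (Python/Flask); the parameter name "channel" and the word-to-digit map are assumptions for illustration, and the request/response fields follow the Dialogflow ES v2 webhook format.

# Minimal sketch of an ES v2 webhook that reads a @sys.number parameter
# (here called "channel", a hypothetical name) and falls back to a
# word-to-digit map for the mis-transcriptions mentioned in the question.
from flask import Flask, request, jsonify

app = Flask(__name__)

MISHEARD = {"to": 2, "too": 2, "tree": 3, "for": 4, "ate": 8}

@app.route("/webhook", methods=["POST"])
def webhook():
    query_result = request.get_json()["queryResult"]
    channel = query_result.get("parameters", {}).get("channel")
    if not channel:
        # No number extracted: scan the raw transcript for misheard number words.
        for word in query_result.get("queryText", "").lower().split():
            if word in MISHEARD:
                channel = MISHEARD[word]
                break
    if channel:
        reply = f"Setting your TV to favorite channel {int(channel)}."
    else:
        reply = "Which favorite channel number would you like?"
    return jsonify({"fulfillmentText": reply})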

Related

Why can't Dialogflow recognize CNY?

[screenshot: a training phrase annotated with two @sys.currency-name parameters]
I don't know Chinese, but I'll venture the following answer based on my observations.
In the image, a single training phrase has two parameters with the same name but two different values for @sys.currency-name. What is strange to me is that both parameters are highlighted in purple. I think one parameter was renamed to have the same name as the other, which is not correct because they hold different values. With one parameter name mapped to two different values, Dialogflow has to decide which one to return, or how to combine them.
Please try changing the parameter names to currency-name-1 and currency-name-2. Most probably Dialogflow will then return each value (USD and CNY) in its own parameter. I think this should fix the behavior, because @sys.currency-name is based on ISO 4217, where CNY is already included.

Select multiple options from suggestions in Dialogflow

I am building a chatbot using Dialogflow and I want the user to be able to select multiple responses from the suggestions.
Is there any way to do this using Dialogflow fulfillment or something else? Or is there some other alternative for implementing this?
I had the same issue, and my finding is that we can't select multiple suggestion chips.
However, if you want the user to select multiple items within a single intent, you can use the "IS LIST" option under Actions and parameters.
Suppose you have an entity "fruits" with values {apple, orange, mango, etc.}:
Bot: What would you like to eat?
User: I would like to have apple and mango
For this, add the training phrase "I would like to have apple and orange", and under Actions and parameters tick "IS LIST" against the "fruits" parameter.
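
With "IS LIST" enabled, the parameter then arrives in the webhook as an array rather than a single value. A minimal sketch of reading it (the parameter name "fruits" comes from the example above; the field paths follow the Dialogflow ES v2 webhook format):

# Minimal sketch: reading a list parameter ("fruits", marked IS LIST)
# from a Dialogflow ES v2 webhook request body.
def handle_order(webhook_body: dict) -> dict:
    params = webhook_body["queryResult"].get("parameters", {})
    fruits = params.get("fruits", [])          # e.g. ["apple", "mango"]
    if fruits:
        reply = "Noted: " + ", ".join(fruits) + "."
    else:
        reply = "What would you like to eat?"
    return {"fulfillmentText": reply}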
Suggestion Chips are meant to be hints or ways to pivot the conversation.
Depending on your use case, you can guide the user through the different options. For example, if they are selecting a shirt, you can first ask for the color and then the size, splitting the choices into smaller subsets. A different example is selecting music styles (where one can choose one or several options): ask them to tell you the styles they like (while providing 5 suggestion chips), then confirm the styles in your response and allow them to add more (while providing 4 more suggestion chips of music styles and another that says "All done"). I would also use this design to implement ranking of their preferred music.

Dialogflow matches irrelevant phrases to existing intents

I created a chatbot which tells the user the names of the members of my (extended) family and where they are living. I created a small MySQL database which stores these data, and I fetch them with a PHP script whenever appropriate, depending on the user's interaction with the chatbot.
For this reason, I have created two intents in addition to the Default Fallback Intent and the Default Welcome Intent:
Names
Location_context
The first intent ('Names') is trained with phrases such as 'What is the name of your uncle?' and has an output context. The second intent ('Location_context') is trained with phrases such as 'Where is he living?', 'Where is he based?', 'Where is he located?', 'Which city does he live in?', etc., and has an input context (from 'Names').
In general, this basic chatbot works well for what it is made for. However, my problem is that (after the 'Names' intent is triggered) if you ask something nonsensical such as 'Where is he snowing?', the chatbot will trigger the 'Location_context' intent and respond (as defined) that 'Your uncle is living in New York'. Let me also mention that, as I have structured the chatbot so far, these responses get a score higher than 0.75, which is pretty high.
How can I make my chatbot trigger the Default Fallback Intent for these nonsensical questions (or even for more reasonable questions such as 'Where is he eating?', which are still not exactly related to the 'Location_context' intent), instead of triggering intents such as 'Location_context' that merely share some keywords with them, such as the word 'Where'?
Try playing around with the ML Classification Threshold in your agent settings (Settings > ML Settings). By default it comes with a very low threshold (0.2), which makes matching a little aggressive.
Define the threshold value for the confidence score. If the returned value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered.
You can see the score for your query in the JSON response:
{
  "source": "agent",
  "resolvedQuery": "Which city does he live at?",
  "metadata": {
    "intentId": "...",
    "intentName": "Location_context"
  },
  "fulfillment": {
    "speech": "Your uncle is living in New York",
    "messages": [{
      "type": 0,
      "speech": "Your uncle is living in New York"
    }]
  },
  "score": 0.9
}
Compare the scores between the right and wrong matches and you will have a good idea of which confidence threshold is right for your agent.
After changing this setting, let the agent train, try again, and adjust it until it meets your needs.
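
If you want to collect those scores systematically, a small script can run your known-good and known-bad test phrases through the agent and print the matched intent and confidence for each. A minimal sketch, assuming the Dialogflow v2 Python client (google-cloud-dialogflow) and a hypothetical project ID; in the v2 API the v1 "score" field is exposed as intentDetectionConfidence.

# Minimal sketch: print matched intent and confidence for test phrases.
from google.cloud import dialogflow

def print_scores(project_id, test_phrases):
    sessions = dialogflow.SessionsClient()
    session = sessions.session_path(project_id, "threshold-test-session")
    for phrase in test_phrases:
        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=phrase, language_code="en")
        )
        result = sessions.detect_intent(
            request={"session": session, "query_input": query_input}
        ).query_result
        print(f"{phrase!r:40} -> {result.intent.display_name:25} "
              f"confidence {result.intent_detection_confidence:.2f}")

print_scores("my-gcp-project", [
    "Which city does he live in?",   # should match Location_context
    "Where is he snowing?",          # nonsense, should ideally fall back
])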
Update
For queries that still get a high score, like "Where is he cooking?", you could add another intent, a custom fallback, to handle those false positives, maybe with a custom entity NonLocationActions, and use template mode (@) in the user expressions:
where is he @NonLocationActions:NonLocationActions
which city does he @NonLocationActions:NonLocationActions
These queries will then get a score of 1 in the new custom fallback, instead of 0.7 in the location intent.
I am working on a chatbot using Dialogflow and am running into similar problems.
Our test manager invented the 'Sausage Test', where she replaces certain words in the question with the word 'sausage', and our bot fell apart! Even with a threshold of 0.8 we still regularly hit issues where intents fire for nonsensical sentences, and with an enterprise-level chatbot that gives out product installation advice we could not afford to get this wrong.
We found that in some cases we were getting the maximum confidence level (1) for clearly dodgy 'sausaged' input.
The way we got around this issue is to back all the answers onto an API and use the confidence score in conjunction with other tests. For example, we introduced regular-expression tests to check for keywords in the question, together with parameter matching (making sure that the key entity parameters were also passed through in the data from Dialogflow).
More recently we have also started to include a low-confidence sentence at the start of the reply, i.e. 'I think you are asking about XYZ, but if not please rephrase your question. Here is your answer.' We do this when all our extra tests fail and the confidence is between 0.8 and 0.98.
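
A minimal sketch of that kind of combined check, done in the webhook rather than relying on Dialogflow's threshold alone. The field paths follow the Dialogflow ES v2 webhook format; the keyword pattern, the required "person" parameter, and the 0.8/0.98 cut-offs are illustrative assumptions taken from the description above.

import re

# Keywords we expect in a genuine location question (assumed pattern).
LOCATION_KEYWORDS = re.compile(r"\b(live|living|based|located|city)\b", re.IGNORECASE)

def classify_reply(webhook_body: dict) -> str:
    """Return 'answer', 'hedged' or 'fallback' from confidence plus extra tests."""
    query_result = webhook_body["queryResult"]
    confidence = query_result.get("intentDetectionConfidence", 0.0)
    query_text = query_result.get("queryText", "")
    params = query_result.get("parameters", {})

    extra_tests_pass = (
        bool(LOCATION_KEYWORDS.search(query_text))   # regex keyword test
        and bool(params.get("person"))               # hypothetical required entity
    )
    if confidence >= 0.98:
        return "answer"
    if confidence >= 0.8 and extra_tests_pass:
        return "answer"
    if confidence >= 0.8:
        return "hedged"    # prepend "I think you are asking about XYZ, ..."
    return "fallback"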

Constructing Date-Periods Using "Since" in Api.ai

I am building a Google Assistant application with api.ai that delivers data aggregated over a date period via a webhook.
It is common for people to ask for date periods using the word "since", for instance:
"What is the data since last monday" (tuesday - now)
or the even trickier:
"What is the data since last year". (ambiguous reference to date-period)
Can api.ai parse these date periods, or is it necessary to detect that the request is of a special "relative" type and then construct the date period manually?
You will probably want to use something like the @sys.date-period pre-defined entity.
For example, if you create an Intent with a "User says" phrase with parameters such as:
[screenshot]
and a response:
[screenshot]
and then enter in some queries like:
[screenshot]
These might not be exactly what you need, so you may need to craft more of your own. If so, check out the @sys.date pre-defined entity, which may do some of the work for you, and the complete list at https://docs.api.ai/docs/concept-entities#section-date-and-time
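
If a particular "since ..." phrasing isn't covered by the built-in entities, the webhook can construct the period itself. A minimal sketch of turning a relative reference like "last monday" or "last year" into an explicit start/end pair; the recognized phrases and the "start/end" ISO output format are assumptions for illustration.

from datetime import date, timedelta

def period_since(reference, today=None):
    """Build a 'start/end' date period for a relative phrase like 'last monday'."""
    today = today or date.today()
    if reference == "last monday":
        # Most recent Monday strictly before today (a full week back if today is Monday).
        days_back = today.weekday() or 7
        start = today - timedelta(days=days_back)
    elif reference == "last year":
        # One possible reading of the ambiguous "since last year": from 1 January of last year.
        start = date(today.year - 1, 1, 1)
    else:
        raise ValueError(f"unhandled relative reference: {reference!r}")
    return f"{start.isoformat()}/{today.isoformat()}"

# On Tuesday 2017-06-13 this yields "2017-06-12/2017-06-13".
print(period_since("last monday", today=date(2017, 6, 13)))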

How to find out when the user has entered wrong input when chatting with a bot?

I am developing a Bot Framework application that shows nearby places. When the user enters something like "show me nearby places", I pass the key value "places" to the Google API and it produces the exact results. My question is: when the user enters wrong input like "show nearby places" or "show places nearby me", I want to show the message "please enter correct input". How do I show such a user-friendly message? Please give me a proper suggestion.
Thanks in advance.
You will have to use an NLP tool such as wit.ai, luis.ai or api.ai. The jury is out on which is the best tool, so my advice is to try them all and see for yourself.
You will essentially define stories and tell the NLP engine what the components of a statement are. So if you pass a statement to the NLP engine, it will parse the intents and objects for you.
For example, say your statement is "show me places nearby". Set your intent as 'nearby' and your entity as 'wit/location'. The tool should then recognize variants of the above statement.
You can check out the recipe wit.ai have created for it here.
Otherwise, if you just want a string-matching mechanism, check whether the user's message contains the substring 'location' and then show nearby places. Check out gupshup.io, which has a Bot Builder that allows you to do this easily. (Disclosure: I work there.)
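
A minimal sketch of that simple string-matching fallback, independent of any particular framework; the keyword set and the messages are assumptions for illustration.

# Simple keyword check before calling the places API; anything that doesn't
# mention a place-related keyword gets a "please enter correct input" reply.
PLACE_KEYWORDS = ("places", "nearby", "location")

def handle_message(text):
    lowered = text.lower()
    if any(keyword in lowered for keyword in PLACE_KEYWORDS):
        return "Searching for nearby places..."   # call the places API here
    return "Please enter correct input, e.g. 'show me nearby places'."

print(handle_message("show places nearby me"))   # matched by keywords
print(handle_message("what's the weather"))      # falls back to the help message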
