I created a chatbot which tells the user the names of the members of my (extended) family and where they are living. I have created a small MySQL database that stores these data, and I fetch them with a PHP script whenever appropriate, depending on the user's interaction with the chatbot.
For this reason, I have created two intents in addition to the Default Fallback Intent and the Default Welcome Intent:
Names
Location_context
The first intent ('Names') is trained with phrases such as 'What is the name of your uncle?' and has an output context. The second intent ('Location_context') is trained with phrases such as 'Where is he living?', 'Where is he based?', 'Where is he located?', 'Which city does he live in?', etc., and has an input context (from 'Names').
In general, this basic chatbot works well for what it is made for. However, my problem is that (after the 'Names' intent is triggered) if you ask something nonsensical such as 'Where is he snowing?', the chatbot still triggers the 'Location_context' intent and responds (as defined) with 'Your uncle is living in New York'. Let me also mention that, as I have structured the chatbot so far, these responses get a score higher than 0.75, which is pretty high.
How can I make my chatbot trigger the Default Fallback Intent for these nonsensical questions (or even for more reasonable questions such as 'Where is he eating?', which are nevertheless not really related to the 'Location_context' intent), instead of triggering intents such as 'Location_context' that merely share some keywords with the question, such as the word 'Where'?
Try playing around with the ML CLASSIFICATION THRESHOLD in your agent settings (Settings > ML Settings). By default it comes with a very low value (0.2), which makes matching quite aggressive.
Define the threshold value for the confidence score. If the returned value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered.
You can see the score for your query in the JSON response:
{
  "source": "agent",
  "resolvedQuery": "Which city does he live at?",
  "metadata": {
    "intentId": "...",
    "intentName": "Location_context"
  },
  "fulfillment": {
    "speech": "Your uncle is living in New York",
    "messages": [{
      "type": 0,
      "speech": "Your uncle is living in New York"
    }]
  },
  "score": 0.9
}
Compare the scores between right and wrong matches and you will have a good idea of which confidence threshold is right for your agent.
After changing this setting, let the agent train, try again, and adjust the value until it meets your needs.
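If you call the agent over HTTP from your own backend, you can also enforce an extra cut-off in code on top of the agent setting. Below is a minimal TypeScript sketch; the field names follow the JSON response above, and the 0.75 value is only an assumed starting point you would tune for your own agent.

interface AgentResponse {
  resolvedQuery: string;
  metadata: { intentName: string };
  fulfillment: { speech: string };
  score: number;
}

const MIN_SCORE = 0.75; // assumed starting point; tune by comparing right vs. wrong matches

function pickReply(res: AgentResponse): string {
  // Treat anything below the cut-off as a fallback, even if an intent matched
  if (res.score < MIN_SCORE || res.metadata.intentName === 'Default Fallback Intent') {
    return "Sorry, I didn't get that. Could you rephrase?";
  }
  return res.fulfillment.speech; // e.g. "Your uncle is living in New York"
}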
Update
For queries that will still get a high score, like 'Where is he cooking?', you could add another intent, a custom fallback, to handle those false positives, perhaps with a custom entity, NonLocationActions, and use template mode (#) in the user expressions:
where is he #NonLocationActions:NonLocationActions
which city does he #NonLocationActions:NonLocationActions
That way these queries will get a score of 1 in the new custom fallback, instead of getting 0.7 in the location intent.
I am working on a chatbot using Dialogflow and am running into similar problems.
Our test manager invented the 'Sausage Test', where she replaces certain words in the question with the word 'sausage', and our bot fell apart! Even with a threshold of 0.8 we still regularly hit issues where intents fire for nonsensical sentences, and with an enterprise-level chatbot that gives out product installation advice we could not afford to get this wrong.
We found that in some cases we were getting max confidence levels (1) for clearly dodgy 'sausaged' input.
The way we have got around this issue is to back all the answers onto an API and use the confidence score in conjunction with other tests. For example, we have introduced regular-expression tests to check for keywords in the question, together with parameter matching (making sure that key entity parameters are also being passed through in the data from Dialogflow).
More recently we have also started to include a low-confidence sentence at the start of the reply, e.g. 'I think you are asking about XYZ, but if not please rephrase your question. Here is your answer.' We do this when all our extra tests fail and the confidence score is between 0.8 and 0.98.
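To illustrate, here is a rough TypeScript sketch of those extra checks; the field names, keywords and thresholds are made up for the example and are not from our production system.

interface NluResult {
  queryText: string;
  intent: string;
  confidence: number;
  parameters: Record<string, string | undefined>;
}

// Illustrative keyword test for a "location"-style question
const LOCATION_KEYWORDS = /\b(live|living|based|located|city|where)\b/i;

function buildReply(result: NluResult, answer: string): string {
  const keywordsOk = LOCATION_KEYWORDS.test(result.queryText);
  const paramsOk = Boolean(result.parameters['person']); // key entity actually came through
  const highConfidence = result.confidence >= 0.98;      // illustrative thresholds

  if (!keywordsOk || !paramsOk) {
    return "Sorry, I'm not sure what you are asking. Could you rephrase your question?";
  }
  if (!highConfidence && result.confidence >= 0.8) {
    // Low-confidence preamble, as described above
    return `I think you are asking about ${result.intent}, but if not please rephrase your question. ${answer}`;
  }
  return answer;
}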
Related
I am trying to develop an Alexa skill where Alexa asks the user for a single word (and then uses the word in a sentence).
The user should be able to respond with just the word, without any phrase around it. The word can be any word found in the dictionary.
So I am trying to create an Intent with an utterance like this:
{word}
The first question is: what to use for the {word} slot? There is AMAZON.SearchQuery, which is for phrases rather than single words, but maybe that is good enough.
Unfortunately, when I try to build the model I get:
Sample utterance "{word}" in intent "GetNextWordIntent" must include a carrier phrase. Sample intent utterances with phrase types cannot consist of only slots.
So I really need a phrase around the slot, which is not what I want.
How can I create an Intent (or do it some other way) to ask the user for a single word?
I found this project: https://github.com/rubyrocks/alexa-backwardsword, which claims to be a skill that asks the user for a word and says it backwards. Unfortunately the project does not really explain how it is deployed or how it works in detail.
You can't use the AMAZON.SearchQuery slot type with a slot-only ("variable only") utterance.
You can do that with other slot types.
Why?
Because it would conflict with ALL your other intents.
{
  "name": "ResponseIntent",
  "samples": ["{response}"]
},
{
  "name": "QuestionIntent",
  "samples": ["play a new question"]
},
When a user wants to invoke other intents, that would only work occasionally; most of the time they would be routed to ResponseIntent, because the response slot is a SearchQuery and can match anything.
What if the user just wants to quit your skill at that point?
User: Alexa, stop
Alexa: That's not the correct response
User: Alexa, quit!
Alexa: That's not the right response!
User is frustrated.
It generates friction. That's why the utterance requires other (carrier) words.
Creating an Alexa skill requires thinking differently.
It is not a web application or a voicemail menu, and it can be quite challenging at times.
There is no button to press to interact with your skill.
A skill is not a single, fixed path: the user can do whatever they want at any time: ask for help, invoke other intents, quit your skill, ...
What you can do is provide a specific slot type, based on the context. For example, if you expect the word to be an animal, then you can use a slot-only utterance:
"{animal}",
"the {animal}"
If you use the AMAZON.Animal slot type.
There are plenty of slot types available, and you can also extend one or create your own slot type with the values you expect (or even create a dynamic one).
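For completeness, here is a minimal handler sketch in TypeScript using the ask-sdk-core library, assuming an intent called GetNextWordIntent with a single {animal} slot of type AMAZON.Animal (the names are illustrative, not from your skill):

import * as Alexa from 'ask-sdk-core';

const GetNextWordIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetNextWordIntent';
  },
  handle(handlerInput) {
    // Raw value the user said for the {animal} slot (may be empty)
    const word = Alexa.getSlotValue(handlerInput.requestEnvelope, 'animal');
    const speech = word
      ? `${word} backwards is ${word.split('').reverse().join('')}.`
      : 'I did not catch a word. Please say a single word.';
    return handlerInput.responseBuilder
      .speak(speech)
      .reprompt('Please say a single word.')
      .getResponse();
  },
};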
I'm trying to create a custom action through Google Assistant. I have custom user data which is defined by the user, and I want the user to ask me something about this data, identifying which entry they want to know about by supplying its name.
ex:
User says "Tell me about Fred"
Assistant replies with "Fred is red"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
The problem I'm having is how to add a training phrase or re-prompt for the case where the user supplies a name which doesn't exist.
ex:
User says "Tell me about Greg"
Assistant replies with "I couldn't find 'Greg'. Who would you like to know about?"
[
  {
    "name": "Fred",
    "info": "Fred is red"
  }
]
I've tried adding a training phrase which only contains the 'name' parameter, but then if the user says "Tell me about Fred", the "name" parameter is set to "Tell me about Fred" instead of just "Fred", which means it ignores the other training phrases I have set up.
Anyone out there who can be my Obi-wan Kenobi?
Edit:
I've used Alexa for this same project, and there I send Alexa an elicitSlot directive. Can something similar be implemented?
There is no real equivalent to an elicitSlot directive in this case (at least not the way I usually see it used), but Dialogflow does provide several tools for accomplishing what you're trying to do.
The general approach is that, when sending your reply, you also set an Output Context with the reply. You can set as parameters for the Context any information that you want to retain (what value you're prompting for and possibly other state you've already collected).
Then you can have Intents that have this context set as an Input Context. The Intent will then only be matched if the Context is active. This Intent can match @sys.any, or whatever other Entity type might be appropriate in this case.
One advantage of this approach is that it allows for users to reply more conversationally, or pivot their reply away from the prompting question you've just asked. It allows for users to answer within the Context, or through other Intents that you've already setup for other purposes.
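As a sketch of what the webhook side could look like, here is a minimal TypeScript example using the raw Dialogflow v2 webhook response format; the context name "awaiting_name", the parameter names and the data array are illustrative assumptions, not a drop-in implementation.

const people = [{ name: 'Fred', info: 'Fred is red' }];

function tellMeAbout(session: string, requestedName: string) {
  const entry = people.find(p => p.name.toLowerCase() === requestedName.toLowerCase());
  if (entry) {
    return { fulfillmentText: entry.info };
  }
  // Not found: re-prompt and set an Output Context so a follow-up Intent
  // (Input Context "awaiting_name", parameter typed @sys.any) catches the bare name.
  return {
    fulfillmentText: `I couldn't find '${requestedName}'. Who would you like to know about?`,
    outputContexts: [{
      name: `${session}/contexts/awaiting_name`,
      lifespanCount: 2,
      parameters: { lastRequestedName: requestedName },
    }],
  };
}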
My LUIS model used to return all intent scores when queried. However, now it only returns the top intent, entities and sentiment analysis.
I have "Include all predicted intent scores" checked on and I publish and still am only getting back topScoringIntent, entities, and sentimentAnalysis.
We're working on a new feature that is going to take advantage of the score breakdown of all intents and this is blocking us.
"Include all predicted intent scores" is merely UI setting by -> you may see changing the url value in "Keys and Endpoints" section in column "Endpoint" as you can see below.
If you would like to get all intents and scores, you need to use the following URL when executing an HTTP GET against LUIS:
https://yourlocation.api.cognitive.microsoft.com/luis/v2.0/apps/yourAppId?subscription-key=yourSubscriptionKey&verbose=true&timezoneOffset=-360&q=Test
If you omit verbose=true from the query string, or set its value to false, then you get only topScoringIntent (and sentiment, if sentiment detection is turned on).
I have been reading about Dialogflow and there is one thing that is still unclear to me. I'll try to give an example.
I want to implement a conversation like the following:
User: Hello Google, what are some interesting cities?
Bot: Hello there! Sydney, New York and Berlin are nice.
User: Could you tell more about the second city?
Bot: Sure. New York is amazing. In New York, you can ...
As you can see, I am building up a data context. After the first question, we should remember that we answered Sydney, New York and Berlin, so we understand what 'the second city' actually refers to in the second question.
Should we store this data in the webhook service, or is this stored in a context in Dialogflow? If we have to store such data in the webhook service, how can we distinguish between different ongoing conversations?
Storing it in a Dialogflow Context is an ideal solution - this is exactly what Contexts were made for! You phrased your question using the same term, and this is no coincidence.
Conceptually, you might do this with a setup like this:
User: What are some interesting cities?
Dialogflow sees no contexts and matches an Intent asking for cities.
Agent replies: Sydney, New York, and Berlin are nice.
Agent sets context "cities" with parameter "cities" -> "Sydney, New York, Berlin"
User: Tell me more about the second one?
Dialogflow has an Intent that expects an incoming context of "cities" with a text pattern like "Tell me more about the (number index) one?" It sends the request to that Intent along with the currently active contexts.
Agent gets a parameter with the index and the context "cities". It looks up the context's parameter, turns the string into an array, and gets the city based on the index.
Agent replies: New York is a fun place to visit!
Agent sets context "city" with parameter "current" -> "New York"
User: Tell me more!
Dialogflow matches this phrase, sees that the "city" context is still active, and sends it to an Intent that reports more.
Agent says: More awesome stuff about New York.
User: Tell me about that first city instead.
Dialogflow matches it against the same intent as before.
Agent says: Sydney is pretty cool.
Agent changes the "city" context so the parameter "current" -> "Sydney" and "previous" -> "New York".
You can now create other intents that handle phrases like "Compare these two" or "tell me more about the other one".
Update
This setup strikes a good balance between what Dialogflow does well (parse messages and determine the current state of the conversation) and what your webhook does well (determine the best answers to those questions).
You could probably do much of that inside Dialogflow, but it would start to get very very messy very quickly. You would need to create multiple Intents to handle the results from each value individually, which doesn't scale. You'd also need to create a Context for each city (so you'd have a "city_ny" and "city_sydney" Context), since you can only match on the presence of a Context, not the parameters it might have.
Using the webhook (even the built-in fulfillment system that we now have) will likely work much better.
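As an illustration, here is a rough TypeScript sketch of the webhook side of the walkthrough above, using the Dialogflow v2 webhook response format. The context and parameter names follow the walkthrough, but the exact shapes are assumptions, not a drop-in implementation.

interface DialogflowContext {
  name: string;
  lifespanCount?: number;
  parameters?: Record<string, unknown>;
}

// Handles "Tell me more about the (index) one?" given the currently active contexts.
function tellMeMore(session: string, contexts: DialogflowContext[], ordinal: number) {
  const citiesContext = contexts.find(c => c.name.endsWith('/contexts/cities'));
  const cities = String(citiesContext?.parameters?.cities ?? '').split(', ');
  const city = cities[ordinal - 1]; // "second" -> ordinal 2 -> index 1

  return {
    fulfillmentText: city ? `${city} is a fun place to visit!` : 'Which city do you mean?',
    outputContexts: [{
      name: `${session}/contexts/city`, // keep track of the city being discussed
      lifespanCount: 5,
      parameters: { current: city },
    }],
  };
}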
So I have a bot built with the Microsoft Bot Framework and it's using the LUIS API for text recognition. With this bot, I'm able to ask for information about different devices that I have in my backend. They have names like Desk, Desk 2 and Phone Booth 4. The first and second names work just fine, but whenever I send a name that contains two spaces or more, LUIS fails to recognize it. I have added all the names to a feature list in LUIS, but it doesn't seem to do anything. When the bot code executes the method for that intent, the entity is just null whenever I send one of these names. Any idea how I might solve this? As I described, names with just one space, like Desk 2, work just fine. Maybe there is a way to save multiple words as an entity inside LUIS?
In the image below, the top entry is "show me phone booth 4" and the bottom one "show me desk 2".
It'll take a little leg work, but have you tried updating your model programmatically?
Per the LUIS API reference, you can label individual utterances or do it in batches. The benefit of doing it this way is that you can select what should be recognized as an entity based on character index position.
Example:
{
  "text": "Book me a flight from Cairo to Redmond next Thursday",
  "intentName": "BookFlight",
  "entityLabels": [
    {
      "entityName": "Location::From",
      "startCharIndex": 22,
      "endCharIndex": 26
    },
    {
      "entityName": "Location::To",
      "startCharIndex": 31,
      "endCharIndex": 37
    }
  ]
}
I admit I haven't attempted to do this before, but I do not see how labeling/training this way would logically fail.
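If you do want to try it, here is a rough TypeScript sketch of labelling an utterance by character position against the v2 authoring API. The endpoint, app ID, version, key and intent name are placeholders I'm assuming for the example, so double-check them against the current API reference before relying on this.

async function labelUtterance(text: string, entityValue: string, entityName: string) {
  const start = text.indexOf(entityValue);
  const label = {
    text,
    intentName: 'ShowDevice', // assumed intent name
    entityLabels: [{
      entityName,
      startCharIndex: start,
      endCharIndex: start + entityValue.length - 1, // inclusive, as in the example above
    }],
  };

  await fetch(
    'https://westus.api.cognitive.microsoft.com/luis/api/v2.0/apps/yourAppId/versions/0.1/examples',
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': 'yourAuthoringKey',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify([label]), // the batch endpoint takes an array of labelled examples
    },
  );
}

// e.g. labelUtterance('show me phone booth 4', 'phone booth 4', 'Device');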
One thing I do note about your entities is that they're composed of an item and also a number. You could throw them into a composite entity; but in this case doing it the way I mentioned above is a good way to do what you're looking for.
That said, if you plan on using the office-furniture pieces(?) as entities for a separate intent, say 'PurchaseNewOfficePieces', it might pay to create a composite entity for 'Desk 2' and 'Phone Booth 4'.