After going through the documentation, I have found Error in actions, where we can define errors. But what if an utterance is not specified in the training data? How will we handle fallback?
In order to handle unknown inputs, what I did was create a series of dialogs that matched against the inputs that were missing. Below is one of them. Try playing around with it.
dialog (Elicitation) {
  match: modelType
  template("I didn't understand what type you were trying to say..")
}
I am having a bot set up the beginnings of a game. A human inputs the command /startbrawl to initiate the setup of the game (creating the deck objects), but the two players need to be identified first. A message sent from another command says "Player A is #[username A]. Player B is #[username B]." in the channel this game is happening in. I want the bot, from this new command, to look at the first message sent in the channel, which is always the "Player A is etc..." message (and is always sent by the bot), and pull both usernames from it in order to tell this new command who is Player A and who is Player B. The code I have most recently (after trying multiple things) is this:
if (userInput.startsWith("!startbrawl") === true) {
  message.channel.fetchMessages().then(messages => {
    const botMessages = messages.filter(message => message.author.bot);
    console.log(botMessages.mentions.members.first()); // this will be Player A. I'd repeat the same for Player B with .last() instead.
  });
}
This gives me an error:
(node:15368) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'first' of undefined.
I have changed the last line to console.log(botMessages) to get all the info about the messages the filter finds, but trying to extract only part of it either complains about things not being defined or just logs undefined with no errors. Either way, something isn't working the way I think I need it to.
The only other thing I've debated trying is exporting variables from the command that runs prior to this new command. Player A and Player B are defined in the command used to make the channel that this new command is then used in. However, I've never had luck with exporting variables when I've tried it in other instances. I use a command handler, so I'm not sure if that affects how exporting variables works... Which method would work best to set up the card game? I'm a novice in general, just figuring things out as I go, so some advice (beyond take a course, look up basics, etc.) is greatly appreciated. I've taken an online course on JavaScript and I work best figuring things out first hand.
Thanks for the help in advance!
As botMessages is a Collection, you want to get a Message object out of it first by calling
botMessages.first()
So try logging something like
botMessages.first().mentions.members.first()
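For context, here's a minimal sketch of how the whole command could look, assuming discord.js v11 (where fetchMessages() exists) and assuming the first bot message the filter returns is the "Player A is... Player B is..." announcement:

// Minimal sketch, assuming discord.js v11 and that the "Player A is ..." message
// is the first bot message in the fetched collection.
if (userInput.startsWith("!startbrawl")) {
  message.channel.fetchMessages()
    .then(messages => {
      // Keep only messages written by the bot.
      const botMessages = messages.filter(m => m.author.bot);

      // botMessages is a Collection of Messages, so pull a single Message out first.
      const setupMessage = botMessages.first();

      // The setup message mentions both players; grab them from its mentions.
      const playerA = setupMessage.mentions.members.first();
      const playerB = setupMessage.mentions.members.last();

      console.log(`Player A: ${playerA.user.username}, Player B: ${playerB.user.username}`);
    })
    .catch(console.error);
}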
For example, if you have IntentA and you add 2 follow-up intents, IntentB and IntentC, it works fine: a context is added because IntentA doesn't have an output context yet. But here is the problem. Sometimes, if you add another one, for example a FallbackIntent, it just adds another context (sometimes), and even if you delete it in both (IntentA and FallbackIntent) so they both have the same context, meaning they should still be connected and the hierarchy shouldn't change, it still does. Everything still works perfectly, but this is weird behavior. Any ideas why this happens and how to fix it?
Intent A
Intent B
Fallback
The best way to resolve this issue and organize the structure of your Dialogflow agent is to upload the intents using the create_intent() function of the Dialogflow API.
You can give the root intent as parent_followup_intent_name, and all the intents that are given this root intent will fall under the same parent. Note that you will need to give the root intent's ID, not its name.
You can read more about the create_intent API for the Python SDK.
import dialogflow

intents_client = dialogflow.IntentsClient()

intent = dialogflow.types.Intent(
    display_name=display_name,
    training_phrases=training_phrases_parts,
    messages=response,
    input_context_names=input_contexts,
    output_contexts=output_context_list,
    parent_followup_intent_name=root_intent,  # the root intent's ID, not its display name
)

intents_client.create_intent(parent, intent)
EDIT:
As requested, here's a second and easier way of doing this that doesn't require any programming knowledge.
Suppose your agent looks like the first screenshot below, and you want to group intents under the how to solve intent:
1. Go to Settings -> Export and Import -> Export as ZIP to export the agent.
2. Once exported, unzip the file and go to the intents folder. Your files will look something like the screenshot below.
3. Open the how to solve.json file and copy the id of this intent.
4. Open all the JSON files which you want to group under the how to solve intent (note that we only have to open the files which do not have _usersays_en in their names, as those only contain user utterances).
5. Paste the id of the how to solve intent as parentId in these JSON files, as in the screenshot below (in this case the intent id of the how to solve intent was b2131b0e-f86d-429d-957c-65c070ddd5df).
6. Once all the changes have been made, zip the directory again.
7. Go back to Settings -> Export and Import -> Restore from ZIP and select the zip file you have just created.
The intent list will look like the final screenshot once the process is complete.
Hope it helps.
#sid8491 - this is absolutely ingenious :)
Thanks for that! Works like a charm and I can confirm that this is just a visual representation. No need to worry about changing your code.
Just a small addition: When you already have follow-up intents, they already carry
"id": "70a48f63-662b-48d4-9a78-dd0af3e0db87",
"parentId": "5a1b5861-fadc-480e-b03b-11bc034df8b9",
"rootParentId": "6c9cb1d6-3efb-4bac-b768-ae3265faa7b6",
Make sure to adjust rootParentId to the aforementioned id of the root intent, leave parentId intact, and you're all set. I didn't try it with a follow-up/follow-up/follow-up structure, but I'd say it will follow the same pattern.
I have a DialogFlow intent follow-up that I'm having a hard time with. It's the only follow-up to my main intent, and the issue I'm having is that when the incidents.data array is empty, it doesn't trigger the conv.ask statement in the else case, which causes DialogFlow to throw an empty speech response error. The code looks something like this:
app.intent('metro_timetable - yes', async (conv: any) => {
  const incidents = await serviceIncidents.getIncidents();
  if (incidents.data.length > 0) {
    conv.ask('I have incidents');
  } else {
    conv.ask(
      `I wasn't able to understand your request, could you please say that again?`
    );
  }
});
incidents.data gets stored in the global scope and is set deep within the metro_timetable intent; it stores an incident for the follow-up. Because all yes responses trigger the follow-up, I set up an else case so it catches the situation where someone says yes after metro_timetable didn't understand their original request and asked them to repeat it. If incidents.data actually has information to share, the dialog triggers correctly and "I have incidents" is correctly read to the user.
In DialogFlow it looks something like this. Where am I going wrong here?
Your description of how incidents.data actually gets set is a little convoluted, but it sounds possible that instead of being set to an empty array, it isn't set at all. In that case, I suspect the following happened:
- incidents.data would be undefined
- Trying to evaluate incidents.data.length would throw an error
- Since the program crashes, your webhook doesn't return a result, and since you probably didn't set a result in the UI for the intent, an empty result was returned.
You can probably solve this by doing a test such as (for example)
incidents && incidents.data && incidents.data.length > 0
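Applied to the handler from the question, a defensive version might look something like this (the guard is the only change; serviceIncidents is the asker's own service):

app.intent('metro_timetable - yes', async (conv: any) => {
  const incidents = await serviceIncidents.getIncidents();

  // Guard against incidents or incidents.data being undefined,
  // not just an empty array.
  if (incidents && incidents.data && incidents.data.length > 0) {
    conv.ask('I have incidents');
  } else {
    conv.ask(
      `I wasn't able to understand your request, could you please say that again?`
    );
  }
});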
Your other issue, however, seems to be that you have a Followup Intent set for a scenario where you don't actually want that as the followup. This is one of the reasons you probably shouldn't use Followup Intents but, instead, only set a context when you send a response where that context would make sense, and look for the "Yes" response in the context you define. Then, when metro_timetable doesn't understand the request, you don't set the context and you give an error.
To do this, you would remove the automatically generated metro_timetable-followup context from the two Intents. You'll create your own context, which I'll name timetable for purposes of this example.
In the fulfillment for the metro_timetable Intent, if you respond with something that needs confirmation (ie - when "yes" will be something the user says), you would set the timetable context with something like
conv.contexts.set('timetable',2);
conv.ask('Are you sure?');
You can then create an Intent that checks for timetable as the Incoming Context and has training phrases that are equivalent to "yes". In that Intent, you'd do what you need to and respond.
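As a rough sketch of that last step, the fulfillment for such an Intent could look like the following; the Intent name and the response text here are purely illustrative:

// Hypothetical handler for a custom "yes" Intent whose input context is 'timetable'.
app.intent('metro_timetable - confirm yes', (conv: any) => {
  // Because 'timetable' is only set when there is something to confirm,
  // reaching this handler means a "yes" answer is actually expected.
  conv.ask('I have incidents');
});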
Hi, I have made an entity called answers in Dialogflow; this entity contains all the answers to the questions in my quiz game.
I get the questions from my database and then check to see if the given answer is correct.
app.intent('answer-question', (conv, { answer }) => {
  if (answer == conv.data.answers[0]) {
    // stuff
  } else {
    conv.close('you lose');
  }
});
However, this function only works when the user gets the answer correct. If the user answers the question incorrectly, then I get the following error:
"Question Master isn't responding right now. Try again soon."
MalformedResponse
'final_response' must be set.
So my question is: how can I cater for the infinite selection of wrong answers a user might give?
Cheers!
You should handle that in a fallback intent. A new Dialogflow agent comes with a default: https://dialogflow.com/docs/intents/default-intents#default_fallback_intent
You should also consider using contexts, so the fallback intent knows that you are expecting an answer and can provide a different response when an answer isn't expected.
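As a sketch of how that could look in the webhook, assuming a fallback intent named answer-question - fallback and an expecting-answer context, both of which are made-up names for this example:

// Hypothetical fallback handler for answers that don't match the answers entity.
app.intent('answer-question - fallback', (conv) => {
  const expecting = conv.contexts.get('expecting-answer');
  if (expecting) {
    // A question is pending, so any unmatched input counts as a wrong answer.
    conv.close('you lose');
  } else {
    conv.ask(`Sorry, I didn't get that. Could you say it again?`);
  }
});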
API.ai's prebuilt packages allow you to easily get long lists of intents. Currently I'm trying to make use of their smalltalk package, which has about 100 intents and a response for each.
I am making use of the api-ai-recognizer package to listen for intents. That works well, but now I have to match those intents, so that I can define the dialog (which is nothing more than using the fulfillment). And this is where I am having trouble.
intents = IntentDialog({recognizers: [apiairecognizer(CLIENT_TOKEN)]})
intents.matches('smalltalk', smalltalk_handler) // No luck
intents.matches(/smalltalk/, smalltalk_handler) // No luck
intents.onDefault(default_handler)
In the default_handler I capture the args:
{"score":1,
"intent":"smalltalk.greetings.how_are_you",
"entities": [
{
"entity":"Lovely, thanks.",
"type":"fulfillment",
"startIndex":-1,
"endIndex":-1,
"score":1
},
{
"entity":false,
"type":"actionIncomplete",
"startIndex":-1,
"endIndex":-1,
"score":1
}
]}
This makes sense according to the documentation of how matches works.
But that does mean that I don't know how to actually use the full list of intents without explicitly copying in every single intent.
Just to clarify, if I use the exact intent:
intents.matches('smalltalk.greetings.how_are_you', smalltalk_handler)
I receive the nice response: Lovely, thanks.
Any suggestions?
So far, the only thing I have come up with is to modify the api-ai-recognizer such that it returns only smalltalk as the intent whenever it encounters any version of it. That way the intent dialog only needs to recognize a single intent, and since they are all handled in the same way, it doesn't matter at this point.
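An alternative that avoids patching the recognizer is to do the prefix check yourself inside onDefault and forward anything starting with smalltalk. to the shared handler. A rough sketch, assuming the botbuilder v3 IntentDialog setup from the question:

// Rough sketch: route any smalltalk.* intent from inside onDefault,
// instead of registering ~100 individual matches() calls.
intents.onDefault((session, args) => {
  if (args.intent && args.intent.indexOf('smalltalk.') === 0) {
    smalltalk_handler(session, args); // same handler used for individual smalltalk intents
  } else {
    default_handler(session, args);
  }
});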