Adding multiple dialogs to Microsoft botframework dialog stack - azure

According to Microsoft's Botframework Documentation here, by using triggerAction with onSelectAction, you can add dialogs to the top of the stack if a user's utterance includes a matched phrase.
However, if the user's utterance includes TWO matched phrases, how can you add multiple dialogs to the stack?
For example, if a user said...
I want a burger and fries
I would like to add the burgers dialog and the fries dialog to the stack, so we can ask questions about both of them.
I've tried something like this:
bot.dialog('burgers', require('./burgers'))
    .triggerAction({
        matches: [/burger/i],
        onSelectAction: (session, args, next) => {
            session.beginDialog(args.action, args);
        }
    });

bot.dialog('fries', require('./fries'))
    .triggerAction({
        matches: [/fries/i],
        onSelectAction: (session, args, next) => {
            session.beginDialog(args.action, args);
        }
    });
Here's an example of the burgers dialog (the fries dialog is the same):
var builder = require('botbuilder');
var Store = require('./store');

module.exports = [
    // Destination
    function (session) {
        session.send('Burger dialog test');
        builder.Prompts.text(session, 'I am just testing the burger dialog');
    },
    function (session, results, next) {
        session.send('Now we should go to the next dialog in the stack', results.response);
        session.endDialog();
    },
];
However, only one of the dialogs gets invoked... and then it's game over!
Any help is appreciated!

As you've found, only one dialog will be triggered at a time. As a workaround, you can trigger a single dialog first and then analyze the user input inside it to call the different child dialogs.
For example:
bot.dialog('addOrder', (session, args) => {
    var text = session.message.text;
    var found = text.match(/burger/i);
    if (found != null) {
        session.beginDialog('burger');
    }
    found = text.match(/fries/i);
    if (found != null) {
        session.beginDialog('fries');
    }
}).triggerAction({
    matches: [/burger/i, /fries/i]
});
bot.dialog('burger', (session) => {
    session.send("burgers");
    // logic of 'burger' dialog
    session.endDialog();
});

bot.dialog('fries', (session) => {
    session.send("fries!");
    // logic of 'fries' dialog
    session.endDialog();
});
As you can see here, we can use a regular expression array to trigger the addOrder dialog first and then call other dialogs inside this addOrder dialog.
Or you may train a LUIS app and use it in your bot like this:
const LuisModelUrl = 'YOUR-BOT-ENDPOINT';

var recognizer = new builder.LuisRecognizer(LuisModelUrl);
var intents = new builder.IntentDialog({ recognizers: [recognizer] })
    .matches('MyOrder', (session, args) => {
        var entities = args.entities;
        // handle entities
    });

bot.dialog('/', intents);
I created an intent named MyOrder and two entities named MyOrder.Burgers and MyOrder.Fries.
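As a rough sketch (not from the original answer), the MyOrder handler above could then route to the existing burger and fries dialogs based on which entities LUIS returned; the entity names here assume the MyOrder.Burgers and MyOrder.Fries entities described above:

var intents = new builder.IntentDialog({ recognizers: [recognizer] })
    .matches('MyOrder', (session, args) => {
        // Look up the custom entities LUIS recognized (entity names are assumptions).
        var burgers = builder.EntityRecognizer.findEntity(args.entities, 'MyOrder.Burgers');
        var fries = builder.EntityRecognizer.findEntity(args.entities, 'MyOrder.Fries');
        if (burgers) {
            session.beginDialog('burger');
        }
        if (fries) {
            session.beginDialog('fries');
        }
    });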

Related

Move data in Waterfall-Dialog. Bot Framework SDK

I'm using the Bot Framework SDK with Node.js to implement a disambiguation flow.
I want that, if two intents predicted by LUIS are close to each other, the bot asks the user which of them is the one they want. I have done the validator, but I have a problem with the flow.
It is a waterfall dialog with 3 steps:
First step: Calls Orchestrator and LUIS to get intents and entities. It passes the data along with return await step.next({...}).
Disambiguation step: Checks whether disambiguation is necessary and, in that case, prompts the options. If not, it passes the data along like the first step.
Answer step: If the data it receives in step.result has a disambiguation flag, it prompts the answer according to the user's response. Otherwise, it uses the data in step.result that comes from the first step.
The problem is that, when the user is prompted to pick the intent, I lose the data from the first step since I cannot use step.next({...}).
How can I maintain both the data from the first step and the user's answer in the prompt?
Here is the basic code:
async firstStep(step) {
    logger.info(`FinalAnswer Dialog: firstStep`);
    let model_dispatch = await this.bot.get_intent_dispatch(step.context);
    let result = await this.bot.dispatchToTopIntentAsync(step.context, model_dispatch.model);
    // model_dispatch = orchestrator_model
    // result = {topIntent: String, entities: Array, disambiguation: Array}
    return await step.next({ model_dispatch: model_dispatch, result: result });
}

async disambiguationStep(step) {
    logger.info(`FinalAnswer Dialog: disambiguationStep`);
    if (step.result.result.disambiguation) {
        logger.info("We need to disambiguate");
        let disambiguation_options = step.result.result.disambiguation;
        const message_text = "What do you need";
        const data = [
            {
                "title": "TEXT",
                "value": disambiguation_options[0]
            },
            {
                "title": "TEXT",
                "value": disambiguation_options[1]
            }
        ];
        let buttons = data.map(function (d) {
            return {
                type: ActionTypes.PostBack,
                title: d.title,
                value: d.value
            };
        });
        const msg = MessageFactory.suggestedActions(buttons, message_text);
        return await step.prompt(TEXT_PROMPT, { prompt: msg });
        return step.next(step.result); // not working
    } else {
        logger.info("We don't disambiguate");
        return step.next(step.result);
    }
}

async answerStep(step) {
    logger.info(`FinalAnswer Dialog: answerStep`);
    let model_dispatch = step.result.model_dispatch;
    let result = step.result.result;
    // Show answer
    return await step.endDialog();
}
You can use the step dictionary to store your values. The complex dialogs sample on GitHub is excellent for demonstrating this. https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/javascript_nodejs/43.complex-dialog/dialogs/topLevelDialog.js
You can save data in the context with whatever name you want:
step.values['nameProperty'] = {}
This will be accessible within the entire execution context of the waterfall dialog:
const data = step.values['nameProperty'] // {}
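Applied to the waterfall in the question, a minimal sketch (assuming the same step names, the existing TEXT_PROMPT, and the asker's own bot helpers) could stash the first step's result in step.values so it survives the prompt in the disambiguation step:

async firstStep(step) {
    const model_dispatch = await this.bot.get_intent_dispatch(step.context);
    const result = await this.bot.dispatchToTopIntentAsync(step.context, model_dispatch.model);
    // Keep the data on step.values instead of threading it through step.next().
    step.values.dispatchData = { model_dispatch, result };
    return await step.next();
}

async disambiguationStep(step) {
    if (step.values.dispatchData.result.disambiguation) {
        // Prompt the user; dispatchData stays available in answerStep.
        return await step.prompt(TEXT_PROMPT, { prompt: 'What do you need?' });
    }
    return await step.next();
}

async answerStep(step) {
    const { model_dispatch, result } = step.values.dispatchData;
    const userChoice = step.result; // the prompt answer, when disambiguation happened
    // ...build the answer from result and userChoice...
    return await step.endDialog();
}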

How to find ActiveDialog (waterfall-step) in context after replacedialog in waterfall dialog

Contextual help in prompts
I need to implement contextual help for a chatbot. My strategy is to use the active prompt as an index into a table of help text lines. I am struggling with finding the active prompt after a stepContext.replaceDialog() in a waterfall dialog.
I will use the Complex Dialog sample as an example.
In ReviewSelectionDialog below there is a prompt called CHOICE_PROMPT. This is the prompt to which I would like to add contextual help. If the user enters 'help', the help text for that prompt should be shown.
In the same dialog there is a loopStep. Based on a user decision, the dialog is repeated (looped) via the replaceDialog() method.
ReviewSelectionDialog extends CancelAndHelpDialog. As a result I am able to check for and act on user interrupts like 'help'.
In CancelAndHelpDialog I need the active prompt when 'help' was entered by the user, so I am able to show relevant help (CHOICE_PROMPT in this example).
My question
In the first pass of ReviewSelectionDialog, after sending 'help', I am able to get the active prompt in CancelAndHelpDialog via innerDc.activeDialog.id. But after the stepContext.replaceDialog() in loopStep and sending 'help' again in the CHOICE_PROMPT, innerDc.activeDialog.id shows REVIEW_SELECTION_DIALOG. Where do I find the active prompt after a replaceDialog()?
ReviewSelectionDialog
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

const { ChoicePrompt, WaterfallDialog } = require('botbuilder-dialogs');
const REVIEW_SELECTION_DIALOG = 'REVIEW_SELECTION_DIALOG';
const { CancelAndHelpDialog } = require('./cancelAndHelpDialog');

const CHOICE_PROMPT = 'CHOICE_PROMPT';
const WATERFALL_DIALOG = 'WATERFALL_DIALOG';

class ReviewSelectionDialog extends CancelAndHelpDialog {
    constructor() {
        super(REVIEW_SELECTION_DIALOG);

        // Define a "done" response for the company selection prompt.
        this.doneOption = 'done';

        // Define value names for values tracked inside the dialogs.
        this.companiesSelected = 'value-companiesSelected';

        // Define the company choices for the company selection prompt.
        this.companyOptions = ['Adatum Corporation', 'Contoso Suites', 'Graphic Design Institute', 'Wide World Importers'];

        this.addDialog(new ChoicePrompt(CHOICE_PROMPT));
        this.addDialog(new WaterfallDialog(WATERFALL_DIALOG, [
            this.selectionStep.bind(this),
            this.loopStep.bind(this)
        ]));

        this.initialDialogId = WATERFALL_DIALOG;
    }

    async selectionStep(stepContext) {
        // Continue using the same selection list, if any, from the previous iteration of this dialog.
        const list = Array.isArray(stepContext.options) ? stepContext.options : [];
        stepContext.values[this.companiesSelected] = list;

        // Create a prompt message.
        let message = '';
        if (list.length === 0) {
            message = `Please choose a company to review, or \`${ this.doneOption }\` to finish.`;
        } else {
            message = `You have selected **${ list[0] }**. You can review an additional company, or choose \`${ this.doneOption }\` to finish.`;
        }

        // Create the list of options to choose from.
        const options = list.length > 0
            ? this.companyOptions.filter(function(item) { return item !== list[0]; })
            : this.companyOptions.slice();
        options.push(this.doneOption);

        // Prompt the user for a choice.
        return await stepContext.prompt(CHOICE_PROMPT, {
            prompt: message,
            retryPrompt: 'Please choose an option from the list.',
            choices: options
        });
    }

    async loopStep(stepContext) {
        // Retrieve their selection list, the choice they made, and whether they chose to finish.
        const list = stepContext.values[this.companiesSelected];
        const choice = stepContext.result;
        const done = choice.value === this.doneOption;

        if (!done) {
            // If they chose a company, add it to the list.
            list.push(choice.value);
        }

        if (done || list.length > 1) {
            // If they're done, exit and return their list.
            return await stepContext.endDialog(list);
        } else {
            // Otherwise, repeat this dialog, passing in the list from this iteration.
            return await stepContext.replaceDialog(REVIEW_SELECTION_DIALOG, list);
        }
    }
}

module.exports.ReviewSelectionDialog = ReviewSelectionDialog;
module.exports.REVIEW_SELECTION_DIALOG = REVIEW_SELECTION_DIALOG;
CancelAndHelpDialog
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

const { InputHints } = require('botbuilder');
const { ComponentDialog, DialogTurnStatus } = require('botbuilder-dialogs');

/**
 * This base class watches for common phrases like "help" and "cancel" and takes action on them
 * BEFORE they reach the normal bot logic.
 */
class CancelAndHelpDialog extends ComponentDialog {
    async onContinueDialog(innerDc) {
        const result = await this.interrupt(innerDc);
        if (result) {
            return result;
        }
        return await super.onContinueDialog(innerDc);
    }

    async interrupt(innerDc) {
        if (innerDc.context.activity.text) {
            const text = innerDc.context.activity.text.toLowerCase();

            switch (text) {
            case 'help':
            case '?': {
                const helpMessageText = 'Show help about prompt: ' + innerDc.activeDialog.id;
                await innerDc.context.sendActivity(helpMessageText, helpMessageText, InputHints.ExpectingInput);
                return { status: DialogTurnStatus.waiting };
            }
            case 'cancel':
            case 'quit': {
                const cancelMessageText = 'Cancelling...';
                await innerDc.context.sendActivity(cancelMessageText, cancelMessageText, InputHints.IgnoringInput);
                return await innerDc.cancelAllDialogs();
            }
            }
        }
    }
}

module.exports.CancelAndHelpDialog = CancelAndHelpDialog;
I want to thank you for using the sample code because you've actually revealed a bug that I've reported here: https://github.com/microsoft/BotBuilder-Samples/issues/2457
The underlying problem here is that the dialogs library has two ways of stacking dialogs. Ordinarily, one dialog gets stacked on top of another dialog like this:
[ CHOICE_PROMPT ]
[ WATERFALL_DIALOG ]
However, component dialogs form a nested dialog stack that stacks inward rather than further upward:
[ REVIEW_SELECTION_DIALOG ]
[ TOP_LEVEL_DIALOG ]
[ MAIN_DIALOG ]
Since not all dialogs are component dialogs, the two ways combine to look like this:
[ CHOICE_PROMPT ]
[ WATERFALL_DIALOG ]
[ REVIEW_SELECTION_DIALOG ]
[ TOP_LEVEL_DIALOG ]
[ MAIN_DIALOG ]
I want to note that the order of this stack is not necessarily what you'd expect if you're used to writing hierarchical lists that look like this (with the most recently added item on the bottom):
MAIN_DIALOG
TOP_LEVEL_DIALOG
REVIEW_SELECTION_DIALOG
WATERFALL_DIALOG
CHOICE_PROMPT
Some people might not consider the second way of stacking actual stacking, since it's a parent-child relationship and not a stack. The reason I'm calling it a second way of stacking here is because of the conceptual similarity to a dialog stack. When you design your bot's dialogs, you have a choice about whether you want each new dialog to be added on top of the existing dialog stack or be nested as a child in an inner dialog stack. The two ways behave similarly because a component dialog ends when its last child dialog ends, so when you pop a dialog off of a stack the stack unravels outwards in much the same way as it unravels downwards. (Remember that new dialogs get added to the top of the stack so "downwards" here means from newer dialogs back to older dialogs, like the stack diagrams I started with.)
The "active dialog" is the dialog at the top of the stack. Since each component dialog has its own dialog set and dialog state and dialog stack and dialog context, each component dialog has a different idea of what the active dialog is. Because the active dialog is defined in terms of a specific dialog stack, when there are multiple dialog stacks the active dialog depends on who you ask.
This didn't cause a problem for you when you were looking for the active dialog in the innermost component dialog. But then you replaced that component dialog's child with the component dialog itself. After that, your (full) stack looked like this:
[ CHOICE_PROMPT ]
[ WATERFALL_DIALOG ]
[ REVIEW_SELECTION_DIALOG ]
[ REVIEW_SELECTION_DIALOG ]
[ TOP_LEVEL_DIALOG ]
[ MAIN_DIALOG ]
When your CancelAndHelpDialog tried to access the active dialog of its inner dialog context, it correctly returned a ReviewSelectionDialog because that was the only dialog on its stack. You wanted to return the choice prompt but that choice prompt was in the dialog stack of the child ReviewSelectionDialog and not the parent ReviewSelectionDialog.
The bug is that you should be replacing the waterfall dialog with itself rather than with the parent component dialog. So it could look like this:
return await stepContext.replaceDialog(WATERFALL_DIALOG, list);
Or like this:
return await stepContext.replaceDialog(this.initialDialogId, list);
Ultimately, this still hasn't answered a question that you may have meant to ask. Since you've seen that problems can arise when you get the active dialog in an intermediate dialog context, you may want a way to get the "real" innermost active dialog. This can be accomplished with some simple recursion:
function getInnermostActiveDialog(dc) {
    const child = dc.child;
    return child ? getInnermostActiveDialog(child) : dc.activeDialog;
}
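With that helper, the interrupt handler in CancelAndHelpDialog could, for example, report the innermost prompt instead of its own active dialog (a sketch, not part of the sample):

case 'help':
case '?': {
    // Walk down to the innermost dialog context before reading the active dialog id.
    const activeId = getInnermostActiveDialog(innerDc).id;
    const helpMessageText = 'Show help about prompt: ' + activeId;
    await innerDc.context.sendActivity(helpMessageText, helpMessageText, InputHints.ExpectingInput);
    return { status: DialogTurnStatus.waiting };
}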

dialogflow with node.js how do you switch intents

I have an intent that has a required parameter and a fulfilment. When you answer, it takes you to my Node.js application, which currently looks like this:
app.intent(QUESTIONS_INTENT, (conv, params) => {
    let data = conv.data;
    let categoryId = data.categoryId;
    let formulas = data.formulas;
    let options = {
        'method': 'PUT',
        'url': apiUrl + 'questions/filter',
        'body': {
            categoryId: categoryId,
            organisationId: organisation,
            formulas: formulas
        },
        'json': true
    };
    return request(options).then(response => {
        // We need to change how we get these
        var questionText = params.questions;
        var questions = response.filter(item => item.text === questionText);
        data.questions = questions;
        conv.ask(questions[0].text);
        conv.contexts.set('colour');
    }, error => {
        conv.ask(JSON.stringify(error));
    });
});
Currently it gets the questionText and finds the question that matches what you said. What I want it to do is swap to a new intent. I have tried conv.contexts.set('colour'), but that doesn't appear to work.
I have an intent set up with the input context "Colour", so I would expect that when my fulfilment completes it should swap to that intent, but it doesn't.
Can someone help me with this?
You need to make sure that you have a colour intent set up in Dialogflow that will handle the input from the user and respond to them. The question should be asked after you set the context.
return request(options).then(response => {
    // We need to change how we get these
    var questionText = params.questions;
    var questions = response.filter(item => item.text === questionText);
    data.questions = questions;
    // Set the context to colour.
    conv.contexts.set('colour', 1);
    // Now that the context is in place, you'll be able to ask the question.
    conv.ask(questions[0].text);
}, error => {
    conv.ask(JSON.stringify(error));
});
You'll also want to provide a lifespan after the context arg. This will make the context active for that many prompts; in the example above I set it to one, so the context will only be active for that specific response.
conv.contexts.set('colour', 1);
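For completeness, a rough sketch of the handler that would pick up the conversation while the colour context is active; COLOUR_INTENT and the colour parameter are placeholders for whatever your Dialogflow intent actually defines:

app.intent(COLOUR_INTENT, (conv, params) => {
    // This intent declares 'colour' as an input context in Dialogflow,
    // so it only matches while that context is still alive.
    const answer = params.colour; // placeholder parameter name
    conv.ask(`You picked ${answer}.`);
});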

Botframework Prompt dialogs until user finishes

I'm creating a chat bot for slack using Microsoft's botbuilder and LUIS.
Is there a way to keep using builder.Prompts.text() to keep asking the user whether there is any more information they want to add, like a for or while loop? For example, I want to keep asking the user, an undefined number of times, whether there is a key they want to save, and only stop when the user types 'done'; I would then have an equal number of builder.Prompts.text() calls to ask the user for the values to put in each of those keys.
function (session, results, next) {
    builder.Prompts.text(session, "Another key to put?");
},
function (session, results, next) {
    builder.Prompts.text(session, "Value to put?");
}
It doesn't seem like I can create some sort of loop with an array that saves each key with its value; I'm not sure how to approach this.
Thanks.
What you're looking for is session.replaceDialog(); there is an example labeled 'basics-loops' on the GitHub repo for the SDK. To loop through prompts, one has to create a small dialog with the desired prompts and have the dialog restart automatically via session.replaceDialog() or session.beginDialog().
I've built a chatbot that receives key-value pairs in the scenario you specified above. The code excerpt below is the final step in my 'Loop' dialog.
function (session, results) {
    var value = results.response ? results.response : null,
        key = session.dialogData.key;
    var pairs = session.userData.kVPairs;
    var newPair = {};
    newPair[key] = value;
    if (key && value) {
        session.userData.kVPairs.push(newPair);
        console.log(pairs[pairs.length - 1]);
    }
    session.send('latest key-value pair added, { %s : %s }', key, value);
    session.replaceDialog('Loop');
}
session.replaceDialog('Loop') is incorporated at the end of this waterfall step and takes the Id of the new dialog. The method can also take optional arguments to pass to the new dialog.
Note: While not applicable here, the difference between replaceDialog and beginDialog/endDialog is semi-obvious: when you use beginDialog, the new dialog is added to the stack, and when you end that child dialog you are returned to the original/parent dialog. replaceDialog will end the current dialog and begin the new one.
You may use replaceDialog to loop the user:
bot.dialog("/getUserKeys", [
function (session, args, next) {
session.dialogData.keys = args && args.keys ? args.keys : [];
builder.Prompts.text(session, "Another key to put?");
},
function (session, results, next) {
if (results.response === "none") {
session.endDialogWithResult({response: { keys: session.DialogData.keys }});
return;
}
session.dialogData.keys[session.dialogData.keys.length] = results.response;
session.replaceDialog("/getUserKeys", { keys: session.DialogData.keys });
}
]);
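If it helps, here is a sketch of how a parent waterfall (the '/order' name is just a placeholder) might start this dialog and pick up the collected keys from endDialogWithResult:

bot.dialog('/order', [
    function (session) {
        // Start the looping dialog with an empty key list.
        session.beginDialog('/getUserKeys', { keys: [] });
    },
    function (session, results) {
        // results.response.keys holds everything collected before the user typed "none".
        var keys = results.response.keys;
        session.send('You entered %d key(s).', keys.length);
        session.endDialog();
    }
]);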

how to stop bot from moving forward unless entity is resolved

var intent = args.intent;
var number = builder.EntityRecognizer.findEntity(intent.entities, 'builtin.numer');
When I use findEntity, it moves forward whether the answer is correct or not. How can I use entity resolution on entities that are not built-in entities?
var location1 = builder.EntityRecognizer.findEntity(intent.entities, 'Location');
var time = builder.EntityRecognizer.resolveTime(intent.entities);
When I use resolveTime, it asks again and again until the entity is resolved.
var alarm = session.dialogData.alarm = {
    number: number ? number.entity : null,
    timestamp: time ? time.getTime() : null,
    location1: location1 ? location1.entity : null
};

/* if (!number & !location1 time)
{} */

// Prompt for number
if (!alarm.number) {
    builder.Prompts.text(session, 'how many people you are');
} else {
    next();
}
},
function (session, results, next) {
    var alarm = session.dialogData.alarm;
    if (results.response) {
        alarm.number = results.response;
    }
I believe I've already answered this question on StackOverflow: "Botframework Prompt dialogs until user finishes".
You'll need to create a mini-dialog, that will have at least two waterfall steps. Your first step will take any args and check/set them as the potential value your chatbot is waiting for. It'll prompt the user to verify that these are the correct values. If no args were passed in, or the data was not valid, the user will be prompted to supply the value the chatbot is waiting for.
The second step will take the user's response to the first step and either set the value into a session data object (like session.userData or session.conversationData) or restart the dialog using session.replaceDialog() or session.beginDialog().
In your main dialog you'll modify the step where you employ your EntityRecognizers to include an if-statement that begins your mini-dialog. To trigger the if-statement, you could use the same design as shown in this GitHub example or in your code. This code might look like below:
var location1 = builder.EntityRecognizer.findEntity(intent.entities, 'Location');
session.userData.location1 = location1 ? location1.entity : null;

if (!session.userData.location1) {
    session.beginDialog('<get-location-dialog>');
}
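A minimal sketch of what that mini-dialog might look like, following the two-step pattern described above (the dialog name and prompt text are placeholders):

bot.dialog('<get-location-dialog>', [
    function (session) {
        // Step 1: prompt for the missing value.
        builder.Prompts.text(session, 'Which location would you like?');
    },
    function (session, results) {
        // Step 2: store the answer, or restart the dialog if it is still missing.
        if (results.response) {
            session.userData.location1 = results.response;
            session.endDialog();
        } else {
            session.replaceDialog('<get-location-dialog>');
        }
    }
]);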
