How do I add an image of the bot with some welcome text in the middle of Microsoft Bot Framework Web Chat? It seems like quite common functionality, and I have seen screenshots which indicate that it is possible.
Does anyone know how to add it?
You can use the code below, replacing the image path with your own, to send a response from the bot to the user that includes both text and an image.
await context.PostAsync("Here we go with the welcome message\n"+"![AN IMAGE!](Your_Image_URL)");
Another way is to use the Card functionality:
private async Task Greeting(IDialogContext context, IAwaitable<IMessageActivity> argument)
{
    var message = await argument;
    if (string.IsNullOrEmpty(message.Text))
    {
        // Hero Card
        var cardMsg = context.MakeMessage();
        var attachment = BotWelcomeCard("Hello,I am a bot.", "");
        cardMsg.Attachments.Add(attachment);
        await context.PostAsync(cardMsg);
    }
    else
    {
        // else code
    }
}

private static Attachment BotWelcomeCard(string responseFromQNAMaker, string userQuery)
{
    var heroCard = new HeroCard
    {
        Title = userQuery,
        Subtitle = "",
        Text = responseFromQNAMaker,
        Images = new List<CardImage> { new CardImage("../img/bot.gif") },
        Buttons = new List<CardAction> { new CardAction(ActionTypes.ImBack, "Show Menu", value: "Show Bot Menu") }
    };

    return heroCard.ToAttachment();
}
OK, here is what we ended up doing:
<script>
    $(document).ready(function () {
        $(".wc-header").append("<div class='wc-header-welcome'><img src='/Images/bot.png'/><div>Hello! I am your bot</div></div>");
    });
</script>
Hope this saves someone else some time.
I need to send QnA users' questions and answers to my Azure bot's Application Insights using telemetry. I have already tried this tutorial:
https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-telemetry?view=azure-bot-service-4.0&tabs=javascript
And these SO posts:
How to get the Qna Maker "Q" from Analytics Application Insights?
How can I save some custom qna maker data in azure app insights?
The thing is, the first one is done for LUIS and adds no additional info to Insights, and nothing for QnA; the other ones are written for C#.
I need to send the question and answer to the customEvents logs in Azure Insights using Node.js, but I can't find out how. Any help?
Thanks in advance.
///// EDIT:
Here's what I have so far (I have only posted the code related to the telemetry; the QnA part is already working fine):
index.js
const { ApplicationInsightsTelemetryClient, TelemetryInitializerMiddleware } = require('botbuilder-applicationinsights');
const { TelemetryLoggerMiddleware, NullTelemetryClient } = require('botbuilder-core');

function getTelemetryClient(instrumentationKey) {
    if (instrumentationKey) {
        return new ApplicationInsightsTelemetryClient(instrumentationKey);
    }
    return new NullTelemetryClient();
}

const server = restify.createServer();
server.use(restify.plugins.bodyParser());

var telemetryClient = getTelemetryClient(process.env.InstrumentationKey);
var telemetryLoggerMiddleware = new TelemetryLoggerMiddleware(telemetryClient);
var initializerMiddleware = new TelemetryInitializerMiddleware(telemetryLoggerMiddleware);
adapter.use(initializerMiddleware);

const mybot = new MYBOT(conversationState, userState, telemetryClient);
mybot.js
class MYBOT extends ActivityHandler {
    constructor(conversationState, userState, telemetryClient) {
        super();
        this.conversationState = conversationState;
        this.userState = userState;
        this.telemetryClient = telemetryClient;
    }
}

// This is how I get my QnA result:
console.log(this.telemetryClient);
var result = await this.qnaMaker.getAnswers(context);
As you can see, I pass the telemetryClient to the bot file, and if I console.log that item I get the complete telemetry object, but how do I pass it the user question and answer so they get saved to the Insights customEvents?
I found a way to do it, in case anyone looking for one of the possible solutions for Node needs it:
Basically, we use the same telemetry setup described in the official documentation for instantiating telemetry in index.js:
const { ApplicationInsightsTelemetryClient, TelemetryInitializerMiddleware } = require('botbuilder-applicationinsights');
const { TelemetryLoggerMiddleware, NullTelemetryClient } = require('botbuilder-core');

function getTelemetryClient(instrumentationKey) {
    if (instrumentationKey) {
        return new ApplicationInsightsTelemetryClient(instrumentationKey);
    }
    return new NullTelemetryClient();
}

const server = restify.createServer();
server.use(restify.plugins.bodyParser());

var telemetryClient = getTelemetryClient(process.env.InstrumentationKey);
var telemetryLoggerMiddleware = new TelemetryLoggerMiddleware(telemetryClient);
var initializerMiddleware = new TelemetryInitializerMiddleware(telemetryLoggerMiddleware);
adapter.use(initializerMiddleware);

const mybot = new MYBOT(conversationState, userState, telemetryClient);
Then, we pass it to the bot file (bot.js or whichever one you're using):
class MYBOT extends ActivityHandler {
    constructor(conversationState, userState, telemetryClient) {
        super();
        this.conversationState = conversationState;
        this.userState = userState;
        this.telemetryClient = telemetryClient;
    }
}
Later in the code, you can use the telemetryClient.trackEvent method (the official docs only show C#). Basically, it lets you log a custom event at the specific points in your code you want to track, for example when your bot hits an error or doesn't find an answer for the user. Following on from the previous lines, the code would look like this:
this.telemetryClient.trackEvent({
    name: "myEvent",
    properties: {
        my_user_question: 'Context activity text here or your captured question',
        my_bot_answer: 'bot reply or whatever'
    }
}); // name and properties are part of the syntax; set the values inside the properties object as you need.
That way, in the customEvents table in Azure Insights you will see records captured with the event name you used, with the properties object stored as a dictionary in the customDimensions field.
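For example, a minimal sketch of wiring it to the QnA call shown earlier (the event and property names here are only illustrative; pick whatever you want to query in customEvents):
// Inside the bot's message handler, right after querying QnA Maker.
var result = await this.qnaMaker.getAnswers(context);
if (result.length > 0) {
    this.telemetryClient.trackEvent({
        name: 'QnAUserQuestion', // illustrative event name
        properties: {
            my_user_question: context.activity.text, // what the user typed
            my_bot_answer: result[0].answer          // top answer returned by QnA Maker
        }
    });
    await context.sendActivity(result[0].answer);
}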
I am creating a chat bot using the Azure Bot Framework in Node.js.
I use QnA Maker to store question/answer pairs, plus one LUIS app.
Now I want to detect the end of a conversation (either by checking that there has been no reply for a long time, or by the webpage being refreshed) and add a feedback card at the end of the conversation.
You can achieve this by using the onEndDialog method together with a separate class that manages the feedback process.
First, I have a component dialog that imports the feedback.js file and calls the associated onTurn() method within onEndDialog.
Next, I create the mainDialog.js file, in which MainDialog extends FeedbackDialog. In this way, FeedbackDialog sits "on top" of MainDialog, listening for specific user inputs or activities. In this case, it is listening for endDialog() to be called. You will likely want to add additional validation to be sure it only fires when the endDialog() you want is called.
Lastly, in the feedback.js file, this is where your feedback code/logic lives. For simplicity, I'm using a community project, botbuilder-feedback, for generating a user feedback interface. The majority of the code is focused on creating and managing the "base" dialog. Additional dialog activity comes from within the botbuilder-feedback package.
For reference, this code is based partly on the 13.core-bot sample found in the Botbuilder-Samples repo.
Hope this helps!
feedbackDialog.js:
const { ComponentDialog } = require('botbuilder-dialogs');
const { Feedback } = require('./feedback');

class FeedbackDialog extends ComponentDialog {
    constructor() {
        super();
        this.feedback = new Feedback();
    }

    async onEndDialog(innerDc) {
        return await this.feedback.onTurn(innerDc);
    }
}

module.exports.FeedbackDialog = FeedbackDialog;
mainDialog.js:
const { FeedbackDialog } = require( './feedbackDialog' );
class MainDialog extends FeedbackDialog {
[...]
}
module.exports.MainDialog = MainDialog;
feedback.js:
const { ActivityTypes } = require('botbuilder');
const { DialogTurnStatus } = require('botbuilder-dialogs');
const Botbuilder_Feedback = require('botbuilder-feedback').Feedback;

class Feedback {
    async onTurn(turnContext, next) {
        if (turnContext.activity.type === ActivityTypes.Message) {
            await Botbuilder_Feedback.sendFeedbackActivity(turnContext, 'Please rate this dialog');
            return { 'status': DialogTurnStatus.waiting };
        } else {
            return { 'status': DialogTurnStatus.cancelled };
        }
        await next();
    }
}

module.exports.Feedback = Feedback;
I'm having difficulty figuring out what is most likely a simple issue, relating to an 'if then else' problem in my code (Node.js, Bot Framework v4).
I can't quite figure out why the relevant card isn't being shown depending on the number of semi-colons found in the response string from QnA Maker.
When testing with the Bot Framework Emulator, it only returns one response type, whether that's plain text or one rich card, no matter how many semi-colons are in the response.
I've tried to see if it's the length of the string it's having problems with, by parsing the number value in the length check. Sadly, that didn't make a difference. Notably, if I use any other conditional operator, '===' for example, it breaks the response completely.
const { ActivityTypes, CardFactory } = require('botbuilder');
const { WelcomeCard } = require('./dialogs/welcome');
// const { HeroCard } = require('./dialogs/welcome');
// const { VideoCard } = require('./dialogs/welcome');

class MyBot {
    /**
     * @param {TurnContext} on turn context object.
     */
    constructor(qnaServices) {
        this.qnaServices = qnaServices;
    }

    async onTurn(turnContext) {
        if (turnContext.activity.type === ActivityTypes.Message) {
            for (let i = 0; i < this.qnaServices.length; i++) {
                // Perform a call to the QnA Maker service to retrieve matching Question and Answer pairs.
                const qnaResults = await this.qnaServices[i].getAnswers(turnContext);
                const qnaCard = qnaResults.includes(';');
                // If an answer was received from QnA Maker, send the answer back to the user and exit.
                if (qnaCard.toString().split(';').length < 3) {
                    await turnContext.sendActivity(qnaResults[0].answer);
                    await turnContext.sendActivity({
                        text: 'Hero Card',
                        attachments: [CardFactory.heroCard(HeroCard)]
                    });
                } else if (qnaCard.toString().split(';').length > 3) {
                    await turnContext.sendActivity(qnaResults[0].answer);
                    await turnContext.sendActivity({
                        text: 'Video Card',
                        attachments: [CardFactory.videoCard(VideoCard)]
                    });
                } else if (qnaCard.toString().split(';').length === 0) {
                    await turnContext.sendActivity(qnaResults[0].answer);
                    return;
                }
            }
            // If no answers were returned from QnA Maker, reply with help.
            await turnContext.sendActivity('No QnA Maker answers were found.');
        } else {
            await turnContext.sendActivity(`[${ turnContext.activity.type } event detected]`);
        }
        if (turnContext.activity.type === ActivityTypes.ConversationUpdate) {
            // Handle the ConversationUpdate activity type, which is used to indicate new members added to
            // the conversation.
            // See https://aka.ms/about-bot-activity-message to learn more about the message and other activity types.
            // Do we have any new members added to the conversation?
            if (turnContext.activity.membersAdded.length !== 0) {
                // Iterate over all new members added to the conversation.
                for (var idx in turnContext.activity.membersAdded) {
                    // Greet anyone that was not the target (recipient) of this message.
                    // The 'bot' is the recipient for events from the channel;
                    // context.activity.membersAdded == context.activity.recipient.Id indicates the
                    // bot was added to the conversation.
                    if (turnContext.activity.membersAdded[idx].id !== turnContext.activity.recipient.id) {
                        // Welcome user.
                        // When activity type is "conversationUpdate" and the member joining the conversation is the bot,
                        // we will send our Welcome Adaptive Card. This will only be sent once, when the bot joins the conversation.
                        // To learn more about Adaptive Cards, see https://aka.ms/msbot-adaptivecards for more details.
                        const welcomeCard = CardFactory.adaptiveCard(WelcomeCard);
                        await turnContext.sendActivity({ attachments: [welcomeCard] });
                    }
                }
            }
        }
    }
}

module.exports.MyBot = MyBot;
Ideally, what I'm hoping to see is: if I ask a question whose response contains 3 semi-colons, it outputs a Hero Card; if it contains more than 3, a Video Card; and if it contains none, a plain text response.
I'm not a js specialist, but I'm quite confused by the following:
const qnaCard = qnaResults.includes(';');
In JavaScript, includes() does the following (source):
The includes() method determines whether an array includes a certain value among its entries, returning true or false as appropriate.
So here your qnaCard is true or false, but it looks like you are trying to use it as if it contained the answer text:
if (qnaCard.toString().split(';').length < 3) {
...
You have to work with the text of the answer itself: qnaResults[0].answer.
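For example, counting the semi-colons in the answer text could look something like this (a rough sketch only, reusing the HeroCard/VideoCard payloads from your commented-out imports and the thresholds from your description; not tested against your QnA data):
const qnaResults = await this.qnaServices[i].getAnswers(turnContext);
if (qnaResults.length > 0) {
    const answer = qnaResults[0].answer;
    // Count semi-colons in the answer string itself, not in the boolean returned by includes().
    const semiColonCount = answer.split(';').length - 1;
    await turnContext.sendActivity(answer);
    if (semiColonCount > 3) {
        await turnContext.sendActivity({ text: 'Video Card', attachments: [CardFactory.videoCard(VideoCard)] });
    } else if (semiColonCount === 3) {
        await turnContext.sendActivity({ text: 'Hero Card', attachments: [CardFactory.heroCard(HeroCard)] });
    }
    return; // an answer was found, so stop here
}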
Referring to this link: https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-recognize-intent-luis
I took this code section:
// CreateNote dialog
bot.dialog('CreateNote', [
    function (session, args, next) {
        // Resolve and store any Note.Title entity passed from LUIS.
        var intent = args.intent;
        var title = builder.EntityRecognizer.findEntity(intent.entities, 'Note.Title');
        var note = session.dialogData.note = {
            title: title ? title.entity : null,
        };
What I don't understand is: what does 'CreateNote' represent in this section?
And referring to this line:
var title = builder.EntityRecognizer.findEntity(intent.entities, 'Note.Title');
Assuming my intent name is calendar.add and my entity name is calendar.location,
will the resulting combination of names (calendar.add with calendar.location) create any confusion?
That's the internal identifier of the dialog, and can be referenced where necessary.
Regarding the second part, I don't think it will create confusion, but if you come back to this code two weeks later you will scratch your head wondering why it is named that way, so it's more of a naming-hygiene thing, in my opinion.
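For instance, plugging your own names into the same pattern from the docs would look like this (just a sketch, reusing the intent and entity names from your question):
// 'calendar.add' dialog, using your intent name as the dialog id (it could be any id).
bot.dialog('calendar.add', [
    function (session, args, next) {
        var intent = args.intent;
        // findEntity just takes the entity name as a plain string, dots and all.
        var location = builder.EntityRecognizer.findEntity(intent.entities, 'calendar.location');
        session.dialogData.location = location ? location.entity : null;
        // ...
    }
]);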
Taken from the official https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-manage-conversation-flow: 'CreateNote' is an identifier for your dialog, and it can be used like this:
var inMemoryStorage = new builder.MemoryBotStorage();

var bot = new builder.UniversalBot(connector,
    function (session) {
        session.send("Welcome");
        session.beginDialog('perroDialog'); // Use beginDialog with the
                                            // dialog identifier to start perroDialog
    }
).set('storage', inMemoryStorage); // Register in-memory storage

//-------------------------- DIALOGS WATERFALL ------------------------

bot.dialog('perroDialog',
    function (session) {
        session.send('You started perroDialog');
        session.endDialog(); // Back to / dialog (UniversalBot callback)
    });
I'm working on breaking my bot repo into 2 separate repos:
A repo to purely handle bot logic
A repo to handle custom chat via Direct Line
Currently, we have a feature where we can trigger the bot to start a specific dialog if it's mentioned as a parameter in the URL. So something like
https://foo.com/?param=bar
would trigger the bar dialog.
This is the code that handles it:
function(userId, conversationId, params, token){
    return new Promise((resolve, reject) => {
        var _directlineAddress = {
            bot: { "id": config.BOT.ID, "name": config.BOT.HANDLE },
            channelId: "directline",
            serviceUrl: config.BOT.DIRECTLINE_URL,
            useAuth: true,
            user: { "id": userId },
            "conversation": { "id": conversationId }
        };

        if (params.options) {
            var _re = /^\?(\w+)*=(\w+)*/;
            var _programType = _re.exec(params.options);
            if (_programType[1] === "foo") {
                var _dialogId = "*:/foo";
            }
            else {
                var _dialogId = "*:/" + _programType[1];
            }
        } else {
            var _dialogId = "*:/";
            var _specialParams = { "sessionId": token };
        }

        bot.beginDialog(_directlineAddress, _dialogId, _specialParams, function(err){
            if (err) {
                reject(err);
            }
            else {
                resolve();
            }
        });
    });
};
Since I'm splitting the Direct Line client from the bot logic, I will no longer have access to the bot object; therefore bot.beginDialog would not work here.
Is there a way I can trigger the dialog by posting to the Direct Line API?
No. With Direct Line you will only be able to send messages to the bot. I guess a way to go here would be to define a convention message that you send via Direct Line, so that the bot logic knows it has to start a dialog based on it.
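For example, a rough sketch of that convention (not tested; the 'startDialog' event name, the dialog id, and the node-fetch dependency are all just assumptions): the chat repo posts an event activity into the conversation through the Direct Line v3 REST API, and the bot-logic repo listens for that event and begins the dialog itself.
// Chat/Direct Line repo: post an event activity into the existing conversation.
// 'startDialog' and the dialog id in "value" are illustrative names.
const fetch = require('node-fetch');

function requestDialog(conversationId, token, dialogId) {
    return fetch(`https://directline.botframework.com/v3/directline/conversations/${conversationId}/activities`, {
        method: 'POST',
        headers: { 'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json' },
        body: JSON.stringify({
            type: 'event',
            name: 'startDialog',
            from: { id: 'user1' },
            value: { dialogId: dialogId }
        })
    });
}

// Bot-logic repo (botbuilder v3): react to the event and begin the requested dialog.
bot.on('event', function (event) {
    if (event.name === 'startDialog' && event.value && event.value.dialogId) {
        bot.beginDialog(event.address, event.value.dialogId);
    }
});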