postback button in dialogflow messenger cx - dialogflow-es

Hello, I'm trying to create a flow in Dialogflow CX where, when there are multiple options, I want my user to select one option, with all the options shown as buttons.
I have used the default payload, but I'm not sure how I can send back which button got clicked to my webhook and return the respective info. Currently, if I click on a button it simply opens example.com; if I exclude the link, it opens the same page in a new tab.
{
  "type": "button",
  "icon": {
    "type": "chevron_right",
    "color": "#FF9800"
  },
  "text": "Button text 1",
  "link": "www.example.com",
  "event": {
    "name": "some name",
    "languageCode": "en",
    "parameters": {}
  }
}

For your use case, since the button response type always redirects to a page when clicked, you can consider using suggestion chips instead.
{
  "richContent": [
    [
      {
        "options": [
          {
            "text": "Chip 1"
          },
          {
            "text": "Chip 2"
          }
        ],
        "type": "chips"
      }
    ]
  ]
}
Suggestion chips act like a user text query when the user clicks on them. Therefore, you can just create a route that is triggered by the text of the chip, and get the text query from the webhook request sent to your webhook to return the respective information. For example:
Intent: (screenshot: an intent whose training phrases are the chip texts)
Route: (screenshot: a route using that intent, with webhook fulfillment enabled)
Then, in your webhook, you can get the value from the text field of the webhook request, which you can refer to in order to create a webhook response with the respective information.
Here’s an example in Node.js using Express:
const express = require("express");

const app = express();
app.use(express.json());

app.post("/webhook", (req, res) => {
  // The text of the clicked chip arrives in the request's "text" field
  let option = req.body.text;
  let jsonResponse = {
    fulfillment_response: {
      messages: [
        {
          text: {
            // fulfillment text response to be sent to the agent
            text: [`You've chosen the ${option} option`]
          }
        }
      ]
    }
  };
  res.json(jsonResponse);
});
Alternatively, you can also use entity types and assign the selected chip to a parameter that will also be sent to your webhook.
To assign the text of the chip to a parameter, the intent of the route should contain training phrases that are annotated with an entity type containing all of the options. For example:
Intent: (screenshot: training phrases annotated with the entity type)
Entity Type: (screenshot: an entity type listing each of the options)
Then, in your webhook, you can get the parameter value from the intentInfo.parameters.parameter_id.resolvedValue field of the webhook request, which you can refer to in order to create a webhook response with the respective information.
Here’s an example in Node.js using Express:
app.post("/webhook", (req, res) => {
  // The resolved value of the (example) "options" parameter
  let option = req.body.intentInfo.parameters.options.resolvedValue;
  let jsonResponse = {
    fulfillment_response: {
      messages: [
        {
          text: {
            // fulfillment text response to be sent to the agent
            text: [`You've chosen the ${option} option`]
          }
        }
      ]
    }
  };
  res.json(jsonResponse);
});
Results: (screenshot of the agent responding with the selected option)

There is a simple, albeit hacky, way I have discovered to make this possible (tested in ES): make a chip, get its element, and force-click it.
We can listen for the button click and detect that it was an empty button with just text. Then I use renderCustomCard to make a chip. Everything inside Dialogflow Messenger is hidden deep inside nested shadowRoots, but as of now its structure allows us to get the chip out and call click() on it. The effect is exactly the same as if the user had clicked the chip manually.
const dfMessenger = document.querySelector('df-messenger');
dfMessenger.addEventListener('df-button-clicked', function (event) {
  // Ignore buttons that already have an event or a link attached
  if (event.detail.element.event || event.detail.element.link)
    return;
  // Render a chip with the same text as the clicked button
  dfMessenger.renderCustomCard([
    {
      "type": "chips",
      "options": [
        {
          "text": event.detail.element.text
        }
      ]
    }
  ]);
  // Dig through the nested shadow roots to find the rendered chip and click it
  var messageList = dfMessenger.shadowRoot.querySelector("df-messenger-chat").shadowRoot.querySelector("df-message-list").shadowRoot;
  var chips = [...messageList.querySelectorAll("df-chips")]
    .flatMap((chips) => [...chips.shadowRoot.querySelectorAll(".df-chips-wrapper>a")])
    .filter((a) => a.innerHTML.indexOf(event.detail.element.text) > -1);
  if (chips.length > 0)
    chips.slice(-1)[0].click();
});
This works today; there is no guarantee they won't block this method in the future. But I would actually guess they will implement a real postback button in a similar manner once the beta is over.

Related

How to create Adaptive card and continue dialog on specific response in azure bot node v4

I am currently trying to create a waterfall dialog that starts with an adaptive card.
Originally I had the waterfall working with a ChoicePrompt on every step, but on step 1 I wanted two of the choices to openUrl, so I changed to an adaptive card to start. (Is this required, or is there a way to openUrl from a specific response the user gives to a ChoicePrompt?)
The issue here is that every response (other than the openUrl buttons) leads to the adaptive card repeating itself rather than passing the non-openUrl choice to the next step of the dialog.
I am also storing each dialog response (currently in an array, which I clear at the end of the dialog) to perform a certain action based on all responses combined. (Is there a better way to save user responses than pushing them into an array?)
var answers = [];

async firstStep(stepContext) {
  var send = {
    text: 'question',
    attachments: [
      {
        "contentType": "application/vnd.microsoft.card.hero",
        "content": {
          "text": null,
          "buttons": [
            {
              "type": "imBack",
              "title": "one",
              "value": "one"
            },
            {
              "type": "openUrl",
              "title": "two",
              "value": "https://example.com"
            },
            {
              "type": "openUrl",
              "title": "three",
              "value": "https://example.com"
            }
          ]
        }
      }
    ]
  };
  return await stepContext.context.sendActivity(send);
}

async secondStep(stepContext) {
  const resp = stepContext.result.value;
  answers.push(resp);
  return await stepContext.prompt('ChoicePrompt', {
    prompt: questions[1],
    choices: ChoiceFactory.toChoices(options[1]),
    style: ListStyle.suggestedAction
  });
}

async thirdStep(stepContext) {
  const resp = stepContext.result.value;
  answers.push(resp);
  return await stepContext.prompt('ChoicePrompt', {
    prompt: questions[2],
    choices: ChoiceFactory.toChoices(options[2]),
    style: ListStyle.suggestedAction
  });
}

async finalStep(stepContext) {
  const resp = stepContext.result.value;
  answers.push(resp);
  // get func
  var fun = await this.func(answers);
  // do stuff with what the function returns
  // reset quiz
  answers = [];
  return await stepContext.endDialog();
}
So to summarise: I would like the initial adaptive card to continue repeating itself if anything other than "one" is returned, but if "one" is returned, I would like the dialog to move on to the next step with that value, and to save that value (maybe in a better way than I do above).
Lastly, if there is an easy way to openUrl in the current tab and not a new one, that would be great.
Any insight here on how to work with hero cards would be really helpful.
Thanks in advance.

Actions on Google - handling carousel responses from dialogflow

I've created a simple Google Assistant interface using Dialogflow with several carousels that I want to be able to chain together. Whenever I touch a carousel option, though, it always goes to the first Intent that has the actions_intent_OPTION event specified. I can get to all of my screens using voice commands, but I'm not sure how to process the touch commands to send the user to the right Intent.
Current code in webhook:
const party = 'party';
const cocktail = 'cocktail';
const SELECTED_ITEM_RESPONSES = {
  [party]: 'You selected party',
  [cocktail]: 'You selected cocktail',
};

function carousel(agent) {
  //agent.add(`Item selected`);
  app.intent('actions.intent.OPTION', (conv, params, option) => {
    let response = 'You did not select any item from the list or carousel';
    if (option && SELECTED_ITEM_RESPONSES.hasOwnProperty(option)) {
      response = SELECTED_ITEM_RESPONSES[option];
    } else {
      response = 'You selected an unknown item from the list or carousel';
    }
    conv.ask(response);
  });
}
If I leave the agent.add() line in, then I get "Item selected"... but if I try to use the app.intent code, I just get an empty speech response.
I was trying to create one intent called CarouselHandler to process all the menu selections. I used the sample code to call the carousel() function when that intent gets hit by the event.
let intentMap = new Map();
intentMap.set('Default Welcome Intent', welcome);
intentMap.set('Default Fallback Intent', fallback);
intentMap.set('CarouselHandler', carousel);
agent.handleRequest(intentMap);
You have several questions in here about using options. Let's try to clear a few things up.
Can I get a different Intent triggered for each option?
No. The way options are reported to Dialogflow is that all options will trigger the same Intent. You're responsible for looking at the option string sent and calling another function if you wish.
As you've noted, you need to create an Intent with the Event actions_intent_OPTION.
Your code to handle this might look something like this, although there are other ways to handle it:
app.intent('list.reply.click', (conv, params, option) => {
  // Get the user's selection
  // Compare the user's selection to each of the items' keys
  if (!option) {
    conv.ask('You did not select any item from the list or carousel');
  } else if (option === 'OPTION_1') {
    handleOption1(conv);
  } else if (option === 'OPTION_2') {
    handleOption2Or3(conv);
  } else if (option === 'OPTION_3') {
    handleOption2Or3(conv);
  } else {
    conv.ask('You selected an unknown item from the list or carousel');
  }
});
Can I get a different Intent triggered for each carousel?
Yes. To do this, when you send the carousel you will set an OutgoingContext and delete any other OutgoingContexts you created for a carousel (set their lifespan to 0). Then you will create an Intent that has this Context as an IncomingContext.
The code to send a carousel might look something like this if you're using the actions-on-google library
// assumes: const { List } = require('actions-on-google');
conv.ask("Here is menu 2");
conv.ask(new List({
  title: "Menu 2",
  items: {
    "OPTION_1": {
      title: "Option 1",
      description: "Description 1"
    },
    "OPTION_2": {
      title: "Option 2",
      description: "Description 2"
    },
    "OPTION_3": {
      title: "Option 3",
      description: "Description 3"
    }
  }
}));
conv.contexts.set("menu_2", 99);
conv.contexts.delete("menu_1");
conv.contexts.delete("menu_3");
// Don't forget to add suggestions, too
If you're using the dialogflow-fulfillment library, it would be similar, although there are a few differences:
let conv = agent.conv();
conv.ask("Here is menu 2");
conv.ask(new List({
  title: "Menu 2",
  items: {
    "OPTION_1": {
      title: "Option 1",
      description: "Description 1"
    },
    "OPTION_2": {
      title: "Option 2",
      description: "Description 2"
    },
    "OPTION_3": {
      title: "Option 3",
      description: "Description 3"
    }
  }
}));
agent.add(conv);
agent.setContext({name:"menu_1", lifespan:0});
agent.setContext({name:"menu_2", lifespan:99});
agent.setContext({name:"menu_3", lifespan:0});
If you were using multivocal, the response configuration might look something like this:
{
  Template: {
    Text: "Here is menu 2",
    Option: {
      Type: "carousel",
      Title: "Menu 2",
      Items: [
        {
          Title: "Option 1",
          Body: "Description 1"
        },
        {
          Title: "Option 2",
          Body: "Description 2"
        },
        {
          Title: "Option 3",
          Body: "Description 3"
        }
      ]
    }
  },
  Context: [
    {
      name: "menu_1",
      lifetime: 0
    },
    {
      name: "menu_2",
      lifetime: 99
    },
    {
      name: "menu_3",
      lifetime: 0
    }
  ]
}
The Intent that would capture this option selection would look much like the earlier one: it has the Event actions_intent_OPTION, plus menu_2 as an incoming Context.
Your code to handle this would be similar to the above, except using the different Intent name.
If there are overlapping options between the handlers, they could call the same function that actually does the work (again, as illustrated above).
How can I handle voice and option responses the same way?
AoG will, in some cases, use the voice response to trigger the option; this is what the aliases are for. But even beyond this, if you have Intents that catch phrases from the user and an Intent that works with the Options, all you need to do is have the fulfillment code call the same function. For example:
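Here is a minimal sketch of that pattern (the intent names and the handlePartySelection helper are made up for illustration, not from the original answer):
const { dialogflow } = require('actions-on-google');
const app = dialogflow();

// Shared business logic, called from both the voice and the touch handler.
function handlePartySelection(conv) {
  conv.ask('You selected party. What would you like next?');
}

// Triggered by a spoken phrase, e.g. "party please".
app.intent('menu.party.voice', (conv) => {
  handlePartySelection(conv);
});

// Triggered by tapping the carousel item whose key is "party".
app.intent('list.reply.click', (conv, params, option) => {
  if (option === 'party') {
    handlePartySelection(conv);
  }
});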
Why doesn't the code work?
The line
app.intent('actions.intent.OPTION', (conv, params, option) => {
probably doesn't do what you think it does. Unless actions.intent.OPTION is the name of the Intent in Dialogflow, that string won't be seen in your handler. app.intent() is also how you register an Intent handler with the actions-on-google library, not the dialogflow-fulfillment one.
It also looks like you're mixing the dialogflow-fulfillment way of registering Intent handlers with the actions-on-google way inside your carousel() function. Don't do this. (This may also be part of the reason replies aren't getting back correctly.)
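As a rough sketch of untangling that, staying with the dialogflow-fulfillment style your intent map already uses (this reworks the question's carousel() function; conv.arguments.get('OPTION') is the actions-on-google call for reading the tapped option, so double-check it against your library version):
function carousel(agent) {
  // Get the actions-on-google conversation object from the agent
  let conv = agent.conv();
  // Read which carousel item was tapped
  const option = conv.arguments.get('OPTION');
  if (option && SELECTED_ITEM_RESPONSES.hasOwnProperty(option)) {
    conv.ask(SELECTED_ITEM_RESPONSES[option]);
  } else {
    conv.ask('You did not select any item from the list or carousel');
  }
  // Hand the conversation back to the dialogflow-fulfillment agent
  agent.add(conv);
}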

Telegram callback_data for link buttons

I'm sending a link button through a Telegram bot and I would like to get the callback_data after the user opens the URL.
My options are:
var options = {
  parse_mode: "Markdown",
  reply_markup: {
    inline_keyboard: btns
  }
};
where btns is
[
  [{ text: "Read first", url: "http://any", callback_data: "any_relevant_data" }]
]
The button shows perfectly and the link works, but no callback is triggered and I never hit
bot.on('callback_query', (callback_message) => { /* any action */ });
Is this a missing feature, or is it me doing something wrong?
According to the API documentation, you can't use url and callback_data at the same time:
"This object represents one button of an inline keyboard. You must use exactly one of the optional fields."
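One workaround sketch, assuming the same node-telegram-bot-api-style setup as the question (the second button's text is invented for illustration): split the actions into two buttons, one carrying url and one carrying callback_data:
const TelegramBot = require('node-telegram-bot-api');
const bot = new TelegramBot('<BOT_TOKEN>', { polling: true });

var btns = [
  [
    // URL button: opens the link but never produces a callback_query
    { text: "Read first", url: "http://any" },
    // Callback button: this one does trigger the callback_query handler
    { text: "Done reading", callback_data: "any_relevant_data" }
  ]
];

bot.on('callback_query', (callback_message) => {
  // Fires only for the callback_data button
  console.log(callback_message.data); // "any_relevant_data"
  // Acknowledge so the client stops showing a progress indicator
  bot.answerCallbackQuery(callback_message.id);
});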

Actions on Google API v2 using c# webhook

I currently have an app running on Actions on Google using API.AI. I have modified all the response members to be camelCase, as suggested, and got it working. Now I am trying to return a basic card, but I cannot figure out how to properly return it.
Does anyone have the most basic JSON response returning a basic card to the Google Assistant?
Currently, the most basic v2 API response I have is the following:
{
  speech: "",
  displayText: "",
  data: {
    google: {
      expectUserResponse: true,
      isSsml: true,
      permissionsRequest: null
    }
  },
  contextOut: [],
  source: "webhook"
}
I have some Gists showing the JSON responses here.
Right now, it includes Lists, Basic Card and Carousel, but I will add Transactions hopefully soon. Hope it might help somehow
This is what I use for Voice Tic Tac Toe
"google": {
"expect_user_response": true,
"rich_response": {
"items": [
{
"simple_response": {
"text_to_speech": "Your move was top. I moved left"
}
},
{
"basic_card": {
"image": {
"url": "https://server/010200000.png",
"accessibility_text": "Tic Tac Toe game board"
}
}
}
]
}
}
I ran the Facts about Google sample action and looked at the fulfillment JSON output to learn how to do this:
https://github.com/actions-on-google/apiai-facts-about-google-nodejs
Note that all responses must include at least one, and no more than two, simple_response items.
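For instance, a response at that limit (a sketch of the same shape as the Tic Tac Toe payload above, with invented text) would carry one simple_response before the card and one after:
"google": {
  "expect_user_response": true,
  "rich_response": {
    "items": [
      {
        "simple_response": {
          "text_to_speech": "Here is the board."
        }
      },
      {
        "basic_card": {
          "image": {
            "url": "https://server/board.png",
            "accessibility_text": "game board"
          }
        }
      },
      {
        "simple_response": {
          "text_to_speech": "Where do you want to move?"
        }
      }
    ]
  }
}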

How telegram bot can get file_id of uploaded file?

In the Telegram API documentation I see: "You can either pass a file_id as String to resend a photo that is already on the Telegram servers", but I can't find a way to get the file_id of an uploaded file. How can I get it?
It depends on your content type. For example:
Video: message.video.file_id
Audio: message.audio.file_id
Photo: message.photo[2].file_id
For more, see this link.
This is the easiest way I've found to do it.
Upload your file to any chat and forward the message to @RawDataBot. It will return something like this:
{
  "update_id": 754677603,
  "message": {
    "message_id": 403656,
    "from": {
      "id": xxx,
      "is_bot": false,
      "first_name": "xxx",
      "username": "xxx",
      "language_code": "en"
    },
    "chat": {
      "id": xxx,
      "first_name": "xxx",
      "username": "xxx",
      "type": "private"
    },
    "date": 1589342513,
    "forward_from": {
      "id": xxx,
      "is_bot": false,
      "first_name": "xxx",
      "username": "xxx",
      "language_code": "en"
    },
    "forward_date": 1589342184,
    "document": {
      "file_name": "filename.pdf",
      "mime_type": "application/pdf",
      "file_id": "This_Is_The_Thing_You_Need",
      "file_unique_id": "notthis",
      "file_size": 123605
    }
  }
}
What you need is the string under file_id. Once you have copied that, you can simply use the following code to send the document:
context.bot.sendDocument(chat_id=update.effective_chat.id,
                         document="Your_FILE_ID_HERE")
Depending on the method (file type) you chose to send the file with, Telegram returns a response after the file is sent. For example, if you send an MP3 file to Telegram using the sendAudio method, Telegram returns an Audio object which contains the file ID.
Source: https://core.telegram.org/bots/api#audio
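In JavaScript that might look like the sketch below (using node-telegram-bot-api; the token, chat id, and file path are placeholders):
const fs = require('fs');
const TelegramBot = require('node-telegram-bot-api');

const bot = new TelegramBot('<BOT_TOKEN>');
const chatId = 123456789; // placeholder chat id

// sendAudio resolves with the sent Message, whose audio field is the
// Audio object containing the reusable file_id.
bot.sendAudio(chatId, fs.createReadStream('./track.mp3')).then((msg) => {
  console.log('file_id:', msg.audio.file_id);
});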
In addition to the answers above, you can log the updates that come to your bot, either from https://api.telegram.org/bot<BOT_TOKEN>/getUpdates or through the updates that arrive in your application. There you will find a JSON payload like the one below:
{
  "update_id": 1111111,
  "message": {
    "message_id": 1111111,
    "from": {
      "id": 111111,
      ...
    },
    "chat": {
      "id": 111111,
      ...
    },
    "date": 111111,
    "photo": [
      {
        "file_id": "HERE IS YOUR FILE ID",
        "file_size": XXXX,
        "width": XX,
        "height": XX
      }
    ]
  }
}
Say you receive a Message with an array of PhotoSize:
https://core.telegram.org/bots/api#photosize
As you can see, there's a file_id; you can use it to send a photo through sendPhoto.
Assume Update is an object containing a Message object, which in turn provides a Chat object with the id of the chat where the initial message came from, plus the array of PhotoSize (excuse me for using PHP here, but that's my main language...).
$update->message->photo is how you access the array.
Use some kind of for loop to iterate over the items, or just access the first one if the array isn't bigger than 1.
After that, you can use the result(s) to extract the file_id and send it as a string via sendPhoto's photo parameter, along with the chat ID via the chat_id parameter.
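A rough JavaScript equivalent of the PHP flow above, using node-telegram-bot-api (the library choice is mine, not the answerer's):
const TelegramBot = require('node-telegram-bot-api');
const bot = new TelegramBot('<BOT_TOKEN>', { polling: true });

bot.on('photo', (msg) => {
  // msg.photo is the array of PhotoSize objects, smallest first,
  // so the last element is the largest rendition.
  const fileId = msg.photo[msg.photo.length - 1].file_id;
  // Resend by file_id via sendPhoto's photo parameter; no re-upload needed.
  bot.sendPhoto(msg.chat.id, fileId);
});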
I hope this helped!
P.S. Here is a diagram of my current implementation of the API; I hope it brings some clarity to you!
if you use PHP:
you can write this line for full size:
$file_id = $updates['message']['photo'][1]['file_id'];
and this line for thumb:
$file_id = $updates['message']['photo'][0]['file_id'];
According to the latest docs (v20.0a6), plenty of classes have been changed. I have found that the easiest way to get started with files is to use the effective_attachment property.
async def handle_file(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    media_item = await context.bot.get_file(update.message.effective_attachment[0].file_id)
    media_url = media_item.file_path
There have also been changes to filters for declaring the handler; here is a simple way to declare it:
application.add_handler(MessageHandler(filters.ATTACHMENT, handle_file))
