I want to send three basic cards to the user in a JSON response via fulfillment in Python. Is there any way to do this?
P.S.
Basically, I want to show the user three buttons: call, mail, and call the office. Since a basic card shows only one link, I wondered whether it is possible to show multiple cards that each contain a button.
This is the response that I'm sending:
{
"payload": {
"google": {
"expectUserResponse": "true",
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "Here is the information of " + user_name
}
},
{
"basicCard": {
"title": name,
"subtitle": subtitle,
"image": {
"url": picture_url,
"accessibilityText": "Picture of " + name
},
"formattedText": msg,
"buttons": [{
"title": "Call " + user_name,
"openUrlAction": {
"url": "tel:+" + contact
if contact is not None
else "",
"androidApp": {
"packageName": "com.android.phone"
},
"versions": []
}
}
if contact is not None
else {
"title": "Send Mail to " + user_name,
"openUrlAction": {
"url": "mailto:" + email,
"androidApp": {
"packageName": "android.intent.extra.EMAIL"
},
"versions": []
}
},
{
"title": "Call on extention",
"openUrlAction": {
"url": "tel:+" + extension
if extension is not None
else "",
"androidApp": {
"packageName": "com.android.phone"
},
"versions": []
}
}
],
"imageDisplayOptions": "WHITE"
}
}
],
"suggestions": [{
"title": "Info of " + manager
if manager is not None
else ""
},
{
"title": "Info of " + hr_manager
if hr_manager is not None
else ""
}
]
}
}
}
}
You can't send multiple cards, and although the buttons field on a card takes an array, only one element in that array is allowed.
However, you can do something similar by sending a browsing carousel. This lets you send multiple tiles, each of which includes a title and may include an image, a body, and a link, in the same way that a card has a link.
One issue is that I'm not sure whether the link has to be an http or https URL or whether other URL schemes are allowed. Also keep in mind that not all surfaces that support links are able to place telephone calls.
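For illustration only, here is a sketch of what a browsing carousel payload could look like in your case. The titles, descriptions, and URLs are placeholders, and given the uncertainty above about allowed URL schemes, they are plain https links rather than tel: or mailto:. As far as I know, each tile needs at least a title and an openUrlAction URL; the description and image are optional.
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here is the contact information"
            }
          },
          {
            "carouselBrowse": {
              "items": [
                {
                  "title": "Call",
                  "description": "Call the user on their mobile number",
                  "openUrlAction": { "url": "https://example.com/call" },
                  "image": {
                    "url": "https://example.com/phone-icon.png",
                    "accessibilityText": "Phone icon"
                  }
                },
                {
                  "title": "Send mail",
                  "description": "Send an email to the user",
                  "openUrlAction": { "url": "https://example.com/mail" },
                  "image": {
                    "url": "https://example.com/mail-icon.png",
                    "accessibilityText": "Mail icon"
                  }
                },
                {
                  "title": "Call on extension",
                  "description": "Call the office extension",
                  "openUrlAction": { "url": "https://example.com/extension" },
                  "image": {
                    "url": "https://example.com/office-icon.png",
                    "accessibilityText": "Office icon"
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}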
I need to build an extremely simple message extension that, once it's clicked (in a Teams channel conversation, the same way the "GIF" button is clicked), just shows a card with some text and a button (and then, when Enter is pressed, the card is simply sent).
I'm a beginner when it comes to message extension development. I used the instructions from this Microsoft page, and now I'm trying to strip what I don't need from the generated project and just leave/add what I need.
What I have so far (relevant parts):
In the manifest file:
"composeExtensions": [
{
"botId": "{botId}",
"commands": [
{
"id": "createCard",
"context": [
"compose"
],
"description": "Command to run action to create a Card from Compose Box",
"title": "Create Card",
"type": "action",
"parameters": [
{
"name": "title",
"title": "Card title",
"description": "Title for the card",
"inputType": "text"
},
{
"name": "subTitle",
"title": "Subtitle",
"description": "Subtitle for the card",
"inputType": "text"
},
{
"name": "text",
"title": "Text",
"description": "Text for the card",
"inputType": "textarea"
}
]
}
],
"messageHandlers": [
{
"type": "link",
"value": {
"domains": [
"*.botframework.com"
]
}
}
]
}
],
In the bot implementation:
export class MessageExtensionBot extends TeamsActivityHandler {
public async handleTeamsMessagingExtensionSubmitAction(
context: TurnContext,
action: any
): Promise<any> {
switch (action.commandId) {
case "createCard":
return createCardCommand(context, action);
default:
throw new Error("NotImplemented");
}
}
}
async function createCardCommand(context: TurnContext, action: any): Promise<any> {
const cardJson = {
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"type": "AdaptiveCard",
"version": "1.0",
"body": [
{
"type": "TextBlock",
"text": "Click the button below to launch Custom Link"
}
],
"actions": [
{
"type": "Action.OpenUrl",
"title": "Launch Custom Link",
"url": "https://google.com"
}
]
};
const adaptiveCard = CardFactory.adaptiveCard(cardJson);
const attachment = {
contentType: adaptiveCard.contentType,
content: adaptiveCard.content,
preview: adaptiveCard,
};
return {
composeExtension: {
type: "result",
attachmentLayout: "list",
attachments: [attachment],
},
};
}
What I have working: when I click the message extension button, I get prompted to enter those three properties (title, subtitle, text), and only after that do I get my dummy card displayed.
What I need to do:
eliminate that properties prompt completely; I don't need those. I only need to display the adaptive card directly, without prompting for or waiting on any other user action.
I tried clearing that section from the manifest, but then the app does not work at all: after redeployment I still get prompted for the properties, and I get an error regardless of whether or what I enter.
How can I achieve that?
What do I need to remove from or add to the manifest?
What method do I need to implement in the bot class?
Can anybody help?
Thank you.
If you want to make something like the GIF messaging extension in Teams, you should try the Custom Stickers App Template (C#). This template also includes a manifest, which you just need to configure with your bot ID.
Explanation of the manifest:
The only part you need in the manifest is a simple composeExtensions entry with a command of type query:
"composeExtensions": [
{
"botId": "<bot id>",
"canUpdateConfiguration": false,
"commands": [
{
"id": "Search",
"type": "query",
"title": "Search",
"description": "",
"initialRun": true,
"parameters": [
{
"name": "keyword",
"title": "keyword",
"description": "search for a sticker"
}
]
}
]
}
]
As I understand your question, you want something like this: https://i.stack.imgur.com/vSJ8y.png
In the bot handler:
You need to implement handleTeamsMessagingExtensionQuery(context, query). Here you can use the query argument to search for the image you want to send. If you want to see how to implement it, we have a sample ready (not in the context of GIFs, but it gives the general idea) - Sample link.
In this method you need to return a messaging extension response, in which you return all the results (attachments).
An attachment consists of a preview and content. The preview is rendered in the messaging extension; once you click on the preview, its content is rendered in the compose box.
Refer to this on how to return a messaging extension response whose preview is a thumbnail card and whose content is an adaptive card - link. A rough sketch is also shown below.
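As a rough, untested sketch (the keyword handling, image URLs, and preview title are placeholders, not part of any sample; it assumes the botbuilder CardFactory helpers already used in the question), the query handler could look roughly like this:
import { CardFactory, TeamsActivityHandler, TurnContext } from "botbuilder";

export class MessageExtensionBot extends TeamsActivityHandler {
  public async handleTeamsMessagingExtensionQuery(
    context: TurnContext,
    query: any
  ): Promise<any> {
    // The "keyword" parameter defined in the manifest arrives here.
    const keyword: string = query.parameters?.[0]?.value ?? "";

    // Hypothetical image list; a real bot would search its own image
    // source using the keyword.
    const imageUrls = [
      "https://adaptivecards.io/content/cats/1.png",
      "https://adaptivecards.io/content/cats/2.png",
    ];

    const attachments = imageUrls.map((url) => {
      // Content: the Adaptive Card that lands in the compose box when the
      // user picks this result.
      const card = CardFactory.adaptiveCard({
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        type: "AdaptiveCard",
        version: "1.0",
        body: [{ type: "Image", url }],
      });
      // Preview: a thumbnail card rendered in the messaging extension list.
      const preview = CardFactory.thumbnailCard(keyword || "Sticker", [url]);
      return { ...card, preview };
    });

    return {
      composeExtension: {
        type: "result",
        attachmentLayout: "list",
        attachments,
      },
    };
  }
}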
You can use this Adaptive Card JSON, since you just want to show an image in it:
{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"type": "AdaptiveCard",
"version": "1.0",
"body": [
{
"type": "Image",
"url": "https://adaptivecards.io/content/cats/1.png"
}
]
}
Refer to this for more image-related samples.
I am trying to integrate a Dialogflow bot with Hangouts Chat (for G Suite). I have enabled the integration on Dialogflow, and the basic intents are working fine.
In order to perform backend operations using fulfillment, I have created a Firebase Cloud Function and added it as the webhook URL on the Dialogflow fulfillment page.
I have written the Cloud Function code to identify the intent and to generate the webhook response format for a simple text response. This is working, and I am seeing the Firestore data being modified in response to the intent.
However, for a more complicated intent, I wish to make more use of the dynamic card-based responses that Chat offers. To achieve this, I have looked at the documentation for the Dialogflow card response.
I saw the following code at https://cloud.google.com/dialogflow/docs/integrations/hangouts. When I paste it into the Dialogflow intent editor UI under the Hangouts custom payload (after disabling the webhook integration), it works:
{
"hangouts": {
"header": {
"title": "Pizza Bot Customer Support",
"subtitle": "pizzabot#example.com",
"imageUrl": "..."
},
"sections": [{
"widgets": [{
"keyValue": {
"icon": "TRAIN",
"topLabel": "Order No.",
"content": "12345"
}
},
{
"keyValue": {
"topLabel": "Status",
"content": "In Delivery"
}
}]
},
{
"header": "Location",
"widgets": [{
"image": {
"imageUrl": "https://dummyimage.com/600x400/000/fff"
}
}]
},
{
"header": "Buttons - i could leave the header out",
"widgets": [{
"buttons": [{
"textButton": {
"text": "OPEN ORDER",
"onClick": {
"openLink": {
"url": "https://example.com/orders/..."
}
}
}
}]
}]
}]
}
}
This is exactly what I need, but I need this response to come from the webhook, and I'm not able to work out the correct response format to map between the two.
When I try to return the same payload from the webhook, I get no reply in Hangouts Chat. When I check the History section in the Dialogflow UI, here is the response structure shown in the raw interaction log:
{
"queryText": "<redacted>",
"parameters": {},
"intent": {
"id": "<redacted>",
"displayName": "<redacted>",
"priority": 500000,
"webhookState": "WEBHOOK_STATE_ENABLED"
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {
"webhook_latency_ms": 284
},
"languageCode": "en",
"slotfillingMetadata": {
"allRequiredParamsPresent": true
},
"id": "<redacted>",
"sessionId": "<redacted>",
"timestamp": "2020-07-30T12:05:29.094Z",
"source": "agent",
"webhookStatus": {
"webhookUsed": true,
"webhookPayload": {
"hangouts": {
"header": {
"subtitle": "pizzabot#example.com",
"title": "Pizza Bot Customer Support",
"imageUrl": "..."
},
"sections": [
{
"widgets": [
{
"keyValue": {
"content": "12345",
"topLabel": "Order No.",
"icon": "TRAIN"
}
},
{
"keyValue": {
"topLabel": "Status",
"content": "In Delivery"
}
}
]
},
{
"widgets": [
{
"image": {
"imageUrl": "https://dummyimage.com/600x400/000/fff"
}
}
],
"header": "Location"
},
{
"widgets": [
{
"buttons": [
{
"textButton": {
"text": "OPEN ORDER",
"onClick": {
"openLink": {
"url": "https://example.com/orders/..."
}
}
}
}
]
}
],
"header": "Buttons - i could leave the header out"
}
]
}
},
"webhookStatus": {
"message": "Webhook execution successful"
}
},
"agentEnvironmentId": {
"agentId": "<redacted>",
"cloudProjectId": "<redacted>"
}
}
I also found this link in the Chat docs, which explains how to show an interactive card-based UI: https://developers.google.com/hangouts/chat/how-tos/cards-onclick. However, I'm not able to understand how to integrate it with the webhook.
UPDATE
I followed a tutorial at https://www.leeboonstra.com/Bots/custom-payloads-rich-cards-dialogflow/ and was able to get the card response to show up using the sample code it mentions. It uses this deprecated library (https://github.com/dialogflow/dialogflow-fulfillment-nodejs). Here is the code that makes it work:
let payload = new Payload("hangouts", json, {
rawPayload: true,
sendAsMessage: true,
});
agent.add(payload);
Here the json variable should be the previous JSON structure I mentioned. So now I'm able to map to the correct response format using the deprecated API. However, I'm not able to get the button to send the right response to the backend. Here is the buttons field that I modified from the previous JSON:
"buttons": [
{
"textButton": {
"text": "Click Me",
"onClick": {
"action": {
"actionMethodName": "snooze",
"parameters": [
{
"key": "time",
"value": "1 day"
},
{
"key": "id",
"value": "123456"
}
]
}
}
}
}
]
As far as I know, responding to a Google Chat (formerly Hangouts Chat) button isn't possible when using the direct Dialogflow integration.
The problem is that the button response can be sent in one of two ways:
An event can be sent back to the bot code indicating the click.
The onClick.openLink.url property can be used, as most of your tests show. This takes the person who clicks it to the URL in question, but once there, they're taken out of the bot flow.
However, the documentation for the Hangouts Chat integration with Dialogflow doesn't provide any information about how the click event is passed to Dialogflow, and the last time I tested it, it isn't passed at all.
You can write your own integration using Google Chat's API on something like Cloud Functions or Apps Script and have your script call Dialogflow's Detect Intent API to determine what Intent would be triggered by the user (and determine replies or call the webhook for additional processing). Under this scheme, you can choose how to handle the onClick event. Making your own integration also provides you a way to do Incoming Webhooks, which isn't possible when using the Dialogflow integration.
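As a very rough sketch of that approach (this is not the built-in integration; it assumes a Node/TypeScript HTTP endpoint registered as the Chat bot, the @google-cloud/dialogflow client library, and placeholder project and session values):
import { SessionsClient } from "@google-cloud/dialogflow";

const projectId = "my-dialogflow-project"; // placeholder
const sessionClient = new SessionsClient();

// HTTP handler registered as the Google Chat bot endpoint (e.g. a Cloud Function).
export async function onChatEvent(req: any, res: any): Promise<void> {
  const event = req.body;

  // Decide what text to send to Dialogflow: for a card click, use the
  // action method name; for a normal message, use the message text.
  const queryText =
    event.type === "CARD_CLICKED"
      ? event.action.actionMethodName
      : event.message.argumentText || event.message.text;

  // One Dialogflow session per Chat user is just one possible choice.
  const sessionPath = sessionClient.projectAgentSessionPath(
    projectId,
    event.user.name.replace(/\//g, "-")
  );

  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: { text: { text: queryText, languageCode: "en" } },
  });

  // Map the Dialogflow result back to a Chat message; a real bot would
  // build cards here instead of plain text.
  res.json({
    text: response.queryResult?.fulfillmentText || "Sorry, I didn't get that.",
  });
}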
I believe I have a syntax issue in getting my Autopilot response to work. My program works, but after Autopilot asks the question, it does not give the user much time to say their response before stopping/hanging up the call.
Is there a way to add a timeout or pause? I have tried the syntax for this, but it does not work. This is what I have:
"actions": [
{
"collect": {
"name": "user_input",
"questions": [
{
"question": "Welcome to the modem status check line?",
"name": "search",
"type": "Twilio.NUMBER"
}
],
"on_complete": {
"redirect": {
"method": "POST",
"uri": "https://website......"
}
}
}
}
]
}
When I add the following
{
"listen": true
}
anywhere in this syntax, it does not work and gives me this error:
.actions[0].collect.questions[0] should NOT have additional properties
I have also tried timeout: 3, and it does not work either.
I have tried
{
"listen": true
}
and
"listen": {
before my task.
Twilio developer evangelist here.
You can't use the Listen attribute in a Collect flow, and there is no easy way to add a timeout or pause. You can, however, add a Validate block to your Collect flow, like so, and increase max_attempts so your Autopilot bot repeats the question or asks the user to try again / say their response again.
I'm not sure why this is happening, though, because when I use my bots over a phone call, the call stays open for quite a long time waiting for the user's response.
exports.handler = function(context, event, callback) {
const responseObject = {
"actions": [
{
"collect": {
"name": "collect_clothes_order",
"questions": [
{
"question": "What is your first name?",
"name": "first_name",
"type": "Twilio.FIRST_NAME"
},
{
"question": "What type of clothes would you like?",
"name": "clothes_type",
"type": "CLOTHING",
"validate": {
"on_failure": {
"messages": [
{
"say": "Sorry, that's not a clothing type we have. We have shirts, shoes, pants, skirts, and dresses."
}
],
"repeat_question": true
},
"on_success": {
"say": "Great, I've got your the clothing type you want."
},
"max_attempts": {
"redirect": "task://collect_fallback",
"num_attempts": 3
}
}
}
],
"on_complete": {
"redirect": "https://rosewood-starling-9398.twil.io/collect"
}
}
}
]
};
callback(null, responseObject);
};
Let me know if this helps at all!
In my flow there are three handlers:
Store finder: when the user asks "where is a store near me?", it triggers actions.intent.PERMISSION, which asks the user for their precise location.
Store finder - yes: if the user replies "yes", this is triggered and the nearest stores are shown (based on the lat/long extracted from the request).
Store finder - no: if the user replies "no", this is triggered and only the stores in a specific city are shown.
The JSON response is the same in the two handlers for the yes and no replies:
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Store near: Address Number City.\n Now: OPEN."
}
},
{
"carouselBrowse": {
"items": [
{
"title": "title 0",
"openUrlAction": {
"url": "https://website.it/?0"
},
"description": "description",
"image": {
"url": "https://avatars3.githubusercontent.com/u/5048136?s=460&v=4",
"accessibilityText": "empty"
}
},
{
"title": "title 1",
"openUrlAction": {
"url": "https://website.it/?1"
},
"description": "description",
"image": {
"url": "https://avatars3.githubusercontent.com/u/5048136?s=460&v=4",
"accessibilityText": "empty"
}
},
{
"title": "title 2",
"openUrlAction": {
"url": "https://website.it/?2"
},
"description": "description",
"image": {
"url": "https://avatars3.githubusercontent.com/u/5048136?s=460&v=4",
"accessibilityText": "empty"
}
}
]
}
}
]
},
"userStorage": "{\"lat\":45.4627124, \"long\": 9.1076928}"
}
},
"outputContexts": [
{
"name": "projects/project-name/agent/sessions/ABppEePAPYRhvT9Pcwmu3S61Ka12DUN5gmem7v0p/contexts/context-name",
"lifespanCount": 1,
"parameters": {
"Data": ""
}
}
],
"followupEventInput": {
"parameters": {
"data": {
"listSelect": {}
}
}
}
}
Problem
When I reply "no", the BrowseCarousel works. When I reply "yes", the BrowseCarousel does not work.
I cannot find the reason for this. The JSON response is exactly the same in the two different intents.
Issue solved.
The answer is: the simulator is broken for this particular event. When using a real device, the BrowseCarousel works as expected.
Use a real device if you want to see exactly what the result looks like.
I'm seeing strange behavior. I send a basic card with some information, but regardless of the expectUserResponse JSON flag, the conversation is not closed in Google Assistant. How come? Is it a bug? Can someone confirm?
Here is the JSON that is returned, containing the card:
{
"data": {
"google": {
"expectUserResponse": false,
"systemIntent": {
"intent": "actions.intent.TEXT"
},
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Some text",
"displayText": "Some text"
}
},
{
"basicCard": {
"title": "A title",
"formattedText": "A long text",
"buttons": [
{
"title": "Title button",
"openUrlAction": {
"url": "http://www.google.com"
}
}
]
}
}
]
}
}
}
}
(From https://plus.google.com/102582215848134314158/posts/PG3NbHG9dsr)
The problem is that you're specifying systemIntent. This indicates what system Intent should be used to handle the response.
But you don't want to handle a response, as you've tried to indicate with "expectUserResponse": false.
Given the conflicting information, it chooses to honor the systemIntent setting and waits for a response.
The solution is to remove the systemIntent section completely. In general, unless you're requesting permission or using one of the other helper Intents, you can leave this section out anyway, since you're using API.AI.
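For illustration, here is the same response with the systemIntent block removed and nothing else changed:
{
  "data": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Some text",
              "displayText": "Some text"
            }
          },
          {
            "basicCard": {
              "title": "A title",
              "formattedText": "A long text",
              "buttons": [
                {
                  "title": "Title button",
                  "openUrlAction": {
                    "url": "http://www.google.com"
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}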