Push Notifications/Toasts Summary for Adaptive Cards on Microsoft Teams - webhooks

I've built a few tools that send messages to Microsoft Teams via webhooks, and I decided to switch to Adaptive Cards to make the messages easier to read and better laid out, since Adaptive Cards can be styled far more than the standard MessageCard (O365 Connector). I've managed to achieve that, but unfortunately hit a bit of a snag at the finish line.
When a push notification is sent with an Adaptive Card, instead of giving a brief breakdown or the first few lines of the message, it simply says "Card". It also shows this way under the Notifications tab of Microsoft Teams (PC or mobile), so as you can imagine it's a little irritating: I send out a lot of messages, and you have to actually tap/click through to read them without seeing a summary beforehand.
In the old style/O365 connector, I would simply use the summary field and it would work just fine.
//O365 Connector
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": "John Doe commented on Trello",
"title": "Project Tango",
I've seen the following provided as advice for Bot Frameworks:
var response = MessageFactory.Text(string.Empty);
response.Attachments.Add(cardAttachment);
response.Summary = "showing custom greeting from the Bot - rather than a card";
await turnContext.SendActivityAsync(response, cancellationToken);
But that does not apply here since I am using webhooks. I did try Summary as a key in the payload anyway to see if it helped, and it didn't.
https://adaptivecards.io/schemas/adaptive-card.json [schema]
I took a look at the adaptive-card.json schema and I can't see anything in there that would come close to looking like it would impact the toast/push notifications either. I did try fallbackText but I think that's only used in the event the renderer is unable to load the adaptive-card, and not used at all for the summary.
Any ideas? Or does utilising Adaptive Cards mean I need to sacrifice the ability to summarise information in notifications/toasts?

UPDATE
The issue is fixed. You can try sending the JSON below:
{
    "type": "message",
    "summary": "my summary",
    "attachments": [
        {
            "contentType": "application/vnd.microsoft.card.adaptive",
            "contentUrl": null,
            "content": {
                "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
                "type": "AdaptiveCard",
                "version": "1.2",
                "body": [
                    {
                        "type": "TextBlock",
                        "text": "For Samples and Templates, see [https://adaptivecards.io/samples](https://adaptivecards.io/samples)"
                    }
                ]
            }
        }
    ]
}
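For reference, here is a minimal sketch of posting that payload to a Teams incoming webhook from Node.js. The webhook URL is a placeholder, and Node 18+ is assumed for the global fetch:

// Post an Adaptive Card with a top-level "summary" to a Teams incoming webhook.
const webhookUrl = 'https://example.webhook.office.com/webhookb2/...'; // placeholder URL

const payload = {
  type: 'message',
  summary: 'John Doe commented on Trello', // what the toast/notification preview should show
  attachments: [
    {
      contentType: 'application/vnd.microsoft.card.adaptive',
      contentUrl: null,
      content: {
        $schema: 'http://adaptivecards.io/schemas/adaptive-card.json',
        type: 'AdaptiveCard',
        version: '1.2',
        body: [{ type: 'TextBlock', text: 'John Doe commented on Trello' }],
      },
    },
  ],
};

fetch(webhookUrl, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
}).then((res) => console.log('Teams responded with', res.status));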
Currently there is no workaround for this issue; we have raised a bug with the engineering team to track this internally. We will let you know once we have updates on this.

Related

How to create envelope from template with checkbox tab data?

DocuSign has a mountain of great documentation when it pertains to Java, Ruby, Node.js, C# and the like, but their documentation is relatively light on sending raw JSON requests. I have a template that has checkbox tabs and I need to be able to create a document to sign with prefilled checkbox data. No examples exist on how to do that with a raw JSON request.
How do you create an envelope from template with checkbox tab data?
After reverse engineering the format from the /accounts/$accountId/envelopes/$envelopeId/documents/$documentId/tabs endpoint, I was able to discover that the checkboxTabs node of your request must look like this:
"checkboxTabs": [
{
"tabLabel": "ACCESSORIES",
"name": "LIGHT_USB_C_ADAPTER",
"selected": "true"
}
]
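For completeness, here is a sketch of where that checkboxTabs block sits in a create-envelope-from-template request. The account ID, template ID, role name and access token are placeholders, and the nesting under templateRoles and tabs follows my reading of the Envelopes::create reference:

// Create an envelope from a template with a prefilled checkbox tab (Node 18+ fetch).
const accountId = '<AccountID>';       // placeholder
const accessToken = '<AccessToken>';   // placeholder

const body = {
  templateId: '11111111-2222-3333-4444-555555555555', // placeholder template ID
  status: 'sent',
  templateRoles: [
    {
      email: 'signer@example.com',
      name: 'Jane Signer',
      roleName: 'Signer', // must match the role name defined on the template
      tabs: {
        checkboxTabs: [
          { tabLabel: 'ACCESSORIES', name: 'LIGHT_USB_C_ADAPTER', selected: 'true' }
        ]
      }
    }
  ]
};

fetch(`https://demo.docusign.net/restapi/v2.1/accounts/${accountId}/envelopes`, {
  method: 'POST',
  headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
  body: JSON.stringify(body)
}).then((res) => res.json()).then(console.log);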
Glad you found the answer. Just wanted to point out that we do show how to make direct JSON calls in both our reference material as well as in our code examples. We use bash scripts with curl to make these calls, so you may see "Bash" or "CURL" in the title of the language when you look up our code examples.
For your case you can find it here: https://developers.docusign.com/esign-rest-api/code-examples/set-envelope-tab-values
Just wanted to add: you can always visit the corresponding SDK method documentation on the DocuSign website. For the Envelopes::createEnvelope method, refer to https://developers.docusign.com/esign-rest-api/reference/Envelopes/Envelopes/create
There you will see the definition of the checkbox tab and can use the available options accordingly.

Trigger one intent at the end of another

Sorry - very newbie question. I have a number of separate intents (let’s call them intent1, intent2, intent3, etc) which constitute a basic FAQ chatbot.
I want users to be able to trigger these independently but I’d also like to guide them from one to the next. So I’d like to be able, at the end of responding to intent1 to ask ‘would you like to hear about intent2 or ask another question’ and respond appropriately.
So far I’ve not messed with node backends etc so there is a possibility the answer lies there.
You don't need to use a fulfillment webhook, but it does make things somewhat easier.
First, remember that Intents handle what the user says, and not what you do with that. Dialogflow's responses appear to suggest they do, but once you get into more complicated interactions (where two different things from the user need to respond the same way), you find that the response section becomes less useful, and you should store your responses in code.
Those responses should include the prompt about the next question.
During fulfillment you should also set a Context (and possibly clear older contexts) to keep track of which question you are suggesting for them next (see the sketch after this answer).
This way - the next response will be triggered by two possible Intents:
One that directly asks a question.
In these cases, you'll use the Intent or action name to determine which question was asked, and provide an answer (and followup prompt).
One that responds "yes".
For this, you'll get the Context that includes information about the question you prompted them for, and provide that answer (and followup prompt).
While the "Followup Intent" feature sounds tempting, it is likely not what you want to use, since it does not allow multiple ways to access it and forces a very narrow path.
You may also wish to take a look at Thinking For Voice: Design Conversations, Not Logic for more about designing your conversation (and how to model it in Dialogflow, in a followup article).
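A minimal fulfillment sketch of that approach, assuming the dialogflow-fulfillment library; the intent names, the suggested-next context and the wording are placeholders:

const { WebhookClient } = require('dialogflow-fulfillment');

// Generic HTTP handler (e.g. a Cloud Function or an Express route).
module.exports = (request, response) => {
    const agent = new WebhookClient({ request, response });

    // Answer intent1, prompt for the next topic, and remember it in a context.
    function intent1Handler(agent) {
        agent.add('Here is the answer to question 1.');
        agent.add('Would you like to hear about topic 2, or ask another question?');
        agent.context.set({ name: 'suggested-next', lifespan: 2, parameters: { next: 'intent2' } });
    }

    // A "yes" intent reads the context to know which answer was suggested.
    function yesHandler(agent) {
        const ctx = agent.context.get('suggested-next');
        const next = ctx ? ctx.parameters.next : null;
        if (next === 'intent2') {
            agent.add('Here is the answer to question 2. Anything else?');
        } else {
            agent.add('What would you like to know about?');
        }
    }

    const intentMap = new Map();
    intentMap.set('intent1', intent1Handler);
    intentMap.set('yes', yesHandler);
    agent.handleRequest(intentMap);
};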
Okay, I am late here! Yes, it is possible with an event. I have recently done this.
function helloIntent(agent){
    agent.add("Hi, how are you ?");
    agent.setFollowupEvent({ name: 'NextIntentEvent', parameters: {} }); // this will do the trick
}
app.js
let intentMap = new Map();
intentMap.set("Hello Intent", helloIntent);
NextIntentEvent should be an event name defined in the intent that you want to trigger.
some code removed for brevity
If you want to make a chain of conversation, there are a few options:
Slot filling
Here you add your follow-up question as a parameter prompt, and you can make that parameter optional, so if the user wants to continue the conversation they proceed by answering that question.
Contexts
You can set up the follow-up question with contexts.
Events
Events are something you can trigger from your webhook once you have sent the response to the current question.
To trigger an event:
POST https://dialogflow.googleapis.com/v2/projects/<ProjectID>/agent/sessions/<SessionID>:detectIntent
Authorization: Bearer <AccessToken>

{
    "queryInput": {
        "event": {
            "name": "event-name",
            "parameters": {
                "parameter-name-1": "parameter-value-1",
                "parameter-name-2": "parameter-value-2",
                ...
            },
            "languageCode": "en-US"
        }
    }
}
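The same event trigger can be made from Node.js; a minimal sketch, assuming the @google-cloud/dialogflow client library and placeholder project/session IDs:

const dialogflow = require('@google-cloud/dialogflow');

async function triggerEvent(projectId, sessionId, eventName) {
    const sessionClient = new dialogflow.SessionsClient(); // uses application default credentials
    const session = sessionClient.projectAgentSessionPath(projectId, sessionId);

    // detectIntent with an event input instead of a text query
    const [response] = await sessionClient.detectIntent({
        session,
        queryInput: {
            event: { name: eventName, languageCode: 'en-US' },
        },
    });
    return response.queryResult.fulfillmentText;
}

triggerEvent('<ProjectID>', '<SessionID>', 'event-name').then(console.log);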

Getstream individual notifications for different reaction types?

As far as I can tell, and from a couple of small experiments, all reactions on an activity are returned together as part of the activity (or can an activity be returned with only a subset of reactions?). Also, the seen/read fields are set on the activity, not on individual reactions. Based on this, granular notifications for reactions like "John liked your post" and "Jane commented on your post", with accurate seen/read fields for each individual reaction, are not possible (unless you make comments an activity instead of a reaction).
Is there a recommended way to implement reactions and notifications that allows for the same features Facebook has?
Reactions are indeed returned as part of the activity, but they are mapped by reaction kind (like, love, etc.).
As for the notifications, instead of using reactions, you can use activities to achieve this:
activity = {
    "actor": "john:1",
    "verb": "like",
    "object": "post:1"
}
This, in combination with using notification feeds should get you the desired result.
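A minimal sketch of that approach with the Stream JavaScript client; the credentials, feed group name and user IDs are placeholders:

const stream = require('getstream');

// Server-side client (API key, secret and app ID are placeholders).
const client = stream.connect('api_key', 'api_secret', 'app_id');

// Add the "like" as its own activity to the post owner's notification feed,
// so it shows up as a separate, individually seen/read notification.
const notifications = client.feed('notification', 'jane');
notifications
    .addActivity({
        actor: 'john:1',
        verb: 'like',
        object: 'post:1',
    })
    .then(() => console.log('notification added'));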
Maybe it wasn't available at the time you wrote the post, but now you can notify people about a reaction using the targetFeed prop as explained in the doc https://getstream.io/activity-feeds/docs/php/reactions_read_feeds/#notify-other-feeds

BotFramework: Create Suggested Actions without text attribute

I'm creating a bot in DirectLine. I'm trying to use SuggestedActions to display a suggested action and I don't want to include the text attribute for that. When I try to run my code without the text attribute, I see a blank message being displayed. How can I avoid that?
My code
var msg = new builder.Message(session)
    .suggestedActions(
        builder.SuggestedActions.create(
            session, [
                builder.CardAction.imBack(session, "disconnect", "Disconnect")
            ]
        )
    );
session.send(msg);
The output I'm getting is a blank message bubble above the suggested action.
Per my understanding, you want a button that is fixed at the bottom and always displayed, to remind your agent that they can disconnect the conversation at any time.
However, based on my testing and understanding, there are two reasons this may not be a good idea:
SuggestedActions are attached to a Message in the Bot Framework, and a bot application is fundamentally conversational. So every message between user and bot, as rendered in the various channels, is always contained in a message bubble, as in your capture. We cannot bypass this behaviour.
Per your requirements, I think you want this button to always be displayed until the agent clicks it. I didn't find any feature like this in the Bot Framework; you would need to send this message additionally alongside every message from the bot, which is not graceful and raises unpredictable risk.
My suggestion is that you create a triggerAction to handle global disconnect requests. Refer to https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-actions for more info.
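A minimal sketch of that triggerAction suggestion (Bot Builder v3 for Node.js); the dialog name and the trigger phrase are placeholders:

// Assumes an existing UniversalBot, e.g.:
// var bot = new builder.UniversalBot(connector);
bot.dialog('disconnect', function (session) {
    session.endConversation('You have been disconnected. Goodbye!');
}).triggerAction({
    matches: /^disconnect$/i  // matched globally, at any point in the conversation
});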

Webhook generated list fetch option selected by user

I'm pretty new in API.AI and Google Actions. I have a list of items which is generated by a fulfillment. I want to fetch the option selected by user. I've tried reading the documentation but I can't seem to understand it.
https://developers.google.com/actions/assistant/responses#handling_a_selected_item
I also tried setting follow up intents but it wont work. It always ends up giving fallback responses.
I'm trying to search for a product, and the result is displayed using the list selector format. I want to fetch the option I selected. This is a search_product intent, and I have a follow-up intent choose_product.
You have two options to get information on an Actions on Google list/carousel selection event in API.AI:
Use API.AI's actions_intent_OPTION event
As Prisoner already mentioned, you can create an intent with actions_intent_OPTION. This intent will match queries that include a list/carousel selection as documented here.
Use a webhook
API.AI will pass the list/carousel selection to your webhook which can be retrieved by either:
A) Use Google's Actions on Google Node.js client library and the app.getContextArgument() method.
B) Use the originalRequest JSON attribute in the body of the request to your webhook to retrieve list/carousel selection events. The structure of a list/carousel selection event webhook request will look something like this:
{
    "originalRequest": {
        "data": {
            "inputs": [
                {
                    "rawInputs": [
                        {
                            "query": "Today's Word",
                            "inputType": "VOICE"
                        }
                    ],
                    "arguments": [
                        {
                            "textValue": "Today's Word",
                            "name": "OPTION"
                        }
                    ],
                    "intent": "actions.intent.OPTION"
                }
            ],
            ...
This is a sideways answer to your question - but if you're new to Actions, then it may be that you're not really understanding the best approaches to designing your own Actions.
Instead of focusing on the more advanced response types (such as lists), focus instead on the conversation you want to have with your user. Don't try to limit their responses - expand on what you think you can accept. Focus on the basic conversational elements and your basic conversational responses.
Once you have implemented a good conversation, then you can go back and add elements which help that conversation. The list should be a suggestion of what the user can do, not a limit of what they must do.
With conversational interfaces, we must think outside the dialog box.
Include 'actions_intent_OPTION' in the Events section of the intent that you want triggered when an item is selected from a list or carousel (both work).
Then use this code in the function that you trigger in your webhook, instead of getContextArguments() or getItemSelected():
const param = assistant.getArgument('OPTION');
OR
app.getArgument('OPTION');
depending on what you named your ApiAiApp (i.e.):
let Assistant = require('actions-on-google').ApiAiAssistant;
const assistant = new Assistant({request: req, response: response});
Then, proceed with how it's done in the rest of the example in the documentation for list/carousel helpers. I don't know exactly why this works, but this method apparently retrieves the actions_intent_OPTION parameter from the JSON request.
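Putting those pieces together, here is a minimal webhook sketch, assuming the v1 actions-on-google library behind an Express app and that choose_product is the action name of the intent carrying the actions_intent_OPTION event:

const express = require('express');
const bodyParser = require('body-parser');
const ApiAiAssistant = require('actions-on-google').ApiAiAssistant;

const app = express();
app.use(bodyParser.json());

app.post('/webhook', (request, response) => {
    const assistant = new ApiAiAssistant({ request: request, response: response });

    // Handler for the intent whose Events section contains actions_intent_OPTION.
    function chooseProduct(assistant) {
        const option = assistant.getArgument('OPTION'); // key of the tapped list/carousel item
        assistant.ask('You picked ' + option + '. Would you like to search for anything else?');
    }

    const actionMap = new Map();
    actionMap.set('choose_product', chooseProduct); // the action name of that intent
    assistant.handleRequest(actionMap);
});

app.listen(8080);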
I think the issue is that responses that are generated by clicking on a list (as opposed to being spoken) end up with an event of actions_intent_OPTION, so API.AI requires you to do one of two things:
Either create an Intent with this Event (and other Contexts, if you wish, to help determine which list is being handled).
Or create a Fallback Intent with the specific Context you want (ie - not your Default Fallback Intent).
The latter seems like the best approach since it will also cover voice responses.
(Or do both, I guess.)
