I am trying to build a Slack app using AWS Lambda and Node.js. The issue I am facing is that I don't understand what format the Slack bot needs the JSON payload from my AWS Lambda code to be in so it can display it.
I followed the tutorial video suggested on Slack linked here. In the video, the following JSON object is created and returned from the AWS Lambda.
const response = {
  statusCode: 200,
  body: "Sample Response",
};
The Slack bot posts the text entered in the 'body' property (i.e. 'Sample Response' in this case) as a response. This seems to be working well. But I need some more flair than simple text, so I looked into their Block Kit UI builder. There seems to be no documentation for how to do this with a similar 'response' JSON object like the one above. How exactly am I supposed to use the JSON object created by the UI builder?
I do not know much about web development, so sorry if this seems like a very basic question. I wish there were a sample Slack app on their website that showed this.
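For reference, here is a sketch of what I assume the handler would have to return (untested; I'm guessing the Block Kit "blocks" array gets JSON-encoded into the body):

// Untested guess: return the Block Kit JSON, stringified, as the body,
// with an explicit JSON content type.
const response = {
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    response_type: "in_channel",
    blocks: [
      {
        type: "section",
        text: { type: "mrkdwn", text: "*Sample Response* with more flair" },
      },
    ],
  }),
};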
The following may work for you (I use a similar one in production):
{
  "channel": "your-channel-name",
  "username": "channel-username",
  "attachments": [
    {
      "title": "some-title",
      "fallback": "some message",
      "text": "some text",
      "fields": [
        {
          "title": "sub-title",
          "value": "sub-title-value",
          "short": true
        },
        {
          "title": "some-other-title",
          "value": "some-value"
        }
      ],
      "color": "#ff0000"
    }
  ],
  "icon_emoji": ":gun:"
}
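How you deliver this payload depends on your setup. As a rough Node.js sketch (the webhook URL is a placeholder; adapt to however your app posts messages), you could POST it to a Slack incoming webhook:

// Rough sketch: POST a payload like the one above to a Slack incoming
// webhook. Requires Node 18+ for the global fetch; the URL is a placeholder.
const payload = {
  channel: "your-channel-name",
  username: "channel-username",
  text: "some text",
};

fetch("https://hooks.slack.com/services/T000/B000/XXXXXXXX", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
}).then((res) => console.log("Slack responded with status", res.status));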
This link or this one may provide some extra information.
I am new to using Azure Bot Service.
I have a bot that is used inside Microsoft Teams.
I use this bot to send and receive messages inside Microsoft Teams.
When the user tags the bot and sends a text message, I get it in the activity response.
But when the user tags the bot and sends an image attached to the message, I do not get any information about that image.
"attachments" : [ {
"contentType" : "text/html",
"content" : "<div><div><span itemscope=\"\" itemtype=\"http://schema.skype.com/Mention\" itemid=\"0\">BotName12</span> Msg </div>\n</div>"
} ],
The above is what I get.
I expect to also get "contentUrl", "content" and "thumbnailUrl".
Is it related to permissions? If so, where do I define them, on the bot or on the application related to it?
Thanks
I tried to reproduce the procedure; the image is uploaded perfectly and the text message is also sent successfully in MS Teams.
Follow the steps below:
Click on Channels to choose MS Teams.
Select Microsoft Teams and attach the created bot to the Teams channel. After the channel attachment, click on "Open in Teams".
The above is not the code part, but the procedure to create the bot and attach it to Teams correctly.
The required contentType and other attachment information can be retrieved by setting the name property and the contentUrl property to the URL of the media file. content, contentType, contentUrl, name, and thumbnailUrl are the properties that need to be given. The Activity object responsible for your message will then include the Attachment object within its attachments array.
For example, consider the following request for a single image upload.
POST https://your-website/apis/v3/conversations/abcd1234/activities/5d5cdc723
Authorization: Bearer ACCESS_TOKEN
Content-Type: application/json
Here is the JSON body for the message with the image attached:
{
  "type": "message",
  "from": {
    "id": "234567",
    "name": "sender's name"
  },
  "conversation": {
    "id": "user234",
    "name": "conversation's name"
  },
  "recipient": {
    "id": "bot234",
    "name": "recipient's name"
  },
  "text": "Here's a picture of the duck I was telling you about.",
  "attachments": [
    {
      "contentType": "image/jpg",
      "contentUrl": "image URL"
    }
  ],
  "replyToId": "234"
}
To implement different types of media cards, refer to the documentation if needed. We can include AudioCards as well.
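On the receiving side, a minimal sketch, assuming the Node.js botbuilder SDK (your bot may use a different SDK), of logging whatever attachment metadata arrives with a message activity:

// Minimal sketch, assuming the Node.js botbuilder SDK: log the
// attachment metadata on every incoming message activity.
const { ActivityHandler } = require("botbuilder");

class AttachmentLogger extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      for (const att of context.activity.attachments || []) {
        // For Teams images, downloading contentUrl typically requires
        // the bot's bearer token; here we only inspect the metadata.
        console.log(att.contentType, att.contentUrl, att.name);
      }
      await next();
    });
  }
}

module.exports = { AttachmentLogger };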
I have an application within Watson Assistant that consumes many services from other endpoints, and I would like to call this conversation (from Watson) within a Google Assistant conversation for a certain intent. For example, I will develop a rich conversation on Google Assistant, and in one of the options I will call Watson's conversation.
I tried the following, but it didn't work. Does anyone know of any example that can help me?
{"locale": "pt-BR",
"actions": [
{
"description": "Launch intent",
"name": "MAIN",
"fulfillment": {
"conversationName": "mainConversation"
},
"intent": {
"name": "actions.intent.MAIN"
}
},
{
"description": "Direct access",
"name": "BUY",
"fulfillment": {
"conversationName": "ExampleAction"
},
"intent": {
"name": "com.example.ExampleAction.BUY",
"trigger": {
"queryPatterns": [
"teste",
"azul",
"start"
]
}
}
}
],
"conversations": {
"mainConversation": {
"name": "mainConversation",
"url": "https://us-central1-ericanovo-798cc.cloudfunctions.net/webhook",
"fulfillmentApiVersion": 2
},
"BUY": {
"name": "ExampleAction",
"url": "https://orquestrador-sulamerica-teste.mybluemix.net/api/v1/chat/google?externaltoken=574213c0-e904-11e9-9970-ff484aa25334",
"fulfillmentApiVersion": 2
}
}
}
Thanks.
That won't work because the webhook for everything published under the same project has to be the same URL. You are expected to handle all the Intents and "actions" at that webhook.
In your case, you would also need to make sure the request is formatted the way the Watson API would be expecting it. The Assistant will send it using the Conversation Webhook Format, and it sounds like you would send it using Watson's Analyze Text API.
You're not showing any of your code, so it is difficult to be sure, but the former will be in a JSON format that you can parse. You can then use a library in Node (such as request-promise) to make the calls to Watson. Based on the result from Watson, you'd need to format the results as a response and return them to the Assistant.
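A rough sketch of that flow in Node, assuming an Express-style handler; the Watson URL, workspace ID, and credentials are placeholders:

// Rough sketch: one webhook that forwards the user's raw query to
// Watson Assistant (v1 workspace message API) and wraps Watson's reply
// in a Conversation Webhook response. URL and credentials are placeholders.
const rp = require("request-promise");

exports.webhook = async (req, res) => {
  // Pull the user's raw query out of the Conversation Webhook request.
  const query = req.body.inputs?.[0]?.rawInputs?.[0]?.query || "";

  // Forward it to Watson Assistant.
  const watson = await rp({
    method: "POST",
    uri: "https://gateway.watsonplatform.net/assistant/api/v1/workspaces/YOUR_WORKSPACE_ID/message?version=2019-02-28",
    auth: { user: "apikey", pass: "YOUR_API_KEY" },
    body: { input: { text: query } },
    json: true,
  });

  // Wrap Watson's reply in a minimal Conversation Webhook response.
  res.json({
    expectUserResponse: true,
    expectedInputs: [
      {
        inputPrompt: {
          richInitialPrompt: {
            items: [
              {
                simpleResponse: {
                  textToSpeech: (watson.output.text || []).join(" "),
                },
              },
            ],
          },
        },
        possibleIntents: [{ intent: "actions.intent.TEXT" }],
      },
    ],
  });
};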
It isn't clear why you'd need multiple webhooks specifically, although it is certainly possible that some Intents may make different API calls than others.
Keep in mind that your custom Intents will only be valid on invocation. Subsequent Intents will all be TEXT Intents.
I am building a bot for Google Assistant. I have enabled the fulfillment section for some intents. Dialogflow sends the request to the fulfillment URL. The URL is executed and a hard-coded response is returned. I can see the response in the Assistant simulator. Everything works fine except one thing: the request is empty. I can't access fields that are supposed to be present in the request.
I have accessed the same URL using a POST request from Python code and it displays the parameters, so there are no issues in the code. I think I am missing some configuration option.
I was expecting the post body in the following format:
POST body:
{
  "responseId": "ea3d77e8-ae27-41a4-9e1d-174bd461b68c",
  "session": "projects/your-agents-project-id/agent/sessions/88d13aa8-2999-4f71-b233-39cbf3a824a0",
  "queryResult": {
    "queryText": "user's original query to your agent",
    "parameters": {
      "param": "param value"
    },
    "allRequiredParamsPresent": true,
    "fulfillmentText": "Text defined in Dialogflow's console for the intent that was matched",
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            "Text defined in Dialogflow's console for the intent that was matched"
          ]
        }
      }
    ],
    "outputContexts": [
      {
        "name": "projects/your-agents-project-id/agent/sessions/88d13aa8-2999-4f71-b233-39cbf3a824a0/contexts/generic",
        "lifespanCount": 5,
        "parameters": {
          "param": "param value"
        }
      }
    ],
    "intent": {
      "name": "projects/your-agents-project-id/agent/intents/29bcd7f8-f717-4261-a8fd-2d3e451b8af8",
      "displayName": "Matched Intent Name"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {}
}
But when I print the post data using print(request.POST), the actual POST request shown is empty.
One more thing: does Dialogflow append the action name at the end of the fulfillment URL? If so, I will have to handle the logic separately. I have done it without considering the action name, but a lot of my stuff is hacked together, so I just want to be sure.
On another note, is Dialogflow good enough? It has worked fine on a few examples similar to what it was trained on. How many training samples does it need to work properly? What is the underlying algorithm used in Dialogflow? Or should I use the fulfillment URL and handle everything on my own? I am inclined towards the latter; I do not have too much faith in the existing chatbots.
Any help is appreciated.
If the Fallback Intent is the one being triggered, then you wouldn't get any parameters since this means that nothing else matched.
Got it. I used request.body instead of request.POST, which solves the problem, then parsed it with json.loads and accessed the parameters.
I am a newbie to Azure Logic Apps. My aim is to send some variables to the Logic App (via Java service code, which in turn invokes the request trigger through the provided POST URL as a REST API) and obtain the response as JSON.
Currently I have created a request trigger, and the JSON schema looks as follows:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "definitions": {},
  "id": "http://example.com/example.json",
  "properties": {
    "CustomerName": {
      "type": "string"
    },
    "InvoiceFee": {
      "type": "integer"
    },
    "InvoiceNo": {
      "type": "integer"
    }
  },
  "required": [
    "CustomerName",
    "InvoiceFee",
    "InvoiceNo"
  ],
  "type": "object"
}
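For reference, a request body matching this schema would look like the following (using the same values as in my Java code further below):

{
  "CustomerName": "Sakthivel",
  "InvoiceFee": 4000,
  "InvoiceNo": 123
}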
From the request trigger, I am directing to a Response action, and the following is to be returned as the JSON response:
{
  "CustomerName": @{triggerBody()['CustomerName']},
  "InvoiceFee": @{triggerBody()['InvoiceFee']},
  "InvoiceNo": @{triggerBody()['InvoiceNo']}
}
Could you please provide me some reference links on how to access the Logic App from Java service code?
I don't know how to pass the custom created object such that the parameters of the object map to the "CustomerName", "InvoiceNo", and "InvoiceFee" properties.
My Java service code is as follows:
InvoiceDTO invoiceDTOObject2 = new InvoiceDTO();
invoiceDTOObject2.setCustomerName("Sakthivel");
invoiceDTOObject2.setInvoiceNo(123);
invoiceDTOObject2.setInvoiceFee(4000);
ResteasyClient client = new ResteasyClientBuilder().build();
ResteasyWebTarget target = client.target("URL TO PROVID").resolveTemplate("properties", invoiceDTOObject2);
Response response = target.request().get();
String jsonResponse = response.readEntity(String.class);
System.out.println("JSON Reponse "+jsonResponse);
Looking at your code:
Response response = target.request().get();
You are doing a GET operation. Your Logic App HTTP trigger requires you to perform a POST operation, using your InvoiceDTO entity as the body (serialized as JSON).
So it should look something like this, using javax.ws.rs.client.Entity:
Response response = target.request().post(Entity.entity(invoiceDTOObject2, MediaType.APPLICATION_JSON));
Not sure if it's 100% correct, my Java is a little rusty, but that's the general idea.
I'm trying to use the following API method: https://msdn.microsoft.com/office/office365/APi/mail-rest-operations#SendMessages. Sending messages without attachments works just fine, but I cannot understand how to send a message with attachments.
According to the docs, the Message structure can contain an array of Attachments with items of type https://msdn.microsoft.com/office/office365/APi/complex-types-for-mail-contacts-calendar#RESTAPIResourcesFileAttachment. The problem is the field ContentBytes: it seems impossible to dump raw bytes to JSON before sending the request to this API method (dumping any BLOB to JSON as-is makes no sense).
How should I pass Attachments using the REST API at all?
Thanks.
There's an example of passing the attachment on that page: https://msdn.microsoft.com/office/office365/APi/mail-rest-operations#SendMessageOnTheFly
I know I'm 3 years late, but you can look at this example:
https://msdn.microsoft.com/office/office365/APi/mail-rest-operations#create-and-send-messages (if you don't get forwarded to the section "Create and send messages", please scroll manually).
I know it is Office 365 and not Microsoft Graph, but the request is exactly the same.
This is basically how the JSON representation of the POST request looks:
https://outlook.office.com/api/v2.0/me/sendmail
{
  "Message": {
    "Subject": "Meet for lunch?",
    "Body": {
      "ContentType": "Text",
      "Content": "The new cafeteria is open."
    },
    "ToRecipients": [
      {
        "EmailAddress": {
          "Address": "garthf@a830edad9050849NDA1.onmicrosoft.com"
        }
      }
    ],
    "Attachments": [
      {
        "@odata.type": "#Microsoft.OutlookServices.FileAttachment",
        "Name": "menu.txt",
        "ContentBytes": "bWFjIGFuZCBjaGVlc2UgdG9kYXk="
      }
    ]
  }
}
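Note that the ContentBytes value is simply the attachment's raw bytes base64-encoded ("bWFjIGFuZCBjaGVlc2UgdG9kYXk=" decodes to "mac and cheese today"). As a hedged sketch, this is how you could build the same payload in Node.js, with the file path as a placeholder:

// Sketch: build the sendMail payload above in Node.js, base64-encoding
// the file contents into ContentBytes. The file path is a placeholder.
const fs = require("fs");

const payload = {
  Message: {
    Subject: "Meet for lunch?",
    Body: { ContentType: "Text", Content: "The new cafeteria is open." },
    ToRecipients: [
      { EmailAddress: { Address: "garthf@a830edad9050849NDA1.onmicrosoft.com" } },
    ],
    Attachments: [
      {
        "@odata.type": "#Microsoft.OutlookServices.FileAttachment",
        Name: "menu.txt",
        ContentBytes: fs.readFileSync("menu.txt").toString("base64"),
      },
    ],
  },
};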