I have a chatbot built with Microsoft's Bot Framework, with my web app running the bot on Azure. How can I return a picture as an answer to a message? We have clients using Skype, Messenger, and Kik.
Take a look at the docs section on image and file attachments:
// Attach an image by URL; the channel downloads and renders it
replyMessage.Attachments.Add(new Attachment()
{
    ContentUrl = "https://upload.wikimedia.org/wikipedia/en/a/a6/Bender_Rodriguez.png",
    ContentType = "image/png"
});
Or as JSON:
{
  "attachments": [
    {
      "contentType": "image/png",
      "contentUrl": "https://upload.wikimedia.org/wikipedia/en/a/a6/Bender_Rodriguez.png"
    }
  ]
}
You can also send Rich Cards:
// Rich card: image thumbnail plus title, link, text, and a plain-text fallback
replyMessage.Attachments = new List<Attachment>();
replyMessage.Attachments.Add(new Attachment()
{
    Title = "Bender",
    TitleLink = "https://en.wikipedia.org/wiki/Bender_(Futurama)",
    ThumbnailUrl = "http://www.theoldrobots.com/images62/Bender-18.JPG",
    Text = "Bender Bending Rodríguez, commonly known as Bender, is a main character in the animated television series Futurama.",
    FallbackText = "Bender: http://www.theoldrobots.com/images62/Bender-18.JPG"
});
Related
I use "Google Apps Script" to develop my telegram bot and I want to create a inline keyboard to receive users response data, so I have reference the telegram bot api "https://core.telegram.org/bots/api#available-methods" and create a function to do that,but Why I click the inline keyboard in my telegram bot, the google sheet table can not received the "callback_data" value ?
Thanks!
function start(estringa){
  // Two rows of two inline buttons; each button carries its own callback_data
  var InlineKeyboardMarkup = {
    inline_keyboard: [
      [
        { "text": "A", "callback_data": "chinese" },
        { "text": "B", "callback_data": "english" }
      ],
      [
        { "text": "C", "callback_data": "japanese" },
        { "text": "D", "callback_data": "korean" }
      ]
    ]
  };
  var id = estringa.message.chat.id.toFixed();
  var payload;
  if (estringa.message.text) {
    payload = {
      "method": "sendMessage",
      "chat_id": id,
      "text": "想到哪個網站呢?", // "Which website comes to mind?"
      "reply_markup": JSON.stringify(InlineKeyboardMarkup)
    };
  } else {
    payload = {
      "method": "sendMessage",
      "chat_id": id,
      "text": "hello world!"
    };
  }
  return payload; // return to the caller, which runs this check repeatedly
}
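A note on why the value may never reach the sheet: per the Telegram Bot API, pressing an inline keyboard button does not produce a message update at all; Telegram delivers a callback_query update instead, so code that only reads estringa.message never sees callback_data. Below is a minimal sketch of a webhook entry point that handles both update types and replies to Telegram via the webhook response (logToSheet is an assumed helper that appends to the sheet, not part of the original code):
function doPost(e) {
  var update = JSON.parse(e.postData.contents);

  if (update.callback_query) {
    // A button press arrives as callback_query, not message:
    // the chosen value is in update.callback_query.data
    logToSheet(update.callback_query.data); // assumed helper that writes to the sheet
    // Acknowledge the press so the client stops its loading spinner
    return ContentService.createTextOutput(JSON.stringify({
      method: "answerCallbackQuery",
      callback_query_id: update.callback_query.id
    })).setMimeType(ContentService.MimeType.JSON);
  }

  if (update.message) {
    // Ordinary text message: reuse start() and reply via the webhook response
    return ContentService.createTextOutput(JSON.stringify(start(update)))
      .setMimeType(ContentService.MimeType.JSON);
  }
}
Returning the payload as the webhook response matches the original design, where start() hands a payload with a "method" field back to the caller.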
I have created a hero card for messages with prompt buttons from QnA Maker. The hero card response has a title and buttons. The buttons display properly and work as expected, but the title text does not wrap properly.
if (resResult) {
var answer = resResult.answer;
var resultContext = resResult.context;
var prompts = resultContext && resultContext.prompts;
if (prompts && prompts.length) {
var card = CardFactory.heroCard(
answer,
[],
prompts.map(prompt => ({
type: 'messageBack',
title: prompt.displayText,
displayText: prompt.displayText,
text: prompt.displayText,
value: {
qnaId: prompt.qnaId
}
}))
);
answer = MessageFactory.attachment(card);
}
await context.sendActivity(answer);
}
In the output response in the chat window / Emulator, the title text runs on without wrapping.
The displayed title text needs to wrap, and its font style and color should align with the chatbot's common text styles.
Thanks in advance
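One possible workaround (my suggestion, not from the original post): render the answer as an Adaptive Card instead of a hero card, since an Adaptive Card TextBlock supports wrap: true and basic font styling. A minimal sketch that slots into the same if-block (note that Action.Submit posts its data back rather than sending a messageBack activity, so the QnA dispatch may need a matching tweak):
var card = CardFactory.adaptiveCard({
    type: 'AdaptiveCard',
    version: '1.0',
    body: [
        {
            type: 'TextBlock',
            text: answer,
            wrap: true // wraps long answer text onto multiple lines
        }
    ],
    actions: prompts.map(prompt => ({
        type: 'Action.Submit',
        title: prompt.displayText,
        data: { qnaId: prompt.qnaId }
    }))
});
answer = MessageFactory.attachment(card);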
const quickReply = {
channelData: {
"message": {
"text": "Where are you?",
"quick_replies": [
{
"content_type":"location"
}
]
}
}
}
return await stepContext.prompt(LOCATION, { prompt: quickReply }, InputHints.ExpectingInput);
I read that the syntax to prompt for a location is { "content_type": "location" }, but Facebook Messenger doesn't show the share-location button and I get an error. Please help me.
Facebook Messenger removed support for location quick replies last October, which is why the button isn’t rendering in the chat window. Take a look at Facebook’s developer document on quick replies for more details.
https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies/#locations
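Text quick replies do remain supported, so tappable options can still be offered via channelData. A minimal sketch reusing the original prompt wiring (the titles and payloads here are placeholders):
const quickReply = {
    channelData: {
        message: {
            text: 'Where are you?',
            quick_replies: [
                { content_type: 'text', title: 'Home', payload: 'HOME' },
                { content_type: 'text', title: 'Work', payload: 'WORK' }
            ]
        }
    }
};
return await stepContext.prompt(LOCATION, { prompt: quickReply });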
I am working on a simple Node.js console utility that will upload images for training a Custom Vision model. I'm doing this mainly because the Custom Vision web app won't let you tag multiple images at once.
tl;dr: How do I post images to the CreateImagesFromFiles API endpoint?
I cannot figure out how to pass the images I want to upload. The documentation just defines a string as the type of one of the properties (contents, I guess). I tried passing a path to a local file, a URL to an online file, and even a base64-encoded image as a string. Nothing passed.
There is a testing console (the blue "Open API testing console" button on the linked docs page), but once again it's vague and won't tell you what kind of data it actually expects.
The code here isn't that relevant, but maybe it helps...
const options = {
host: 'southcentralus.api.cognitive.microsoft.com',
path: `/customvision/v2.0/Training/projects/${projectId}/images/files`,
method: 'POST',
headers: {
'Training-Key': trainingKey,
'Content-Type': 'application/json'
}
};
const data = {
images: [
{
name: 'xxx',
contents: 'iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAEklEQVR42mP8z8AARKiAkQaCAFxlCfyG/gCwAAAAAElFTkSuQmCC',
tagIds: [],
regions: []
}
],
tagIds: []
}
const req = http.request(options, res => {
...
})
req.write(JSON.stringify(data));
req.end();
Response:
BODY: { "statusCode": 404, "message": "Resource not found" }
No more data in response.
I got it working using the "API testing console" feature, so I can help you identify your issue (but sorry, I'm not an expert in Node.js, so I will guide you with C# code).
Format of content for API
You are right, the documentation is not clear about the content the API expects. I did some searching and found a project in Microsoft's GitHub repository called Cognitive-CustomVision-Windows, here.
What I saw is that they use a class called ImageFileCreateEntry, whose signature is visible here:
public ImageFileCreateEntry(string name = default(string), byte[] contents = default(byte[]), IList<System.Guid> tagIds = default(IList<System.Guid>))
So I guessed it's using a byte[].
You can also see in their sample how they handle this "batch" mode:
// Or uploaded in a single batch
var imageFiles = japaneseCherryImages.Select(img => new ImageFileCreateEntry(Path.GetFileName(img), File.ReadAllBytes(img))).ToList();
trainingApi.CreateImagesFromFiles(project.Id, new ImageFileCreateBatch(imageFiles, new List<Guid>() { japaneseCherryTag.Id }));
This byte array is then serialized with Newtonsoft.Json: if you look at their documentation (here), it says that a byte[] is converted to a String (base64 encoded). That's our target.
Implementation
Since you mentioned that you tried with a base64-encoded image, I gave it a try to check. I took my Stack Overflow profile picture, which I downloaded locally. Then, using the following, I got the base64-encoded string:
Image img = Image.FromFile(@"\\Mac\Home\Downloads\Picto.jpg");
byte[] arr;
using (MemoryStream ms = new MemoryStream())
{
    img.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
    arr = ms.ToArray();
}
var content = Convert.ToBase64String(arr);
Later on, I called the API with no tags to ensure that the image is posted and visible:
POST https://southcentralus.api.cognitive.microsoft.com/customvision/v2.2/Training/projects/MY_PROJECT_ID/images/files HTTP/1.1
Host: southcentralus.api.cognitive.microsoft.com
Training-Key: MY_OWN_TRAINING_KEY
Content-Type: application/json
{
  "images": [
    {
      "name": "imageSentByApi",
      "contents": "/9j/4AAQSkZJRgA...TOO LONG FOR STACK OVERFLOW...",
      "tagIds": [],
      "regions": []
    }
  ],
  "tagIds": []
}
Response received: 200 OK
{
"isBatchSuccessful": true,
"images": [{
"sourceUrl": "imageSentByApi",
"status": "OK",
"image": {
"id": "GENERATED_ID_OF_IMAGE",
"created": "2018-11-05T22:33:31.6513607",
"width": 328,
"height": 328,
"resizedImageUri": "https://irisscuprodstore.blob.core.windows.net/...",
"thumbnailUri": "https://irisscuprodstore.blob.core.windows.net/...",
"originalImageUri": "https://irisscuprodstore.blob.core.windows.net/..."
}
}]
}
And my image shows up in the Custom Vision portal!
Debugging your code
In order to debug, you should first try to submit your content again with the tagIds and regions arrays empty, as in my test, and then provide the content of the API reply.
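Carrying that over to the asker's Node.js snippet, here is a minimal sketch under the same assumptions (a valid projectId and trainingKey, and a local image at ./image.jpg). Note that the original snippet used the http module against an https endpoint, which by itself can fail, so the https module is used here:
const https = require('https');
const fs = require('fs');

// Base64-encode the local file, matching the byte[] -> base64 JSON serialization
const contents = fs.readFileSync('./image.jpg').toString('base64');

const body = JSON.stringify({
    images: [
        { name: 'imageSentByApi', contents: contents, tagIds: [], regions: [] }
    ],
    tagIds: []
});

const options = {
    host: 'southcentralus.api.cognitive.microsoft.com',
    path: `/customvision/v2.2/Training/projects/${projectId}/images/files`,
    method: 'POST',
    headers: {
        'Training-Key': trainingKey,
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body)
    }
};

const req = https.request(options, res => {
    let data = '';
    res.on('data', chunk => data += chunk);
    res.on('end', () => console.log(res.statusCode, data));
});
req.write(body);
req.end();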
I am using the Google Vision API to recognise text from an image. The image is in Japanese.
But the response is not in Japanese, it is in English. Can anybody tell me how to get the response in Japanese instead of English?
Add a language hint in the AnnotateImageRequest. For example, in C#:
var responses = vision.Images.Annotate(
    new BatchAnnotateImagesRequest()
    {
        Requests = new[] {
            new AnnotateImageRequest() {
                Features = new[] { new Feature() { Type = "TEXT_DETECTION" } },
                Image = new Image() { Content = imageContent },
                ImageContext = new ImageContext()
                {
                    LanguageHints = new string[] { "ja" }
                }
            }
        }
    }).Execute();
Try Type "DOCUMENT_TEXT_DETECTION" instead of "TEXT_DETECTION".
For example (in Java):
Feature feat = Feature.newBuilder().setType(Type.DOCUMENT_TEXT_DETECTION).build();
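For comparison, a rough Node.js equivalent using the @google-cloud/vision client (a sketch; the file path is a placeholder, and I'm assuming the convenience method accepts an imageContext the way the underlying AnnotateImageRequest does):
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient();

async function readJapaneseText() {
    // DOCUMENT_TEXT_DETECTION with a Japanese language hint
    const [result] = await client.documentTextDetection({
        image: { source: { filename: './japanese-page.png' } },
        imageContext: { languageHints: ['ja'] }
    });
    console.log(result.fullTextAnnotation && result.fullTextAnnotation.text);
}

readJapaneseText().catch(console.error);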