I am trying to display an Animation Card using the Bot Framework, with text, a GIF, and buttons. It works perfectly in the Bot Framework Emulator but does not show up on Messenger. Any ideas?
Code
// Send the question with the level information if available, the index, and the
// math expression, along with a countdown-timer GIF attachment.
let message = new builder.Message(session)
    .text(level ? level + ' \n' + strings.question : strings.question, dialogData.index + 1, question.expression)
    .addAttachment(
        new builder.AnimationCard(session)
            .media([{
                profile: "image/gif",
                url: "https://media.giphy.com/media/l3q2silev6exF53pu/200w.gif"
            }])
            .buttons(buttons)
            // .toAttachment()
    );
session.send(message);
Screenshot: the card rendered in the Emulator
Screenshot: on Messenger, nothing is displayed
Any ideas what might be off? Thank you in advance for your suggestions.
UPDATE 1
This is the error in my console:
{"error":{"message":"(#100) Param [elements][0][title] must be a non-empty UTF-8 encoded string","type":"OAuthException","code":100,"fbtrace_id":"CLEcx63w+4N"}}
You need to include a title with your Animation Card; Messenger requires all cards to include a title. Also, Animation Cards work a little differently on Messenger: it sends a message with the .gif followed by a card with the title and the buttons, rather than combining them all into one card the way the Emulator does.
In your use case, I would use the first line (the level) as the title and the question as the subtitle. This text will appear below the gif instead of above it, though, so the layout will be a little different from what you have now.
let message = new builder.Message(session)
    .addAttachment(
        new builder.AnimationCard(session)
            .title(level ? level : 'Level 0')  // Messenger requires a non-empty title
            .subtitle(strings.question)        // rendered below the GIF on Messenger
            .media([{
                profile: "image/gif",
                url: "https://media.giphy.com/media/l3q2silev6exF53pu/200w.gif"
            }])
            .buttons(buttons)
    );
session.send(message);
I am trying to test my action in the Google Actions Simulator. Unfortunately, the simulator does not seem to recognize the difference between the phone surface and the smart speaker surface.
I tried logging the screentest variable to the console. In the logs, both the phone and speaker surfaces show 'true', which is clearly not correct. I also checked the 'conversation' data log; both the phone and speaker output contain SCREEN_OUTPUT.
app.intent('Default Welcome Intent', (conv) => {
    let screentest = conv.available.surfaces.capabilities.has('actions.capability.SCREEN_OUTPUT')
    console.log(screentest)
    if (screentest === true) {
        conv.add('Text with screen')
    } else if (screentest === false) {
        conv.add('Text without screen')
    } else {
        conv.add('impossible')
    }
})
Expected results: when using the speaker surface inside the simulator, the assistant's output should be 'Text without screen'.
Actual results: both the phone and speaker surfaces inside the simulator generate the answer 'Text with screen'.
The issue is that you're not quite checking for surfaces correctly.
There are two sets of capabilities reported:
The capabilities available on the surface the user is currently using. If you're using the actions-on-google library, these are available using conv.surface.capabilities.has().
The capabilities available on any surface that the user has connected to their account. These are available using conv.available.surfaces.capabilities.has() if you're using the actions-on-google library.
You're currently using the second set when you should be checking the first to see what the user's current device supports.
The second set is still useful when there is something you want to display: you can make sure one of the user's other devices can handle it before suggesting they switch to it.
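A minimal sketch of the corrected handler, checking the current surface instead:
app.intent('Default Welcome Intent', (conv) => {
    // Check the surface the user is talking to right now,
    // not every surface linked to their account.
    const hasScreen = conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT')
    if (hasScreen) {
        conv.add('Text with screen')
    } else {
        conv.add('Text without screen')
    }
})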
I want to display an image beside the buttons rendered using builder.Prompts. Please find the code below:
builder.Prompts.choice(session, 'Is it useful ?', 'Yes|No', { maxRetries: 0 });
How do I add an image URL or emoticon beside these options? Is it possible to add an image here?
Thanks
On which channel is your bot active? Images inside choice options aren't supported on most channels, but emojis are. You can simply add emojis in your code, since they are just Unicode, though the rendering can differ per channel. Alternatively, use a package like node-emoji to insert them.
const options = ['Option 1 😉', 'Option 2 😎'];
builder.Prompts.choice(session, 'Is it useful ?', options, { maxRetries: 0 });
Instead of using a string separated by pipes (|), I prefer to use an array with all choice options.
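If you'd rather not paste raw emoji into your source, here is a small sketch using the node-emoji package mentioned above (the emoji names are assumptions; check the package's name list):
const emoji = require('node-emoji');

// Build the choice labels from named emoji instead of raw unicode characters.
const options = [`Option 1 ${emoji.get('wink')}`, `Option 2 ${emoji.get('sunglasses')}`];
builder.Prompts.choice(session, 'Is it useful ?', options, { maxRetries: 0 });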
I have implemented an Alexa Audio Player skill which plays the audio just fine, but when it plays on an Echo Show, the name of the song does not show on the display.
I see the documentation on Amazon (https://amzn.to/2xzpH4u) refers to a Play directive which includes metadata such as the background image, but I'm not sure how to set this up in Node.js.
This is the code snippet from my Play intent handler:
if (this.event.request.type === 'IntentRequest' || this.event.request.type === 'LaunchRequest') {
    var cardTitle = streamDynamic.subtitle;
    var cardContent = streamDynamic.cardContent;
    var cardImage = streamDynamic.image;
    this.response.cardRenderer(cardTitle, cardContent, cardImage);
}
this.response.speak('Enjoy.').audioPlayerPlay('REPLACE_ALL', streamDynamic.url, streamDynamic.url, null, 0);
this.emit(':responseReady');
Inside your if statement (meaning card rendering is supported), you build the content of the metadata for the card that is rendered on the device.
So, following the documentation, cardTitle, cardContent, and cardImage all have to be what you want the device to render as a card. You return it to be rendered in the this.response statement once all the resources have been provided.
In the example code from Amazon, https://github.com/alexa/skill-sample-nodejs-audio-player/blob/mainline/single-stream/lambda/src/audioAssets.js, notice how the card assets are specified. Follow this example and look at the whole project for any other pieces you may be missing.
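For reference, the display metadata the question asks about lives on the AudioPlayer.Play directive itself. Here is a rough sketch of its shape based on Amazon's AudioPlayer documentation; the older alexa-sdk helpers used above don't expose the metadata field directly, so you may need to set it on the directive yourself:
// Rough sketch of an AudioPlayer.Play directive carrying the display
// metadata the Echo Show renders, per Amazon's AudioPlayer docs.
const playDirective = {
    type: 'AudioPlayer.Play',
    playBehavior: 'REPLACE_ALL',
    audioItem: {
        stream: {
            url: streamDynamic.url,   // must be HTTPS
            token: streamDynamic.url, // a unique token for this stream
            offsetInMilliseconds: 0
        },
        metadata: {
            title: streamDynamic.subtitle,   // the song name shown on the display
            subtitle: streamDynamic.cardContent,
            art: { sources: [{ url: streamDynamic.image }] },
            backgroundImage: { sources: [{ url: streamDynamic.image }] }
        }
    }
};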
Dear all AWS IoT developers
I realized that I can only get three parameters, as illustrated in the code below:
// Amazon's IoT button sends three parameters when it is pressed ...
var body = JSON.stringify({
    clickType: event.clickType,          // (string) the type of press; can be "SINGLE", "DOUBLE" or "LONG"
    serialNumber: event.serialNumber,    // (string) the device's serial number, from the back of the button
    batteryVoltage: event.batteryVoltage // (string) the device's voltage level in millivolts, e.g. "1567mV"
});
My question is: is there any way to get other parameters out of the event object passed to JSON.stringify?
PS: the complete code is available at this link.
According to this Stack Overflow post (link), the event JSON only provides these parameters. Other parameters, such as "lat/long", would be hard to obtain.
A suggested solution for finding lat/long would probably be to write Node.js code using the googlemaps npm package.
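For example, here is a rough sketch of geocoding a known installation address to lat/long with that package (the configuration and callback shape are assumptions based on the package's README; the button itself reports no location):
var GoogleMapsAPI = require('googlemaps');

// The button event carries no coordinates, so geocode a known address instead.
var gmAPI = new GoogleMapsAPI({ key: 'YOUR_API_KEY', secure: true });
gmAPI.geocode({ address: '1600 Amphitheatre Parkway, Mountain View, CA' }, function (err, result) {
    if (err) return console.error(err);
    var location = result.results[0].geometry.location;
    console.log('lat/long:', location.lat, location.lng);
});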
I implemented a Chrome extension which uses chrome.desktopCapture.chooseDesktopMedia to retrieve a screen id.
This is my background script:
chrome.runtime.onConnect.addListener(function (port) {
    port.onMessage.addListener(messageHandler);

    // Listen to "content-script.js"
    function messageHandler(message) {
        if (message === 'get-screen-id') {
            chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], port.sender.tab, onUserAction);
        }
    }

    function onUserAction(sourceId) {
        // Access denied
        if (!sourceId || !sourceId.length) {
            return port.postMessage('permission-denied');
        }
        port.postMessage({
            sourceId: sourceId
        });
    }
});
I need to get the shared monitor's info (resolution, landscape or portrait).
My question is: if the customer is using more than one monitor, how can I determine which monitor they picked?
Can I, for example, add the "system.display" permission to my extension and get the picked monitor's info from "chrome.system.display.getInfo"?
You are right. You could add the system.display permission, call chrome.system.display.getDisplayLayout(callback), and handle the DisplayLayout.position values in the callback to get the layout, and call chrome.system.display.getInfo to handle the array of DisplayInfo in its callback. You should look for the 'isPrimary' value.
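A small sketch of what that looks like (it requires the "system.display" permission in the manifest):
// List the connected displays and the properties relevant here.
chrome.system.display.getInfo(function (displays) {
    displays.forEach(function (d) {
        console.log(
            d.id,
            d.isPrimary,                             // true for the primary monitor
            d.bounds.width + 'x' + d.bounds.height,  // resolution
            d.rotation                               // 0/90/180/270; 90 or 270 usually means portrait
        );
    });
});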
This is a year-old question, but I came across it since I was after the same information, and I finally managed to figure out how you can identify which monitor the user selected for screen-sharing in Chrome.
First of all: this information will not come from the extension that you probably built for screen-sharing in Chrome, because:
The chrome.desktopCapture.chooseDesktopMedia API callback only returns a sourceId, which is a string that represents a stream id, which you can then pass to getUserMedia to build the media stream.
The chrome.system.display.getInfo will give you a list of the displays, yes, but from that info you can't tell which one is being shared, and there is no way to match the sourceId with any of the fields returned for each display.
So... the solution I've found comes from the MediaStream object itself. Once you have the stream, after calling getUserMedia, you need to get the video track, and in there you will find a property called "label". This label gives you an idea of which screen the user picked.
You can get the video track with something like:
const videoTrack = mediaStream.getVideoTracks()[0];
(Check the getVideoTracks API here: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getVideoTracks).
If you print that object, you will see the "label" field. In Chrome screen 1 shows as "0:0", whereas screen 2 shows as "1:0", and I assume screen i would be "i-1:0" (I've only tested with 2 screens).
And this works not only in Chrome but also in other browsers that implement it! In Firefox, the labels show up as "Screen i".
Also, if you check chrome://webrtc-internals in Chrome, you'll see this is the label shown in the addStream event.
And that's it! It's not ideal, since this is a label rather than a real screen identifier, but it's something to work with. Once you have the screen identified, in Chrome you can use chrome.system.display.getInfo to get information for that display.
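Putting it together, here is a sketch of building the stream from the sourceId and reading the label (the mandatory chromeMediaSource constraints are Chrome-specific):
// Build the stream from the sourceId returned by chooseDesktopMedia,
// then read the video track's label to identify the shared screen.
navigator.mediaDevices.getUserMedia({
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: sourceId
        }
    }
}).then(function (mediaStream) {
    var label = mediaStream.getVideoTracks()[0].label;
    console.log(label); // e.g. "0:0" for screen 1, "1:0" for screen 2 in Chrome
});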