AWS IoT Button "JSON.stringify" - node.js

Dear AWS IoT developers,
I realized that I can only get three parameters, as illustrated in the code below:
// Amazon's IoT button sends three parameters when it is pressed ...
var body = JSON.stringify({
    clickType: event.clickType,          // (string) the type of press; can be "SINGLE", "DOUBLE" or "LONG"
    serialNumber: event.serialNumber,    // (string) device's serial number, from the back of the button
    batteryVoltage: event.batteryVoltage // (string) device's voltage level in millivolts, e.g. "1567mV"
});
My question is: is there any way to get additional parameters from the event when using JSON.stringify?
PS: the complete code is available at this link.

According to this Stack Overflow post (link), the event JSON only provides these parameters. Other values, such as "lat/long", would be hard to obtain.
A suggested solution for finding lat/long would probably be to write Node.js code that uses a Google Maps package from npm.
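For illustration, here is a minimal sketch of that idea using the @google/maps client (one of several Google Maps packages on npm). The API key, the serial-to-address mapping, and the helper name are all hypothetical, since the button itself reports no location:
// Hypothetical sketch: derive lat/long by geocoding a known address
// associated with the button, since the event itself carries no location.
var googleMapsClient = require('@google/maps').createClient({
    key: 'YOUR_API_KEY' // placeholder
});

// Hypothetical mapping from button serial numbers to installation addresses.
var buttonAddresses = {
    'G030XXXXXXXXXXXX': '1600 Amphitheatre Parkway, Mountain View, CA'
};

function lookupButtonLatLng(serialNumber, callback) {
    googleMapsClient.geocode({ address: buttonAddresses[serialNumber] }, function (err, response) {
        if (err) return callback(err);
        // geometry.location is an object like { lat: 37.42, lng: -122.08 }
        callback(null, response.json.results[0].geometry.location);
    });
}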

Related

How to define an Alexa Play Directive with MetaData with node.js

I have implemented an Alexa Audio Player skill which plays the audio just fine, but when played on an Echo Show, the name of the song does not show on the display.
I see the Amazon documentation (https://amzn.to/2xzpH4u) refers to a Play directive which includes metadata such as the background image, but I'm not sure how to set this up in node.js.
This is the code snippet from my Play intent handler:
if (this.event.request.type === 'IntentRequest' || this.event.request.type === 'LaunchRequest') {
    var cardTitle = streamDynamic.subtitle;
    var cardContent = streamDynamic.cardContent;
    var cardImage = streamDynamic.image;
    this.response.cardRenderer(cardTitle, cardContent, cardImage);
}
this.response.speak('Enjoy.').audioPlayerPlay('REPLACE_ALL', streamDynamic.url, streamDynamic.url, null, 0);
this.emit(':responseReady');
In your if statement (i.e. when rendering is supported), you build the content of the metadata for the card that is rendered on the device.
So, following the documentation, cardTitle, cardContent, and cardImage all have to be whatever you want the device to render as a card. You return them for rendering in the this.response statement once all the resources have been provided.
In the example code from Amazon (https://github.com/alexa/skill-sample-nodejs-audio-player/blob/mainline/single-stream/lambda/src/audioAssets.js), notice how the card assets are specified. Follow this example and look through the whole project for any other pieces you may be missing.
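For reference, the Play directive described in the docs carries the display information inside audioItem.metadata. Below is a sketch of the raw directive shape (the image URLs are placeholders, and depending on your alexa-sdk version you may have to append this object to the response's directives array yourself, since audioPlayerPlay does not expose the metadata fields):
// Sketch of a raw AudioPlayer.Play directive with display metadata;
// field names follow the AudioPlayer interface reference, image URLs are placeholders.
var playDirective = {
    type: 'AudioPlayer.Play',
    playBehavior: 'REPLACE_ALL',
    audioItem: {
        stream: {
            url: streamDynamic.url,   // must be an HTTPS URL
            token: streamDynamic.url,
            offsetInMilliseconds: 0
        },
        metadata: {
            title: streamDynamic.subtitle,
            subtitle: streamDynamic.cardContent,
            art: {
                sources: [{ url: 'https://example.com/art.png' }]
            },
            backgroundImage: {
                sources: [{ url: 'https://example.com/background.png' }]
            }
        }
    }
};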

Understanding basic websocket API with node.js and "ws"-package (API: bitfinex)

I'm trying to figure out basic websocket communication using node.js, the "ws"-package (which seems to be a very popular websocket package from npmjs.com) and the bitfinex.com (a cryptocurrency exchange) websocket API.
I want to read the public Ticker for a certain currency-pair, the docs are here: https://docs.bitfinex.com/v2/reference#ws-public-ticker
What I have so far works, but the output is still quite different from what I am supposed to get according to the docs.
I am working with this code snippet taken from the documentation linked above:
const ws = require('ws')
const w = new ws('wss://api.bitfinex.com/ws/2')

w.on('message', (msg) => {
    console.log(msg)
})

let msg = JSON.stringify({
    event: 'subscribe',
    channel: 'ticker',
    symbol: 'tBTCUSD'
})

w.on('open', () => {
    w.send(msg)
})
This works so far, outputting the message from the subscribed channel to the console:
[1,[14873,23.49464465,14874,61.09031263,1087,0.0789,14872,56895.20497085,15500,13891]]
But now, and here is the issue: in the docs the response looks different. How would I determine which number is what? I should be able to get a lot more information out of the response, no?
The given example response looks like this:
// response - trading
{
    event: "subscribed",
    channel: "ticker",
    chanId: CHANNEL_ID,
    pair: "BTCUSD"
}
How does this relate to that array of numbers I get? How would I, for example, read the "pair" field ("BTCUSD") or any of the other listed fields (BID, BID_PERIOD, VOLUME, HIGH, LOW, etc.)? Am I missing something obvious?
I know this is a lot to ask at once but maybe someone knows one or two good examples or hints to enlighten me. Thanks in advance!
The overall websocket scheme for this API is described at https://bitfinex.readme.io/v2/docs/ws-general
If you haven't already read that page, now would be a good time to do it.
For your example program you should have seen info and subscribed events as the first two messages from the websocket. info should have been sent as soon as the websocket connection was established, and subscribed should have been sent in response to your subscribe request.
After that, you should see a ticker snapshot message followed by periodic ticker update messages for the channel that you subscribed to. These are the JSON arrays that you're seeing. The format of these messages for a public ticker channel is described at https://bitfinex.readme.io/v2/reference#ws-public-ticker -- click the Snapshot and Update headings in the dark green 'details' bar to see the definitions. Snapshots and updates use the same format; for a trading pair such as tBTCUSD it is:
[ CHANNEL_ID,
  [ BID, BID_SIZE, ASK, ASK_SIZE,
    DAILY_CHANGE, DAILY_CHANGE_PERC, LAST_PRICE, VOLUME, HIGH, LOW
  ]
]
with meanings as described in the 'Stream Fields' table at the above URL. (Funding currencies, whose symbols start with 'f', carry a few extra fields: FRR, BID_PERIOD, and ASK_PERIOD.)
You can parse these messages as JSON strings and access the field values just as you would for any array.
It's a little strange that the API sends these as arrays rather than as objects with named attributes. I imagine they're looking to keep these messages compact because they make up the bulk of the traffic.
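As a concrete sketch, here is one way to name the fields of an incoming trading-ticker array (the field order comes from the docs above; how you track channel ids per subscription is up to you):
// Field names for a trading-pair ticker, in the documented order.
const TICKER_FIELDS = ['BID', 'BID_SIZE', 'ASK', 'ASK_SIZE',
    'DAILY_CHANGE', 'DAILY_CHANGE_PERC', 'LAST_PRICE', 'VOLUME', 'HIGH', 'LOW']

w.on('message', (msg) => {
    const data = JSON.parse(msg)
    if (Array.isArray(data)) {
        // Channel messages look like [CHANNEL_ID, payload];
        // payload is the string 'hb' for heartbeats, an array otherwise.
        const [chanId, payload] = data
        if (Array.isArray(payload)) {
            const ticker = {}
            TICKER_FIELDS.forEach((name, i) => { ticker[name] = payload[i] })
            console.log('channel', chanId, ticker)
        }
    } else if (data.event === 'subscribed') {
        // This is where you learn which chanId belongs to which pair.
        console.log('channel', data.chanId, 'carries', data.pair)
    }
})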

Setting the source number on nodejs with Plivo

If I want to make a call using the Plivo API, I need a source number and a destination number. However, the source number has to be linked to my Plivo account. Their tutorials mention that buying a Plivo number is optional and only needed if you want to receive calls. I only want to send calls. Does anyone know how to make a call with a free account using Plivo in nodejs?
var params = {
    'to': '2222222222',                         // The phone number to which the call has to be placed
    'from': '1111111111',                       // The phone number to be used as the caller id
    'answer_url': "https://some-url/speak.xml", // The URL invoked by Plivo when the outbound call is answered
    'answer_method': "GET",                     // The method used to call the answer_url
};
Apparently, you can use any number as the source; it doesn't have to be registered with your account. So 'from': '1111111111' would actually work.
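Putting it together, a minimal sketch with the legacy plivo npm package of that era (the credentials are placeholders; current versions of the package use a different client API):
// Sketch using the legacy 'plivo' package (v0.x); credentials are placeholders.
var plivo = require('plivo');
var api = plivo.RestAPI({
    authId: 'YOUR_AUTH_ID',
    authToken: 'YOUR_AUTH_TOKEN'
});

api.make_call(params, function (status, response) {
    console.log('Status:', status);      // HTTP status code
    console.log('Response:', response);  // API response body
});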

Chrome screen sharing get monitor info

I implemented a Chrome extension which uses chrome.desktopCapture.chooseDesktopMedia to retrieve the screen id.
This is my background script:
chrome.runtime.onConnect.addListener(function (port) {
    port.onMessage.addListener(messageHandler);

    // Listen to "content-script.js"
    function messageHandler(message) {
        if (message == 'get-screen-id') {
            chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], port.sender.tab, onUserAction);
        }
    }

    function onUserAction(sourceId) {
        // Access denied
        if (!sourceId || !sourceId.length) {
            return port.postMessage('permission-denied');
        }
        port.postMessage({
            sourceId: sourceId
        });
    }
});
I need to get the shared monitor's info (resolution, landscape or portrait).
My question is: if the user has more than one monitor, how can I determine which monitor they picked?
Can I, for example, add the "system.display" permission to my extension and get the picked monitor's info from "chrome.system.display.getInfo"?
You are right. You could add the system.display permission, call chrome.system.display.getDisplayLayout(callback) and handle DisplayLayout.position in the callback to get the layout, and call chrome.system.display.getInfo to handle the array of DisplayInfo in its callback. You should look for the 'isPrimary' value.
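A minimal sketch of that approach, assuming the "system.display" permission is declared in the manifest:
// Enumerate displays; each DisplayInfo includes bounds, rotation and isPrimary.
chrome.system.display.getInfo(function (displays) {
    displays.forEach(function (display) {
        console.log(display.id, display.name,
            display.bounds.width + 'x' + display.bounds.height,
            'rotation:', display.rotation,
            'primary:', display.isPrimary);
    });
});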
This is a year-old question, but I came across it since I was after the same information, and I finally managed to figure out how you can identify which monitor the user selected for screen-sharing in Chrome.
First of all: this information will not come from the extension that you probably built for screen-sharing in Chrome, because:
The chrome.desktopCapture.chooseDesktopMedia API callback only returns a sourceId, which is a string that represents a stream id, which you then pass to getUserMedia to build the media stream.
chrome.system.display.getInfo will give you a list of the displays, yes, but from that info you can't tell which one is being shared, and there is no way to match the sourceId with any of the fields returned for each display.
So... the solution I've found comes from the MediaStream object itself. Once getUserMedia has given you the stream, you need to get the video track, and in there you will find a property called "label". This label gives you an idea of which screen the user picked.
You can get the video track with something like:
const videoTrack = mediaStream.getVideoTracks()[0];
(Check the getVideoTracks API here: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getVideoTracks).
If you print that object, you will see the "label" field. In Chrome screen 1 shows as "0:0", whereas screen 2 shows as "1:0", and I assume screen i would be "i-1:0" (I've only tested with 2 screens).
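For context, here is a sketch of how the stream gets built from the sourceId in the first place, using the Chrome-specific desktop-capture constraints:
// Build the stream from the sourceId returned by chooseDesktopMedia,
// then read the video track's label to identify the picked screen.
navigator.mediaDevices.getUserMedia({
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: sourceId
        }
    }
}).then(function (mediaStream) {
    const videoTrack = mediaStream.getVideoTracks()[0];
    console.log(videoTrack.label); // e.g. "0:0" for screen 1, "1:0" for screen 2
});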
This not only works in Chrome, but also in other browsers that implement it; in Firefox the labels show up as "Screen i" instead. You can also see the same label reported in the addStream event in Chrome's chrome://webrtc-internals.
And that's it! It's not ideal, since this is a label rather than a real screen identifier, but it's something to work with. Once you have the screen identified, in Chrome you can use chrome.system.display.getInfo to get information for that display.

How to send data to APM from NodeJs using mavlink?

How do I write params to ArduPilot (APM) from NodeJS using node-mavlink - for example, to enable the geofence?
You should read the documentation for the mavlink parameter protocol here: http://qgroundcontrol.org/mavlink/parameter_protocol
The basic idea is that you send a PARAM_SET message to set a parameter value, then wait for an ACK in the form of a PARAM_VALUE message that has the value you just set.
The documentation for the PARAM_SET and PARAM_VALUE messages is in the mavlink definition XML file: https://github.com/omcaree/node-mavlink/blob/c30f8a63ca6a1ebc1669fefcd07bb3780540e41b/src/mavlink/message_definitions/v1.0/common.xml#L966
Here's an (untested) example of creating and sending a PARAM_SET message to enable the geofence.
I checked the ArduCopter/APM:Copter parameter documentation to learn that the parameter you want is called FENCE_ENABLE, and that a value of 1 means it's enabled. I checked the mavlink message definition for the MAV_PARAM_TYPE enum to learn the enum value for the param_type argument to specify a UINT8 (my best guess for the type of a boolean parameter).
myMAV.createMessage(
    "PARAM_SET",
    {
        'target_system': 1,
        'target_component': 1,
        'param_id': 'FENCE_ENABLE',
        'param_value': 1.0,
        'param_type': 1
    },
    function (message) {
        serialport.write(message.buffer);
    });
(See the "Initialization" section of the node-mavlink documentation for information on how to load and initialize the library.)
I haven't written the code to receive the ACK from the drone, but the "Parsing Data" section of the documentation will guide you on how to do that.
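As a starting point, here is an (equally untested) sketch of listening for that PARAM_VALUE acknowledgement; it assumes node-mavlink's usual pattern of feeding serial data into parse() and emitting an event per decoded message, so treat the event signature as an assumption:
// Feed incoming serial data to the mavlink parser.
serialport.on('data', function (data) {
    myMAV.parse(data);
});

// The autopilot acknowledges a PARAM_SET by broadcasting a PARAM_VALUE
// message containing the value it actually stored.
myMAV.on('PARAM_VALUE', function (message, fields) {
    // param_id is a fixed-width, null-padded string, so prefix-match it.
    if (fields.param_id.indexOf('FENCE_ENABLE') === 0) {
        console.log('FENCE_ENABLE is now', fields.param_value);
    }
});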
I have built a Node-based ground control station: https://github.com/kvenux/nodegcs
Please feel free to use it.
To enable the geofence, you need to create a message that sets the related param.
Hope it helps.
