I am sending a valid JSON object constructed with ArduinoJson to a Raspberry Pi running node.js with the library https://github.com/natevw/node-nrf over an nRF24 radio link. The node.js server receives the data seemingly without a problem, but for some reason I can't JSON.parse() the object (or buffer?) without getting a SyntaxError: Unexpected token in JSON at position ...
For some reason the node-nrf library receives the data backwards, so I need to reverse the order of the bytes with Array.prototype.reverse.call(d); after that, console.log(d.toString()) looks fine. In this case, the console shows Got data: [{"key":"a1","value":150}]. At this point, the content of the buffer looks like: <Buffer 5b 7b 22 6b 65 79 22 3a 22 61 31 22 2c 22 76 61 6c 75 65 22 3a 31 35 30 7d 5d 00 00 00 00 00 00>. Those are the actual 32 bytes that the nRF24 payload contains, I guess.
But then, when the code gets to the JSON.parse() call, I get SyntaxError: Unexpected token in JSON at position 26. This is the position where my object data actually ends in the buffer.
I've also experimented with .toJSON() and JSON.stringify(), but can't actually get a proper object to use (i.e. obj.key, obj.value); it only returns undefined properties. It seems to me the parsing fails when it reaches the end of the object. I've also tried to match the buffer size with the actual size of the message just to see if the parsing would succeed, to no avail!
I am probably very mixed up about the concepts of buffers, streams, pipes and objects... What am I doing wrong?
I need ideas (or fixes!)
Code running on the receiving end in node.js:
var nrf = NRF24.connect(spiDev, cePin, irqPin);
nrf.printDetails();
nrf.channel(0x4c).transmitPower('PA_MIN').dataRate('1Mbps').crcBytes(2).autoRetransmit({count:15, delay:4000}).begin(function () {
  var rx = nrf.openPipe('rx', pipes[0]);
  rx.on('data', d => {
    let obj = Array.prototype.reverse.call(d);
    try {
      console.log("Got data: ", d.toString());
      console.log(obj);
      obj = JSON.parse(obj);
      console.log(obj);
    } catch (err) {
      console.error(err);
    }
  });
});
I don't think the problem lies in forming the JSON message, but for reference, this is the code running on the Arduino:
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>
#include <ArduinoJson.h>
const uint64_t addresses[5] = {0x65646f4e32LL, 0x65646f4e31LL};
RF24 radio(7, 8);
char output[32];

void setup()
{
  Serial.begin(115200);
  radio.begin();
  radio.setAutoAck(true);
  radio.setDataRate(RF24_1MBPS);
  radio.enableDynamicPayloads();
  radio.setCRCLength(RF24_CRC_16);
  radio.setChannel(0x4c);
  radio.setPALevel(RF24_PA_MAX);
  radio.openWritingPipe(addresses[0]);
}

void loop()
{
  const int capacity = JSON_ARRAY_SIZE(2) + 2*JSON_OBJECT_SIZE(2);
  StaticJsonBuffer<capacity> jb;
  JsonArray& arr = jb.createArray();
  JsonObject& obj1 = jb.createObject();
  obj1["key"] = "a1";
  obj1["value"] = analogRead(A1);
  arr.add(obj1);
  arr.printTo(output);
  bool ok = radio.write(&output, sizeof(output));
  arr.printTo(Serial);
  Serial.print(ok);
  delay(1000);
}
Most likely you have NUL characters at the end of the string. JSON.parse will refuse to parse it.
let obj = '[{"key":"a1","value":150}]\x00\x00\x00\x00\x00\x00';
JSON.parse(obj);
Uncaught SyntaxError: Unexpected token in JSON at position 26
If you remove the NUL characters, parsing succeeds:
let obj = '[{"key":"a1","value":150}]\x00\x00\x00\x00\x00\x00';
obj = obj.replace(/\0/g, "");
JSON.parse(obj);
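Since the data arrives here as a Buffer, another option is to cut the buffer at the first NUL byte before converting and parsing. This is a minimal sketch (not from the original answer) using standard Buffer methods:
rx.on('data', d => {
  Array.prototype.reverse.call(d);   // node-nrf delivers the bytes in reverse order
  const end = d.indexOf(0);          // position of the first NUL padding byte, or -1 if none
  const json = (end === -1 ? d : d.subarray(0, end)).toString();
  console.log(JSON.parse(json));     // [ { key: 'a1', value: 150 } ]
});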
Convert the buffer data into a string before parsing, like this:
rx.on('data', d => {
  try {
    let obj = d.toString();
    console.log(obj);
    obj = JSON.parse(obj);
    console.log(obj);
  } catch (err) {
    console.error(err);
  }
});
Related
I'm trying to implement a simple node.js stream multiplexer/demultiplexer.
While implementing the multiplexing mechanism, I noticed that the output of the multiplexer gets concatenated into a single chunk.
const { PassThrough, Transform } = require("stream");

class Mux extends PassThrough {
  constructor(options) {
    super(options);
  }

  input(id, options) {
    let encode = new Transform({
      transform(chunk, encoding, cb) {
        let buf = Buffer.alloc(chunk.length + 1);
        buf.writeUInt8(id, 0);
        chunk.copy(buf, 1);
        cb(null, buf);
      },
      ...options
    });
    encode.pipe(this);
    return encode;
  }
}
const mux = new Mux();
mux.on("readable", () => {
console.log("mux >", mux.read())
});
const in1 = mux.input(1);
const in2 = mux.input(2);
in1.write(Buffer.alloc(3).fill(255));
in2.write(Buffer.alloc(3).fill(127));
Output looks like this: mux > <Buffer 01 ff ff ff 02 7f 7f 7f>.
I would have thought that I would receive two console.log outputs.
Expected output:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Can someone explain why I only get one "readable" event and a concatenated chunk from both inputs?
Use the data event and read from the callback:
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer.
mux.on("data", d => {
console.log("mux >", d)
});
This now yields:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Why readable is only emitted once is explained in the docs as well:
The 'readable' event will also be emitted once the end of the stream data has been reached but before the 'end' event is emitted.
data and readable behave differently. In your case, readable is not emitted until the end of the stream data has been reached, at which point read() returns all the buffered data at once; data is emitted for each available chunk.
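If you eventually need to route each framed chunk back to its input, a minimal demultiplexing sketch could look like the following. It is hypothetical (not part of your code) and assumes each 'data' chunk is exactly one framed message, as in the output above:
mux.on("data", chunk => {
  const id = chunk.readUInt8(0);      // first byte carries the input id
  const payload = chunk.subarray(1);  // the rest is the original data
  console.log(`input ${id} >`, payload);
});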
To create a UTF-8 buffer from a string in JavaScript on the web, you do this:
var message = JSON.stringify('ping');
var buf = new TextEncoder().encode(message).buffer;
console.log('buf:', buf);
console.log('buf.buffer.byteLength:', buf.byteLength);
This logs:
buf: ArrayBuffer { byteLength: 6 }
buf.buffer.byteLength: 6
However in Node.js if I do this:
var nbuf = Buffer.from(message, 'utf8');
console.log('nbuf:', nbuf);
console.log('nbuf.buffer:', nbuf.buffer);
console.log('nbuf.buffer.byteLength:', nbuf.buffer.byteLength);
it logs this:
nbuf: <Buffer 22 70 69 6e 67 22>
nbuf.buffer: ArrayBuffer { byteLength: 8192 }
nbuf.buffer.byteLength: 8192
The byteLength is way too high. Am I doing something wrong here?
Thanks
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer
ArrayBuffer.prototype.byteLength Read only
The size, in bytes, of the array. This is established when the array is constructed and cannot be changed. Read only.
It seems you should not assume that the byteLength of the underlying ArrayBuffer equals the number of bytes actually occupied by the Buffer's contents. In Node.js, small Buffers are allocated from a shared internal memory pool (8 KiB by default), which is why nbuf.buffer reports a byteLength of 8192.
In order to get the actual byte length, I suggest using Buffer.byteLength(string[, encoding])
Documentation: https://nodejs.org/api/buffer.html#buffer_class_method_buffer_bytelength_string_encoding
For example,
var message = JSON.stringify('ping');
console.log('byteLength: ', Buffer.byteLength(message));
correctly gives
byteLength: 6
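If you really need the bytes of the underlying ArrayBuffer, keep in mind that the Buffer may only occupy a small region of the pooled ArrayBuffer, so slice it using byteOffset and byteLength. A small sketch using standard Buffer/ArrayBuffer properties:
var message = JSON.stringify('ping');
var nbuf = Buffer.from(message, 'utf8');
// Copy out only the region this Buffer actually occupies inside the pooled ArrayBuffer.
var ab = nbuf.buffer.slice(nbuf.byteOffset, nbuf.byteOffset + nbuf.byteLength);
console.log(ab.byteLength); // 6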
I am having problems with my code architecture and am looking for advice. I am sending multiple read requests to a server via UDP and printing the read responses out. An example of how the request and response look is below. I am getting the response back as one large hex string starting with 0008. I need a way for the code to know how many addresses were requested and what data size was requested for each, and take that into account when printing out the data. Since the data size can change, I cannot just split the string at a fixed position. I am not looking for actual code but rather just some ideas on how one could tackle this.
Request
00 06 - Opcode
00 00 - Block #
02 - Count
34 97 00 20 - Address 1
04 00 - Data Size 1 (bytes)
30 97 00 20 - Address 2
01 00 - Data Size 2 (bytes)
Response - 00080001e603000009
00 08 - Opcode
00 01 - Block #
e6 03 00 00 - Data 1
09 - Data 2
What I am printing right now- e603000009
How I want it printed - Address 1 = e6030000
Address 2 = 09 ...
Address 3 = 00 00
etc.
(it would know where each new piece of data starts from the data sizes and the number of addresses that were requested)
Part of the code where I am sending a read request and emitting it to the HTML page:
app.post('/output3', function(req, res){
  res.sendFile(__dirname + '/upload3.html');

  // Define the host and port values of UDP
  var HOST = '192.168.0.136';
  var PORT = 69;
  var io = require('socket.io')(http);

  // Multiple parameters
  // setInterval will send the message constantly.
  var client = dgram.createSocket('udp4');
  var counter = 0;
  // array for addresses
  var address = [];
  // array for sizes
  var size = [];

  // iterate through all addresses and convert to little endian
  for (var i = 0; i < req.body.address.length; i++) {
    var n = req.body.address[i];
    var s = n.toString(16).match(/.{1,2}/g);
    address[i] = s.reverse().join("").toString(16); // ==> "0x985c0020" (= 2556166176)
  }

  // iterate through all sizes and make little-endian hex strings
  for (var i = 0; i < req.body.size.length; i++) {
    function pad(number, length) {
      var my_string = '' + number;
      while (my_string.length < length) {
        my_string = '0' + my_string;
      }
      return my_string;
    }
    var n2 = pad(req.body.size[i], 4);
    var s2 = n2.toString(16).match(/.{1,2}/g);
    size[i] = s2.reverse().join("").toString(16);
  }

  // empty string to add address and size together
  var x = '';
  for (var i = 0; i < req.body.address.length; i++) {
    x += address[i] + size[i];
  }
  console.log(req.body.size);
  var mrq = read(x);

  // Open listener to receive the initial message and print it to the webpage
  io.on('connection', function(socket){
    var mrq = read(x);
    io.emit('mrq', mrq);
  });

  function read() {
    // Memory Read Request is a 16-bit word with a value of 6
    var message = '0006';
    // Block number is a 16-bit word that we will always set to 0
    message += '0000';
    // Count is variable, and calculated by the size of the parameter list
    message += '02';
    for (var i = 0; i < arguments.length; i++) {
      message += arguments[i];
    }
    return message;
  }

  var message = new Buffer(mrq, 'hex');
  counter++;
  var loop = setInterval(function () {
    // Sends packets over UDP; setInterval keeps sending them until the counter limit is reached
    client.send(message, 0, message.length, PORT, HOST, function (err, bytes) {
      if (err) throw err;
    });
    if (counter === 1000000000000000) {
      clearInterval(loop);
    }
  }, 1/50);

  // Open listener to receive the response and print it to the webpage
  io.on('connection', function(socket){
    client.on('message', function( message, rinfo ){
      // hex string
      var temp = message.readUIntBE(0, 2);
      //console.log(message.toString(16));
      io.emit('temp', temp);
    });
  });
Showing your current code would help us answer this better, but in general the way to do this would be to use a write stream and push your string out in chunks rather than as a whole block.
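As for splitting the response itself, the usual idea is to keep the list of requested sizes from the request and walk the response's data area with a running offset. A minimal sketch with hypothetical names (splitResponse and sizes are not in the original code):
function splitResponse(response, sizes) {
  var offset = 4;  // skip the 2-byte opcode and the 2-byte block #
  return sizes.map(function (size, i) {
    var data = response.subarray(offset, offset + size);
    offset += size;
    return 'Address ' + (i + 1) + ' = ' + data.toString('hex');
  });
}
// splitResponse(Buffer.from('00080001e603000009', 'hex'), [4, 1])
// -> [ 'Address 1 = e6030000', 'Address 2 = 09' ]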
The following is an excerpt from the stream-handbook by James Halliday (aka substack):
Here's an example of using .read(n) to buffer stdin into 3-byte
chunks:
process.stdin.on('readable', function () {
  var buf = process.stdin.read(3);
  console.dir(buf);
});
Running this example gives us incomplete data!
$ (echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume1.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
This is because there is extra data left in internal buffers and we
need to give node a "kick" to tell it that we are interested in more
data past the 3 bytes that we've already read. A simple .read(0) will
do this:
process.stdin.on('readable', function () {
  var buf = process.stdin.read(3);
  console.dir(buf);
  process.stdin.read(0);
});
Now our code works as expected in 3-byte chunks!
$ (echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume2.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>
When I change the example to 2-byte read chunks, it breaks - presumably because the internal buffer still has data queued up. But that wouldn't happen if read(0) kicked off a 'readable' event each time it was called. Looks like it only happens after all the input is finished.
process.stdin.on('readable', function () {
  var buf = process.stdin.read(2);
  console.dir(buf);
  process.stdin.read(0);
});
What does this code do under the hood? It seems like read(0) queues another 'readable' event, but only at the end of input. I tried reading through the readable-stream source, but it's pretty heavy lifting. Does anyone know how this example works?
There is some code I found at https://github.com/substack/stream-handbook which reads 3 bytes from a stream, and I do not understand how it works.
process.stdin.on('readable', function() {
  var buf = process.stdin.read(3);
  console.log(buf);
  process.stdin.read(0);
});
Being called like this:
(echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume.js
It returns:
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>
First of all, why do I need this .read(0) thing? Doesn't the stream have a buffer where the rest of the data is stored until I request it with .read(size)? But without .read(0) it'll print
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
Why?
The second thing is these sleep 1 instructions. If I call the script without them
(echo abc; echo def; echo ghi) | node consume.js
It'll print
<Buffer 61 62 63>
<Buffer 0a 64 65>
no matter whether I use .read(0) or not. I don't understand this completely. What logic is used here to produce such a result?
I am not sure what exactly the author of https://github.com/substack/stream-handbook was trying to show with the read(0) approach, but IMHO this is the correct approach:
process.stdin.on('readable', function () {
  let buf;
  // Every time the stream becomes readable (it can happen many times),
  // read all available data from its internal buffer in chunks of the necessary size.
  while (null !== (buf = process.stdin.read(3))) {
    console.dir(buf);
  }
});
You can change the chunk size, pass the input either with sleep or without it...
I happened to be studying the Node.js stream module these days. Here are some comments inside the Readable.prototype.read function:
// if we're doing read(0) to trigger a readable event, but we
// already have a bunch of data in the buffer, then just trigger
// the 'readable' event and move on.
It says that after a .read(0) call, the stream will just trigger (using process.nextTick) another readable event if the stream has not ended.
function emitReadable(stream) {
  // ...
  process.nextTick(emitReadable_, stream);
  // ...
}