Simple multiplexing node.js stream - chunk concatenation issue

I'm trying to implement a simple node.js stream multiplexer/demultiplexer.
While implementing the multiplexing mechanism, I noticed that the output of the multiplexer gets concatenated into a single chunk.
const { PassThrough, Transform } = require("stream");

class Mux extends PassThrough {
  constructor(options) {
    super(options);
  }

  // Creates an input stream that prefixes every chunk with a one-byte id
  // and pipes it into the multiplexer.
  input(id, options) {
    let encode = new Transform({
      transform(chunk, encoding, cb) {
        let buf = Buffer.alloc(chunk.length + 1);
        buf.writeUInt8(id, 0);
        chunk.copy(buf, 1);
        cb(null, buf);
      },
      ...options
    });
    encode.pipe(this);
    return encode;
  }
}
const mux = new Mux();

mux.on("readable", () => {
  console.log("mux >", mux.read());
});

const in1 = mux.input(1);
const in2 = mux.input(2);

in1.write(Buffer.alloc(3).fill(255));
in2.write(Buffer.alloc(3).fill(127));
The output looks like this: mux > <Buffer 01 ff ff ff 02 7f 7f 7f>.
I would have expected to receive two console.log outputs instead.
Expected output:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Can someone explain why I only get one "readable" event and a single chunk containing the data from both inputs?

Use the data event and read from the callback:
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer.
mux.on("data", d => {
  console.log("mux >", d);
});
This now yields:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Why readable is only emitted once is explained in the docs as well:
The 'readable' event will also be emitted once the end of the stream data has been reached but before the 'end' event is emitted.
data and readable behave differently. In your case, 'readable' is not emitted until the end of the stream data has been reached, at which point read() returns everything buffered at once. 'data', by contrast, is emitted once for each available chunk.
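If you want to keep using 'readable', the pattern recommended by the Node.js docs is to drain the internal buffer in a loop. A sketch, reusing the mux instance from the question:

mux.on("readable", () => {
  let chunk;
  // read() must be called repeatedly until it returns null,
  // otherwise data is left sitting in the internal buffer.
  while ((chunk = mux.read()) !== null) {
    console.log("mux >", chunk);
  }
});

Note that read() without a size argument returns everything currently buffered as a single Buffer, so writes that were queued before the event fired can still come out concatenated; if you need the original message boundaries, stick with 'data' or add explicit length framing to the protocol.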

Related

Publishing multiple buffers in mqttjs

So I want to send some bytes from a Node.js script to my mosquitto server over MQTT. For this I am using the client library MQTT.js. To send raw data, the library supports Buffers (and Strings).
All of it works just fine until I try to publish multiple bytes (buffers) in a row. This always results in publishing only the last buffer, but multiple times. When I log the same buffers to the console instead of publishing them with mqttjs, I get my expected results (an increment from 0xfa to 0xfe). However, publishing normal Strings in a row works without trouble. This behaviour is inexplicable to me.
Not quite sure if this is a problem with mqttjs, or if I'm just dumb. I would appreciate a short explanation of what I'm doing wrong, since I did not find any related issues.
var buf = Buffer.from([0xfa]);

client.on('connect', function () {
  client.subscribe('LED', function (err) {
    if (!err) {
      for (; buf[0] < 0xff; buf[0]++) {
        client.publish('LED', buf);
      }
    }
  });
});
Expected message (Result from logging to the console instead of publishing):
<Buffer fa>
<Buffer fb>
<Buffer fc>
<Buffer fd>
<Buffer fe>
Message that mosquitto receives:
<Buffer ff>
<Buffer ff>
<Buffer ff>
<Buffer ff>
<Buffer ff>
A funny discovery I made: if you put the publish before the loop (with buf[0] = 0xfa):
client.publish('LED', buf)
for (; buf[0] < 0xff; buf[0]++) {;}
The output is still:
<Buffer ff>
In JavaScript, objects (including Buffers) are passed by reference, which means the publish method is working on the same Buffer instance the whole time; you are just handing it a reference to the same underlying structure on every call.
Publishing is also asynchronous: the for loop finishes faster than the client can send the messages, so by the time the client actually packs the buffer's contents into a message, the buffer has already been incremented to 0xff.
You will need to create new buffer objects (or clone the existing values) for each message you want to send.
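A minimal sketch of that fix, assuming the same client and topic as in the question; Buffer.from([b]) allocates a fresh one-byte buffer on every iteration, so messages that are still queued can no longer be mutated by later passes of the loop:

client.on('connect', function () {
  client.subscribe('LED', function (err) {
    if (!err) {
      for (let b = 0xfa; b < 0xff; b++) {
        // each publish gets its own Buffer instance
        client.publish('LED', Buffer.from([b]));
      }
    }
  });
});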

How can I parse JSON coming from an nrf24 Buffer?

I am sending a valid JSON object constructed with ArduinoJSON to a Raspberry Pi running node.js (with the library https://github.com/natevw/node-nrf) over an nRF24 radio link. The node.js server receives the data seemingly without problems, but for some reason I can't JSON.parse() the object (or buffer?) without getting a SyntaxError: Unexpected token in JSON at position ...
For some reason the node-nrf library receives the data backwards, so I need to reverse the byte order with Array.prototype.reverse.call(d); after that, console.log(d.toString()) looks fine. In this case the console prints Got data: [{"key":"a1","value":150}]. At this point the buffer contains: <Buffer 5b 7b 22 6b 65 79 22 3a 22 61 31 22 2c 22 76 61 6c 75 65 22 3a 31 35 30 7d 5d 00 00 00 00 00 00>. Those are the actual 32 bytes of the nRF24 payload, I guess.
But then, when the code gets to the JSON.parse() call, I get SyntaxError: Unexpected token in JSON at position 26. That position is exactly where my object data ends in the buffer.
I've also experimented with .toJSON() and JSON.stringify(), but I can't actually get a proper object to use (i.e. obj.key, obj.value); it only returns undefined properties. It seems to me the parsing fails when it reaches the end of the object. I've also tried to match the buffer size to the actual size of the message, just to see if the parsing would succeed, to no avail!
I am probably mixing up the concepts of buffers, streams, pipes and objects... What am I doing wrong?
I need ideas (or fixes!)
Code running on the receiving end in node.js:
var nrf = NRF24.connect(spiDev, cePin, irqPin);
nrf.printDetails();
nrf.channel(0x4c)
  .transmitPower('PA_MIN')
  .dataRate('1Mbps')
  .crcBytes(2)
  .autoRetransmit({ count: 15, delay: 4000 })
  .begin(function () {
    var rx = nrf.openPipe('rx', pipes[0]);
    rx.on('data', d => {
      let obj = Array.prototype.reverse.call(d);
      try {
        console.log("Got data: ", d.toString());
        console.log(obj);
        obj = JSON.parse(obj);
        console.log(obj);
      } catch (err) {
        console.error(err);
      }
    });
  });
I don't think the problem lies in forming the JSON message, but for reference purposes, this is the code running on the Arduino:
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>
#include <ArduinoJson.h>

const uint64_t addresses[5] = {0x65646f4e32LL, 0x65646f4e31LL};
RF24 radio(7, 8);
char output[32];

void setup()
{
  Serial.begin(115200);
  radio.begin();
  radio.setAutoAck(true);
  radio.setDataRate(RF24_1MBPS);
  radio.enableDynamicPayloads();
  radio.setCRCLength(RF24_CRC_16);
  radio.setChannel(0x4c);
  radio.setPALevel(RF24_PA_MAX);
  radio.openWritingPipe(addresses[0]);
}

void loop()
{
  const int capacity = JSON_ARRAY_SIZE(2) + 2 * JSON_OBJECT_SIZE(2);
  StaticJsonBuffer<capacity> jb;
  JsonArray& arr = jb.createArray();
  JsonObject& obj1 = jb.createObject();
  obj1["key"] = "a1";
  obj1["value"] = analogRead(A1);
  arr.add(obj1);
  arr.printTo(output);
  bool ok = radio.write(&output, sizeof(output));
  arr.printTo(Serial);
  Serial.print(ok);
  delay(1000);
}
Most likely you have NUL characters at the end of the string. JSON.parse will refuse to parse it.
let obj = '[{"key":"a1","value":150}]\x00\x00\x00\x00\x00\x00';
JSON.parse(obj);
Uncaught SyntaxError: Unexpected token  in JSON at position 26
If you remove the NUL characters, parsing succeeds:
let obj = '[{"key":"a1","value":150}]\x00\x00\x00\x00\x00\x00';
obj = obj.replace(/\0/g, "");
JSON.parse(obj);
Convert the buffer to a string and strip the NUL padding before parsing:

rx.on('data', d => {
  try {
    // toString() keeps the trailing 0x00 padding bytes,
    // so remove them before handing the string to JSON.parse
    let obj = d.toString().replace(/\0/g, "");
    console.log(obj);
    obj = JSON.parse(obj);
    console.log(obj);
  } catch (err) {
    console.error(err);
  }
});
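Alternatively (a sketch, assuming the payload is always NUL-padded to 32 bytes): cut the buffer at the first zero byte instead of string-replacing, applied after whatever byte-order fix-up you already do:

rx.on('data', d => {
  const end = d.indexOf(0); // index of the first 0x00 byte, or -1 if none
  const json = d.toString('utf8', 0, end === -1 ? d.length : end);
  console.log(JSON.parse(json));
});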

Why isn't node SerialPort Readline parser working?

I've got an Arduino sending very basic messages:
Serial.print('R');
Serial.println(1);
or
Serial.print('R');
Serial.println(2);
I'm trying to read each line using node.js and the SerialPort module but I get inconsistent results:
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52> R
Data: <Buffer 31 0d 0a> 1
Data: <Buffer 52 32 0d 0a> R2
And here's how I've tried to parse:
this.port = new SerialPort(portName, {
  baudRate: baudRate,
  autoOpen: false,
  flowControl: false,
  parser: new Readline("\r\n")
});

this.port.open(function (err) {
  if (err) {
    return console.log('Error opening port: ', err.message);
  }
  console.log("port open!");
});

this.port.on('error', function (err) {
  console.log('Error: ', err.message);
});

this.port.on('open', function () {
  console.log("open event called");
});

this.port.on('data', function (data) {
  console.log('Data:', data, data.toString('utf8'));
});
In short: I'm expecting R1, R2 messages coming in consistently, not split up like this:
Data: <Buffer 52> R
Data: <Buffer 31 0d 0a> 1
I'm passing ("\r\n" / 0x0d 0x0a) to Readline. What am I missing?
How can I get consistent newline parsing using SerialPort in node?
I think the solution to your problem requires binding the event on the parser object, while you're currently listening on the port object. Data that arrives through the port is not always terminated by 0x0d 0x0a (*). Those two bytes are a line terminator signal for the Readline parser only.
Thus, maybe you should write this listener in your code instead:
// I'm not actually sure where parser lives, I'm not
// in the condition of trying by myself...
this.port.parser.on('data', function (data) {
  console.log('Data:', data, data.toString('utf8'));
});
Unfortunately, I don't have any suggestion to make the syntax more elegant, and by my standards this solution is more elegant than creating a function that redirects the bindings for you. It depends on your application though, and at the moment I don't have enough information to suggest a better solution.
(*) In a first (wrong) comment that I immediately deleted, I asked why you use both bytes 0x0d 0x0a (\r\n) as the line terminator rather than simply 0x0a (\n), but the Serial.println method actually writes both bytes by default.
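For what it's worth, recent versions of serialport (v5 and later) removed the parser constructor option altogether; there the documented pattern is to pipe the port into a parser and listen on the parser. A sketch, reusing portName and baudRate from the question:

const SerialPort = require('serialport');
const Readline = SerialPort.parsers.Readline;

const port = new SerialPort(portName, { baudRate: baudRate, autoOpen: false });
const parser = port.pipe(new Readline({ delimiter: '\r\n' }));

parser.on('data', function (line) {
  // one complete line per event: "R1", "R2", ...
  console.log('Line:', line);
});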

Readable Streams and read(0)

The following is an excerpt from the stream-handbook by James Halliday (aka substack):
Here's an example of using .read(n) to buffer stdin into 3-byte
chunks:
process.stdin.on('readable', function () {
  var buf = process.stdin.read(3);
  console.dir(buf);
});
Running this example gives us incomplete data!
$ (echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume1.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
This is because there is extra data left in internal buffers and we
need to give node a "kick" to tell it that we are interested in more
data past the 3 bytes that we've already read. A simple .read(0) will
do this:
process.stdin.on('readable', function () {
  var buf = process.stdin.read(3);
  console.dir(buf);
  process.stdin.read(0);
});
Now our code works as expected in 3-byte chunks!
$ (echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume2.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>
When I change the example to 2-byte read chunks, it breaks, presumably because the internal buffer still has data queued up. But that wouldn't happen if read(0) kicked off a 'readable' event each time it was called; it looks like it only does so after all the input is finished.
process.stdin.on('readable', function () {
  var buf = process.stdin.read(2);
  console.dir(buf);
  process.stdin.read(0);
});
What does this code do under the hood? It seems like read(0) queues another 'readable' event, but only at the end of input. I tried reading through the readable stream source, but it's pretty heavy going. Does anyone know how this example works?

Streams read(0) instruction

There is some code I found at https://github.com/substack/stream-handbook which reads 3 bytes at a time from a stream, and I do not understand how it works.
process.stdin.on('readable', function () {
  var buf = process.stdin.read(3);
  console.log(buf);
  process.stdin.read(0);
});
Being called like this:
(echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume.js
It returns:
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>
First of all, why do I need this .read(0) thing? Doesn't the stream have a buffer where the rest of the data is stored until I request it with .read(size)? But without .read(0) it'll print
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
Why?
The second question concerns the sleep 1 instructions. If I call the script without them
(echo abc; echo def; echo ghi) | node consume.js
It'll print
<Buffer 61 62 63>
<Buffer 0a 64 65>
no matter whether I use .read(0) or not. I don't completely understand this. What logic is used here to produce such a result?
I am not sure exactly what the author of https://github.com/substack/stream-handbook was trying to show with the read(0) approach, but IMHO this is the correct approach:
process.stdin.on('readable', function () {
  let buf;
  // Every time the stream becomes readable (which can happen many times),
  // read all available data from its internal buffer in chunks of whatever
  // size is necessary.
  while (null !== (buf = process.stdin.read(3))) {
    console.dir(buf);
  }
});
You can change the chunk size, and pass the input either with or without sleep...
I happened to be studying the Node.js stream module recently. Here are some comments from inside the Readable.prototype.read function:
// if we're doing read(0) to trigger a readable event, but we
// already have a bunch of data in the buffer, then just trigger
// the 'readable' event and move on.
This says that after .read(0) is called, the stream will just schedule (using process.nextTick) another 'readable' event, provided the stream has not ended.
function emitReadable(stream) {
  // ...
  process.nextTick(emitReadable_, stream);
  // ...
}
