I'm tinkering with zwack (https://github.com/paixaop/zwack), trying to see what hex values are being written to the buffer in an effort to simulate an FTMS machine/server.
The code:
onReadRequest(offset, callback) {
  debugFTMS('[SupportedPowerRangeCharacteristic] onReadRequest');
  let buffer = Buffer.alloc(6);
  let at = 0;
  let minimumPower = 0;
  buffer.writeInt16LE(minimumPower, at);
  at += 2;
  let maximumPower = 1000;
  buffer.writeInt16LE(maximumPower, at);
  at += 2;
  let minimumIncrement = 1;
  buffer.writeUInt16LE(minimumIncrement, at);
  at += 2;
  debugFTMS('[SupportedPowerRangeCharacteristic] onReadRequest:' + buffer);
  console.log(buffer);
  callback(this.RESULT_SUCCESS, buffer);
}
The console.log(buffer) call outputs:
<Buffer 00 00 e8 03 01 00>
I would like to roll this up into the debug messages instead, specifically into this line:
debugFTMS('[SupportedPowerRangeCharacteristic] onReadRequest:' + buffer);
but the result I'm getting contains unprintable characters:
ftms [SupportedPowerRangeCharacteristic] onReadRequest:� +0ms
You can use util.inspect to format a buffer that way.
const util = require('util');
...
debugFTMS('[SupportedPowerRangeCharacteristic] onReadRequest:' + util.inspect(buffer));
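As an alternative (not part of the zwack code, just a sketch), buffer.toString('hex') gives a plain hex string if you prefer that over the <Buffer ...> form:
debugFTMS('[SupportedPowerRangeCharacteristic] onReadRequest: ' + buffer.toString('hex'));
// would log something like: ftms [SupportedPowerRangeCharacteristic] onReadRequest: 0000e8030100 +0ms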
I'm trying to implement a simple Node.js stream multiplexer/demultiplexer. While implementing the multiplexing mechanism, I noticed that the output of the multiplexer gets concatenated into a single chunk.
const { PassThrough, Transform } = require("stream");

class Mux extends PassThrough {
  constructor(options) {
    super(options);
  }

  input(id, options) {
    let encode = new Transform({
      transform(chunk, encoding, cb) {
        let buf = Buffer.alloc(chunk.length + 1);
        buf.writeUInt8(id, 0);
        chunk.copy(buf, 1);
        cb(null, buf);
      },
      ...options
    });
    encode.pipe(this);
    return encode;
  }
}

const mux = new Mux();

mux.on("readable", () => {
  console.log("mux >", mux.read());
});

const in1 = mux.input(1);
const in2 = mux.input(2);

in1.write(Buffer.alloc(3).fill(255));
in2.write(Buffer.alloc(3).fill(127));
Output looks like this: mux > <Buffer 01 ff ff ff 02 7f 7f 7f>.
I would have expected to receive two console.log outputs.
Expected output:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Can someone explain why I only get one "readable" event and a concatenated chunk from both inputs?
Use the data event and read from the callback:
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer.
mux.on("data", d => {
console.log("mux >", d)
});
This now yields:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Why readable is only emitted once is explained in the docs as well:
The 'readable' event will also be emitted once the end of the stream data has been reached but before the 'end' event is emitted.
data and readable behave differently. In your case, readable is not emitted until the end of the stream data has been reached, at which point read() returns all the buffered data at once; data, on the other hand, is emitted for each available chunk.
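If you do want to keep using 'readable', the usual pattern is to drain the stream in a loop inside the handler. A minimal sketch against the mux instance above (note that anything already sitting in the internal buffer may still come out as one merged chunk):
mux.on("readable", () => {
  let chunk;
  // keep reading until the internal buffer is empty
  while ((chunk = mux.read()) !== null) {
    console.log("mux >", chunk);
  }
});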
I am sending a valid JSON object constructed with ArduinoJSON to a Raspberry Pi running Node.js with the library https://github.com/natevw/node-nrf over an nRF24 radio link. The Node.js server receives the data seemingly without problem, but for some reason I can't JSON.parse() the object (or buffer?) without getting a SyntaxError: Unexpected token in JSON at position ...
For some reason the node-nrf library receives the data backwards, so I need to reverse the order of the bytes with Array.prototype.reverse.call(d); after that, console.log(d.toString()) looks fine. In this case, the console shows Got data: [{"key":"a1","value":150}]. At this point, the content of the buffer looks like: <Buffer 5b 7b 22 6b 65 79 22 3a 22 61 31 22 2c 22 76 61 6c 75 65 22 3a 31 35 30 7d 5d 00 00 00 00 00 00>. Those are the actual 32 bytes that the nRF24 payload contains, I guess.
But then, when the code gets to the JSON.parse() call, I get SyntaxError: Unexpected token in JSON at position 26. That is the position where my object data actually ends in the buffer.
I've also experimented with .toJSON() and JSON.stringify(), but can't actually get a proper object to use (i.e. obj.key, obj.value); it only gives me undefined properties. It seems to me the parsing fails when it reaches the end of the object. I've also tried to match the buffer size with the actual size of the message just to see if the parsing would succeed, to no avail!
I am probably mixing up the concepts of buffers, streams, pipes and objects... What am I doing wrong?
I need ideas (or fixes!)
Code running on the receiving end in node.js:
var nrf = NRF24.connect(spiDev, cePin, irqPin);
nrf.printDetails();
nrf.channel(0x4c).transmitPower('PA_MIN').dataRate('1Mbps').crcBytes(2).autoRetransmit({count: 15, delay: 4000}).begin(function () {
  var rx = nrf.openPipe('rx', pipes[0]);
  rx.on('data', d => {
    let obj = Array.prototype.reverse.call(d);
    try {
      console.log("Got data: ", d.toString());
      console.log(obj);
      obj = JSON.parse(obj);
      console.log(obj);
    } catch (err) {
      console.error(err);
    }
  });
});
I don't think the problem lies in how the JSON message is formed, but for reference, this is the code running on the Arduino:
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>
#include <ArduinoJson.h>

const uint64_t addresses[5] = {0x65646f4e32LL, 0x65646f4e31LL};
RF24 radio(7, 8);
char output[32];

void setup()
{
  Serial.begin(115200);
  radio.begin();
  radio.setAutoAck(true);
  radio.setDataRate(RF24_1MBPS);
  radio.enableDynamicPayloads();
  radio.setCRCLength(RF24_CRC_16);
  radio.setChannel(0x4c);
  radio.setPALevel(RF24_PA_MAX);
  radio.openWritingPipe(addresses[0]);
}

void loop()
{
  const int capacity = JSON_ARRAY_SIZE(2) + 2*JSON_OBJECT_SIZE(2);
  StaticJsonBuffer<capacity> jb;
  JsonArray& arr = jb.createArray();
  JsonObject& obj1 = jb.createObject();
  obj1["key"] = "a1";
  obj1["value"] = analogRead(A1);
  arr.add(obj1);
  arr.printTo(output);
  bool ok = radio.write(&output, sizeof(output));
  arr.printTo(Serial);
  Serial.print(ok);
  delay(1000);
}
Most likely you have NUL characters at the end of the string. JSON.parse will refuse to parse it.
let obj = '[{"key":"a1","value":150}]\x00\x00\x00\x00\x00\x00';
JSON.parse(obj);
Uncaught SyntaxError: Unexpected token in JSON at position 26
If you remove the NUL characters, parsing succeeds:
let obj = '[{"key":"a1","value":150}]\x00\x00\x00\x00\x00\x00';
obj = obj.replace(/\0/g, "");
JSON.parse(obj);
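Alternatively, you can trim the buffer before converting it to a string. A minimal sketch, reusing the d from the 'data' handler in the question and assuming the payload itself never contains a literal 0x00 byte:
const end = d.indexOf(0); // index of the first NUL byte, or -1 if there is none
const json = d.toString('utf8', 0, end === -1 ? d.length : end);
const obj = JSON.parse(json);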
Convert the buffer data into a string first, like this:
rx.on('data', d => {
  try {
    let obj = d.toString();
    console.log(obj);
    obj = JSON.parse(obj);
    console.log(obj);
  } catch (err) {
    console.error(err);
  }
});
In Node.js:
const buffer = Buffer.from('000000a6', 'hex');
console.log(buffer); // <Buffer 00 00 00 a6>
const bufferString = buffer.toString();
const newBuffer = Buffer.from(bufferString);
console.log(newBuffer); // <Buffer 00 00 00 ef bf bd>
Why, when I convert the buffer to a string and then convert the string back to a buffer, is the new buffer different from the original one?
I tried toString('hex'), toString('binary') and other encodings, like ascii, etc. All of these encodings changed the original buffer.
buffer.toString(encoding) uses the default encoding utf8, and Buffer.from(string, encoding) also uses the default encoding utf8, yet the result is still different.
How can I convert a buffer to a string, and convert it back to a buffer that exactly matches the original buffer?
PS: This question comes from wanting to send a request body as a buffer. I just send it to the server, but what the server receives is not what I sent.
PPS: The server is not under my control, so I'm not able to have it use Buffer.from(string, 'hex') to parse a request body sent as buffer.toString('hex').
There is no need to convert to a string with const bufferString = buffer.toString();
const buffer = Buffer.from('000000a6', 'hex');
console.log(buffer); // <Buffer 00 00 00 a6>
const bufferString = buffer;
const newBuffer = Buffer.from(bufferString);
console.log(newBuffer); // <Buffer 00 00 00 a6>
In other words, you're converting a buffer to a string and back to a buffer using the same encoding both times, and the buffer isn't the same? Here's a function to test whether a buffer will survive a round trip through a given encoding. This should tell you which encodings are likely to be problematic.
var f = (buf, enc) => Buffer.from(buf.toString(enc), enc).every((e, i) => e === buf[i]);

f(Buffer.from("hello world"), "utf16le"); // ==> returns true
f(Buffer.from("hello world"), "binary");  // ==> returns true
UTF-8 is the default encoding if you don't specify one, so the buffer in your question is very likely not valid UTF-8, or needs to be escaped in some other way to map it correctly to a UTF-8 string.
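For example, 0xa6 on its own is a UTF-8 continuation byte, so decoding it produces the replacement character U+FFFD, which re-encodes as ef bf bd (matching the output in the question). Encodings that map every byte value, such as latin1, hex or base64, round-trip losslessly; a small sketch:
const buffer = Buffer.from('000000a6', 'hex');

const s = buffer.toString('latin1'); // one character per byte, nothing is replaced
console.log(Buffer.from(s, 'latin1')); // <Buffer 00 00 00 a6>

console.log(Buffer.from(buffer.toString('hex'), 'hex'));       // <Buffer 00 00 00 a6>
console.log(Buffer.from(buffer.toString('base64'), 'base64')); // <Buffer 00 00 00 a6>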
My back-end server sends a binary string to the front end; the binary string is in responseText.
So I do this:
xhr2.open('GET', url, true);
xhr2.onload = function() {
  routeResponse = Buffer.from(xhr2.responseText, 'binary');

  // init
  Buf = '';

  // byte-by-byte output
  for (let i = 0; i < routeResponse.length; i++) {
    Buf += routeResponse.readUInt8(i).toString(16).toUpperCase();
    Buf += ' ';
  }
  console.log(Buf);
};
But the binary data in the log is different from the original cgi file.
Left: console.log(Buf) output / Right: hexadecimal values in the original file, e.g. getRoute.cgi
Oddly, certain values are output as 'FD' when the actual data is '8B', '8C' and so on.
The cgi file format is binary / little endian.
Why are certain bytes replaced with 'FD'?
Thank you.
I am having problems with my code architecture and am looking for advice. I am sending multiple read requests to a server via UDP and printing the read responses out. An example of the request and response is below. I am getting the response back as one large hex string starting at 0008. I need a way for the code to know how many addresses were requested and what data size was requested for each, and to take that into account when printing out the data. Since the data size can change, I cannot just split the string at fixed positions. I am not looking for actual code, just ideas on how one could tackle this.
Request
00 06 - Opcode
00 00 - Block #
02 - Count
34 97 00 20 - Address 1
04 00 - Data Size 1 (bytes)
30 97 00 20 - Address 2
01 00 - Data Size 2 (bytes)
Response - 00080001e603000009
00 08 - Opcode
00 01 - Block #
e6 03 00 00 - Data 1
09 - Data 2
What I am printing right now: e603000009
How I want it printed:
Address 1 = e6030000
Address 2 = 09 ...
Address 3 = 00 00
etc.
(it would know where each new piece of data starts from the data sizes and the number of addresses that were requested)
Part of the code where I am sending a read request and emitting the result to the HTML page:
app.post('/output3', function(req, res) {
  res.sendFile(__dirname + '/upload3.html');

  // Define the host and port values for UDP
  var HOST = '192.168.0.136';
  var PORT = 69;
  var io = require('socket.io')(http);

  // Multiple parameters
  // setInterval will send the message constantly.
  var client = dgram.createSocket('udp4');
  var counter = 0;

  // array for addresses
  var address = [];
  // array for sizes
  var size = [];

  // iterate through all addresses and convert to little endian
  for (var i = 0; i < req.body.address.length; i++) {
    var n = req.body.address[i];
    var s = n.toString(16).match(/.{1,2}/g);
    address[i] = s.reverse().join("").toString(16); // ==> "0x985c0020" (= 2556166176)
  }

  // iterate through all sizes, pad them to hex strings and convert to little endian
  for (var i = 0; i < req.body.size.length; i++) {
    function pad(number, length) {
      var my_string = '' + number;
      while (my_string.length < length) {
        my_string = '0' + my_string;
      }
      return my_string;
    }
    var n2 = pad(req.body.size[i], 4);
    var s2 = n2.toString(16).match(/.{1,2}/g);
    size[i] = s2.reverse().join("").toString(16);
  }

  // empty string to concatenate address and size together
  var x = '';
  for (var i = 0; i < req.body.address.length; i++) {
    x += address[i] + size[i];
  }
  console.log(req.body.size);
  var mrq = read(x);

  // Open listener to receive the initial message and print it to the webpage
  io.on('connection', function(socket) {
    var mrq = read(x);
    io.emit('mrq', mrq);
  });

  function read() {
    // Memory Read Request is a 16-bit word with a value of 6
    var message = '0006';
    // Block number is a 16-bit word that we will always set to 0
    message += '0000';
    // Count is variable, and calculated from the size of the parameter list
    message += '02';
    for (var i = 0; i < arguments.length; i++) {
      message += arguments[i];
    }
    return message;
  }

  var message = Buffer.from(mrq, 'hex');
  counter++;

  var loop = setInterval(function () {
    // Sends packets over UDP; setInterval keeps sending them at the given interval
    client.send(message, 0, message.length, PORT, HOST, function (err, bytes) {
      if (err) throw err;
    });
    if (counter === 1000000000000000) {
      clearInterval(loop);
    }
  }, 1 / 50);

  // Open listener to receive the response message and print it to the webpage
  io.on('connection', function(socket) {
    client.on('message', function(message, rinfo) {
      // hex string
      var temp = message.readUIntBE(0, 2);
      //console.log(message.toString(16));
      io.emit('temp', temp);
    });
  });
Showing your current code would help us answer this better, but in general the approach would be to use a write stream and push your string out in chunks rather than as one whole block.
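For the splitting itself, one idea is to walk the response buffer and slice off one field per requested address. A rough sketch with a hypothetical helper, assuming you keep an array of the requested byte counts (as plain numbers, in request order) alongside the addresses:
// sizes: requested byte counts per address, in request order
function splitResponse(message, sizes) {
  var fields = [];
  var offset = 4; // skip the 2-byte opcode and the 2-byte block number
  for (var i = 0; i < sizes.length; i++) {
    fields.push(message.slice(offset, offset + sizes[i]).toString('hex'));
    offset += sizes[i];
  }
  return fields;
}

// splitResponse(Buffer.from('00080001e603000009', 'hex'), [4, 1])
// ==> ['e6030000', '09']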