I'm working with an Enfora MT4000 device. The device sends data to a TCP or UDP server when a certain event occurs. Data can be sent in binary or ASCII format, but I need to use binary.
The Enfora device is configured with AT commands like this:
AT$EVENT=14,0,7,1,1
AT$EVENT=14,3,52,14,1578098
When I configure the device for ASCII, the server receives data in this format:
r 13 0 0 $GPRMC,211533.00,A,3321.856934,S,07040.240234,W,0.0,0.0,120514,2.3,W,A*2B
But when I use binary, the data looks like this:
$2K� �Dk����a�H
Does anyone know how to handle binary data from a socket in Node.js? I'm trying to do this with a very simple script:
// server
require('net').createServer(function (socket) {
    console.log("connected");
    socket.setEncoding(null);
    socket.on('data', function (data) {
        console.log(data.toString());
    });
}).listen(3041);
Thanks.
The data argument in your 'data' event handler is already a Buffer. By calling data.toString() you are converting that Buffer to a (UTF-8 by default) string. You're probably better off keeping it as a Buffer and using that if you need the original binary data.
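For example, fixed-size fields can be read straight off the Buffer with its typed read methods. Here is a minimal sketch; the offsets and field names are made-up placeholders, since the real layout depends on the Enfora binary event format:

require('net').createServer(function (socket) {
    socket.on('data', function (data) {
        // `data` is a Buffer; dump it as hex while debugging
        console.log(data.toString('hex'));

        // Hypothetical offsets -- check the Enfora binary protocol
        // documentation for the actual field layout.
        var messageType = data.readUInt8(0);   // one byte
        var eventId = data.readUInt16BE(1);    // two bytes, big-endian
        console.log(messageType, eventId);
    });
}).listen(3041);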
I am sending data from a Google Chrome extension to a native application developed in C#. This works when I send short text messages, but not when the data is binary (JSON encoded).
For example, to send a command, I am calling, from the extension:
port.postMessage({ command: 'Rates', info: request.setRates });
where port is the native messaging application connection and request.setRates is {setRates: "framesPerSecond=15&audioBitrate=22050"}
That is received perfectly through STDIN in the C# application.
However, if I call this instruction:
port.postMessage(request.binaryStream);
Where request.binaryStream is {"data":[26,69,223,163,163,66,134,129,1,66,247,129…122,235,208,2,56,64,163],"contentType":"x-media"}.
That is received badly in the native application. The data length is about 77 KB, and in that case I receive things like: 0,215,214,171,175,125,107,235,95,94,250,215,215,190,181,........, which is of course an invalid JSON string. It seems that some kind of buffer overrun is occurring.
How can this be done?
EDIT:
My latest attempt is to base64-encode the array:
mediaRecorder.ondataavailable = function (e) {
    e.data.arrayBuffer().then(buffer => {
        chrome.runtime.sendMessage(appId, { binaryStream: Uint8ToBase64(new Uint8Array(buffer)) }, function (response) {
            console.log(response);
        });
        stopStream();
    });
};
With that, the following is sent to the native application (one message immediately after the other):
{command: "Binary", data: "GkXfo6NChoEBQveBAULygQRC84EIQoKIbWF0cm9za2FCh4EEQo…ddddddddddddddddddddddddddddddddddddddddddddeow=="}
{command: "Binary", data: "QgaBAACA+4O2l3/8ZtVmH2JXfcZSfs+ulepr+aF2U5d+kW0SDu…fgBgDv16uSH4AY6Q9YA4dcvrl9cvrl9cvrl9coHrr0AI4QA=="}
And this is received in the native application:
{"command":"Binary","data":"QgaBAACA+4O2l3/8ZtVmH2JXfcZSfs+ulepr+aF2U5d+kW0SDuRqP9n9baILWx2vK/6vraUaEqNo9Tf7htznm8o72wjRTzgjZFyfSf+k4BZDp9luH6Un1JWAhbNem.........ddddddddddddddddddddddddddddddddddddddddddeow=="}
Notice that the received data is the second message sent, but the end of the received data is the end of the first message sent, so the buffer overrun theory may be right. Any solution to this? I had this same program working using TCP sockets, but now I need to use Native Messaging. Is the STDIN buffer very small?
Jaime
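One thing worth checking: Chrome's native messaging protocol frames every message with a 32-bit length header in native byte order, followed by that many bytes of JSON, and a single read on STDIN is not guaranteed to return a whole frame. A 77 KB message will almost certainly arrive across several reads, so the host has to buffer and honor the length prefix. Here is a sketch of that framing logic, written in Node.js for brevity (the same idea applies to a C# host; handleMessage is a hypothetical callback):

let pending = Buffer.alloc(0);

process.stdin.on('data', (chunk) => {
    pending = Buffer.concat([pending, chunk]);
    // One 'data' event may hold a partial frame or several frames.
    while (pending.length >= 4) {
        const len = pending.readUInt32LE(0);  // assumes a little-endian host
        if (pending.length < 4 + len) break;  // frame incomplete, keep buffering
        const message = JSON.parse(pending.slice(4, 4 + len).toString('utf8'));
        pending = pending.slice(4 + len);
        handleMessage(message);               // hypothetical handler
    }
});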
I have multiple Node.js servers running on my backend. One is an API server, which can accept image files from a 3rd party. The image files are streamed to the stdin of an ImageMagick process, and then the stdout of the ImageMagick process is streamed to a Node.js TCP server where the file is ultimately saved locally. The TCP server then needs to send a response to the API server after the file is successfully saved, so I need some way for the TCP server to know when it has the entire file (i.e. I can't simply close the socket on the API server after the file is sent).
One solution I could use is to save the stdout of the ImageMagick process to a temporary file on the API server, so I can get the length of the full file before I send it and embed it at the beginning of the stream. Writing to disk puts a bit of a knot in the system, though.
Is it acceptable to write a temp file to disk for the purpose of getting the length of the file, or is there a better / more efficient way to solve this problem?
Thanks
In case anybody else with a similar problem stumbles upon this, I found an alternative solution, which avoids the need to specify the file size altogether.
Create the TCP server using the {allowHalfOpen: true} option. The client can send the file to the TCP server, and then simply call the "end" method on the socket to signify that no more data will be written by the client (the socket is still readable by the client). With the "allowHalfOpen" option set on the server, the server can simply listen for the "end" event (which signifies that all of the data has been received), with the writable side of the server still open (allowing the server to send a response back to the client).
Note that the "allowHalfOpen" option defaults to false. If this option isn't set, the server automatically closes the socket when the writable side of the client is closed.
e.g.
SERVER
const fs = require('fs');
const net = require('net');

const server = net.createServer({ allowHalfOpen: true });

server.on('connection', (socket) => {
    const writeStream = fs.createWriteStream('myFile.png');
    socket.on('data', (chunk) => {
        writeStream.write(chunk);
    });
    socket.on('end', () => {
        writeStream.end();
        socket.end('File Written To Disk'); // send response and close socket
    });
});

server.listen(8000);
CLIENT
const fs = require('fs');
const net = require('net');

const socket = net.connect(8000);

fs.createReadStream('myFile.png').pipe(socket);

socket.on('data', (chunk) => {
    // response from TCP server
    console.log(chunk.toString()); // File Written To Disk
});

socket.on('end', () => {
    // server closed socket
});
I do not know if this is relevant, but ImageMagick's miff: format is a streaming format, so you can write the dimensions of the image to a text file while streaming the image out again via miff:.
convert rose: miff:- | convert - -format "%wx%h" -write info:tmp.txt miff:-
Here I use convert rose: miff:- to simulate your input stream.
Then I pipe it to the next convert, which reads the input stream, writes the WxH information to the text file tmp.txt (which you can read afterwards), and produces a new miff:- output stream.
You could use NetPBM format in place of miff, since it also is a streaming format.
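In Node.js, wiring that probe into the existing pipeline could look something like the sketch below; inputStream and tcpSocket are placeholders for whatever streams the API server already has:

const { spawn } = require('child_process');

// Pass the incoming image through a `convert` probe that records WxH
// in tmp.txt while re-emitting the image as a miff: stream.
const probe = spawn('convert', ['-', '-format', '%wx%h', '-write', 'info:tmp.txt', 'miff:-']);

inputStream.pipe(probe.stdin);   // placeholder: stream from the 3rd party
probe.stdout.pipe(tcpSocket);    // placeholder: socket to the TCP server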
I have a Node.js (TCP) server, i.e.
net.createServer
And I have a Node.js client. I've created a module.exports with a method:
this.connect = function() {
    var client = new net.Socket();
    client.setTimeout(5000);
    client.connect(settings.collectorport, settings.collectorhost, function() {
        console.log('connected');
    });
....
And another method like:
this.write = function(data, client) {
    client.write(some-string);
}
From another Node.js file, I invoke these methods inside a loop:
while (something) {
    agent.write(data, client);
}
What happens is that sometimes the server receives, or the client sends, two "some-string" messages joined together. I mean, if I log what the server receives, sometimes I see a single "message" and sometimes two or three "messages" merged into one.
What could be causing this?
Sorry for my English...
Assuming the number of messages you send matches the number of messages you receive/expect on the other end, what you're describing is basically how TCP works.
TCP is a stream and so you can't/shouldn't make any assumptions about "message boundaries." You could receive one byte or 10 kilobytes at a time, it just depends on the networking stack and some other factors, so you should be prepared to handle any amount of data that comes in (e.g. create a protocol, perhaps using a delimiter character, to sort out individual "messages").
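For instance, a newline-delimited protocol needs only a few lines of buffering on the receiving side. A minimal sketch, assuming the messages themselves never contain the delimiter:

const net = require('net');

net.createServer((socket) => {
    let buffered = '';
    socket.on('data', (chunk) => {
        buffered += chunk.toString('utf8');
        // Split out every complete, newline-terminated message and
        // keep the trailing partial message in the buffer.
        const parts = buffered.split('\n');
        buffered = parts.pop();
        parts.forEach((message) => {
            console.log('message:', message);
        });
    });
}).listen(3000);

The sender then appends the delimiter to each write, e.g. client.write(someString + '\n').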
I want to read a DNS packet caught by a UDP server's on('message') event. How can I read its fixed-size fields? I want to read every field of the DNS packet separately, by size, i.e. read a specific number of bits at a time from the Node.js Buffer object.
var dgram = require('dgram');

var s = dgram.createSocket('udp4');
s.bind(53, function() {
});
s.on('message', function(msg, rinfo) {
    console.log("Length = " + msg.length);
    console.log(msg.toString('binary'));
    console.log(msg);
    console.log("-----------------------------------------------------------------------");
});
How can I get at each field's data using the Buffer passed to the on('message') handler?
Thanks.
Please check https://www.npmjs.org/package/native-dns-packet
It provides exactly what you need:
Packet.parse(buffer) returns an instance of Packet
Packet.write(buffer, packet) writes the given packet into the buffer, truncating where appropriate
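If you would rather parse the fixed-size part yourself, the first 12 bytes of every DNS message follow the header layout from RFC 1035 (section 4.1.1): six big-endian 16-bit fields, which can be read directly with Buffer methods. A minimal sketch:

var dgram = require('dgram');
var s = dgram.createSocket('udp4');

s.on('message', function (msg) {
    // The 12-byte DNS header: six big-endian 16-bit fields.
    var header = {
        id:      msg.readUInt16BE(0),   // query identifier
        flags:   msg.readUInt16BE(2),   // QR, OPCODE, AA, TC, RD, RA, RCODE
        qdcount: msg.readUInt16BE(4),   // number of questions
        ancount: msg.readUInt16BE(6),   // number of answer records
        nscount: msg.readUInt16BE(8),   // number of authority records
        arcount: msg.readUInt16BE(10)   // number of additional records
    };
    // Sub-byte fields come out of `flags` with shifts and masks,
    // e.g. QR is the top bit and OPCODE the next four bits.
    var qr = (header.flags >> 15) & 0x1;
    var opcode = (header.flags >> 11) & 0xf;
    console.log(header, { qr: qr, opcode: opcode });
});

s.bind(53);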
I have some code which I can't seem to fix. It looks as follows:
var childProcess = require('child_process');
var spawn = childProcess.spawn;

var child = spawn('./simulator', []);

child.stdout.on('data', function(data) {
    console.log(data);
});
This is all at the backend of my web application, which runs a specific type of simulation. The simulator executable is a C program that runs a loop, waiting to be passed data via its standard input. When the inputs for the simulation come in (i.e. from the client), I parse them and then write data to the child process's stdin as follows:
child.stdin.write(INPUTS);
Now the data coming back is 40,000 bytes, give or take, but it seems to be getting broken into chunks of 8,192 bytes. I've tried adjusting the standard output buffer of the C program, but that doesn't fix it. Is there a limit on the size of the 'data' event imposed by Node.js? I need the data to come back as one chunk.
The buffer chunk sizes are applied in Node; nothing you do outside of Node will solve the problem, and there is no way to get what you want from Node without a little extra work in your messaging protocol. Any message larger than the chunk size will be chunked. There are two ways you can handle this issue:
1. If you know the total output size before you start to stream out of C, prepend the message length to the data so the Node process knows how many chunks to pull before the message is complete.
2. Pick a special character you can append to each message the C program sends. When Node sees that character, it knows the message has ended (see the sketch after this list).
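For option 2, if the C program terminates each message with a newline, Node's built-in readline module can do the splitting. A minimal sketch:

const readline = require('readline');

// Emits one 'line' event per complete, newline-terminated message,
// regardless of how child.stdout was chunked in transit.
const rl = readline.createInterface({ input: child.stdout });

rl.on('line', (message) => {
    console.log('complete message:', message);
});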
If you are dealing with IO in a web application, you really want to stick with the async methods. You need something like the following (untested); there is a good sample of how to consume the Stream API in the docs.
var data = '';

child.stdout.on('data', function(chunk) {
    data += chunk;
});

child.stdout.on('end', function() {
    // do something with var data
});
I ran into the same problem. I tried many different things and was starting to get annoyed. I tried prepending and appending special characters; maybe I was being stupid, but I just couldn't get it right.
I ran into a module called linerstream, which basically parses every chunk until it sees an EOF. You can use it like this:
var Linerstream = require('linerstream');

child.stdout.pipe(new Linerstream()).on('data', (data) => {
    // data here is complete and not chunked
});
The important part is that you do have to write the data to stdout as lines, each ending with a newline; otherwise linerstream doesn't know where the message ends.
I can say this worked for me. Hopefully it helps other people.
ppejovic's solution works, but I prefer concat-stream.
var concat = require('concat-stream');

child.stdout.pipe(concat(function(data) {
    // all your data, ready to be used.
}));
There are a number of good stream helpers worth looking into based on your problem area. Take a look at substack's stream-handbook.