There is code I found here https://github.com/substack/stream-handbook which reads 3 bytes from a stream, and I do not understand how it works.
process.stdin.on('readable', function() {
var buf = process.stdin.read(3);
console.log(buf);
process.stdin.read(0);
});
It is called like this:
(echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume.js
It returns:
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>
First of all, why do I need this .read(0) thing? Doesn't the stream have a buffer where the rest of the data is stored until I request it with .read(size)? Without .read(0) it'll print
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
Why?
The second thing is these sleep 1 instructions. If I call the script without them
(echo abc; echo def; echo ghi) | node consume.js
It'll print
<Buffer 61 62 63>
<Buffer 0a 64 65>
no matter whether I use .read(0) or not. I don't understand this at all. What logic is used here to produce such a result?
I am not sure what exactly the author of https://github.com/substack/stream-handbook was trying to show with the read(0) approach, but IMHO this is the correct approach:
process.stdin.on('readable', function () {
let buf;
// Every time the stream becomes readable (which can happen many times),
// read all available data from its internal buffer in chunks of whatever size you need.
while (null !== (buf = process.stdin.read(3))) {
console.dir(buf);
}
});
You can change the chunk size and pass the input either with or without sleep...
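For example, with the same piped input from the question, this loop prints all four 3-byte chunks:
(echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>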
I happened to be studying the Node.js stream module recently. Here are some comments inside the Readable.prototype.read function:
// if we're doing read(0) to trigger a readable event, but we
// already have a bunch of data in the buffer, then just trigger
// the 'readable' event and move on.
This says that after .read(0) is called, the stream will simply trigger (using process.nextTick) another readable event, as long as the stream has not ended.
function emitReadable(stream) {
// ...
process.nextTick(emitReadable_, stream);
// ...
}
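Here is a minimal sketch, based on the behaviour described above (not on the original author's code), that makes the re-emit visible by counting the readable events:
let count = 0;
process.stdin.on('readable', function () {
  count += 1;
  let buf = process.stdin.read(3);
  console.log('readable event #' + count, buf);
  // read(0) reads nothing, but if data is still buffered it asks the
  // stream to emit another 'readable' on the next tick, so this handler
  // runs again and can pick up the leftover bytes.
  process.stdin.read(0);
});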
Related
I'm trying to implement a simple node.js stream multiplexer/demultiplexer.
Currently, while implementing the multiplexing mechanism, I noticed that the output of the multiplexer gets concatenated into a single chunk.
const { PassThrough, Transform } = require("stream");
class Mux extends PassThrough {
constructor(options) {
super(options);
}
input(id, options) {
let encode = new Transform({
transform(chunk, encoding, cb) {
let buf = Buffer.alloc(chunk.length + 1);
buf.writeUInt8(id, 0);
chunk.copy(buf, 1);
cb(null, buf);
},
...options
});
encode.pipe(this);
return encode;
};
};
const mux = new Mux();
mux.on("readable", () => {
console.log("mux >", mux.read())
});
const in1 = mux.input(1);
const in2 = mux.input(2);
in1.write(Buffer.alloc(3).fill(255));
in2.write(Buffer.alloc(3).fill(127));
The output looks like this: mux > <Buffer 01 ff ff ff 02 7f 7f 7f>.
I would have thought that I would receive two console.log outputs.
Expected output:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Can someone explain why I only get one "readable" event and a concatenated chunk from both inputs?
Use the data event and read from the callback:
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer.
mux.on("data", d => {
console.log("mux >", d)
});
This now yields:
mux > <Buffer 01 ff ff ff>
mux > <Buffer 02 7f 7f 7f>
Why readable is only emitted once is explained in the docs as well:
The 'readable' event will also be emitted once the end of the stream data has been reached but before the 'end' event is emitted.
data and readable behave differently. In your case, readable is not emitted until the end of the stream data has been reached, at which point a single read returns all the buffered data at once. data is emitted for each available chunk.
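If you do want to stick with readable, one option is to drain the internal buffer in fixed-size frames; a sketch, assuming the 1-byte-id plus 3-byte-payload framing used in this example (chunks that are already buffered get merged, so the framing has to come from the protocol itself):
mux.on("readable", () => {
  let frame;
  // 4 bytes per frame: 1 id byte + 3 payload bytes.
  // Anything already merged in the internal buffer is split back apart.
  while ((frame = mux.read(4)) !== null) {
    console.log("mux >", frame);
  }
});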
I'm on macOS with Node v12, and I'm using a child process (exec/spawn/execSync/spawnSync) to execute a shell command which can return more than 8192 characters. However, back in my Node.js method that invokes the child process, I only ever get up to 8192 characters and no more than that. (8192 seems to be the default pool size for a buffer.)
I've tried increasing the maxBuffer size in the options to anything larger than 8192, but it does not affect anything.
I've also tried running the same command with exec, spawn, execSync and spawnSync and they all behave the same way. Same result.
When I run:
exec(shellCommand, { encoding: "buffer", maxBuffer: 16384 }, (error, stdout, stderr) => {
console.log('stdout--> length: ', stdout.length, '<--->', stdout)
});
I get:
stdout--> length: 8192 <---> <Buffer 7b 22 72 65 73 75 6c 74 22 3a 5b 7b 22 70 72 6f 6a 65 63 74 4e 61 6d 65 22 3a 22 73 65 65 64 73 22 2c 22 74 65 6d 70 6c 61 74 65 4e 61 6d 65 22 3a 22 ... 8142 more bytes>
I know that the data coming back is larger than 8192 because when I run the shell command in a shell and check the length it is greater than 8192.
Also, and this is the puzzling bit, when I set the child process' stdio option to 'inherit' such as:
execSync(shellCommand, { encoding: "buffer", stdio:"inherit" });
(which says to use the parent's stdout, in my case the Node.js console)
I see the full response back in the console where NodeJS is running.
I have also read a similar issue on github but it hasn't really helped.
How do I go about executing a shell command in NodeJS and getting the full response back?
Try this:
const { spawn } = require('child_process');
const cmd = spawn('command', ['arg1', 'arg2']);
let bufferArray = [];
/* cmd.stdout.setEncoding('utf8'); sets the encoding;
   the default encoding is buffer
*/
cmd.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
bufferArray.push(data)
});
cmd.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
cmd.on('close', (code) => {
console.log(`child process exited with code ${code}`);
let dataBuffer = Buffer.concat(bufferArray);
console.log(dataBuffer.toString())
});
This could be useful: Node.js spawn child process and get terminal output live
It turns out that the shell command had a process.exit() statement that was being called before the stdout buffer was fully flushed.
So stdout will send 8192 characters, and since it's asynchronous the process will go on to the next statements, one of them being process.exit(), which kills the process before the rest of the stdout buffer is flushed.
TL;DR: exec/spawn works correctly; the shell command exits before stdout is fully flushed.
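Since the command being run is apparently itself a Node script (the process.exit() call suggests so), one way to avoid the truncation is to let it exit on its own once stdout has drained, for example by setting the exit code instead of exiting hard; a sketch:
// In the child script:
process.exitCode = 0; // instead of calling process.exit(); the process exits
                      // only after pending stdout writes have flushed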
The default buffer size for child_process.exec is 1MB, so try not passing a maxBuffer; however, it would be much better to use child_process.spawn so that you get the output as a stream.
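A sketch of the first suggestion, using the same shellCommand placeholder from the question and relying on exec's default maxBuffer:
const { exec } = require('child_process');

exec(shellCommand, { encoding: "buffer" }, (error, stdout, stderr) => {
  // No maxBuffer option: exec falls back to its default limit,
  // which is larger than the 16384 bytes passed in the question.
  console.log('stdout--> length:', stdout.length);
});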
I've got an Arduino sending very basic messages:
Serial.print('R');
Serial.println(1);
or
Serial.print('R');
Serial.println(2);
I'm trying to read each line using node.js and the SerialPort module but I get inconsistent results:
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52 31 0d 0a> R1
Data: <Buffer 52 32 0d 0a> R2
Data: <Buffer 52> R
Data: <Buffer 31 0d 0a> 1
Data: <Buffer 52 32 0d 0a> R2
And here's how I've tried to parse:
this.port = new SerialPort(portName, {
baudRate: baudRate,
autoOpen:false,
flowControl: false,
parser: new Readline("\r\n")
});
this.port.open(function (err) {
if (err) {
return console.log('Error opening port: ', err.message);
}
console.log("port open!");
});
this.port.on('error', function(err) {
console.log('Error: ', err.message);
})
this.port.on('open', function() {
console.log("open event called");
});
this.port.on('data', function (data) {
console.log('Data:', data,data.toString('utf8'));
});
In short: I'm expecting the R1, R2 messages to come in consistently, not split up like this:
Data: <Buffer 52> R
Data: <Buffer 31 0d 0a> 1
I'm passing ("\r\n" / 0x0d 0x0a) to Readline. What am I missing?
How can I get consistent newline parsing using SerialPort in Node?
I think that the solution to your problem requires binding an event on the parser object, while you're currently listening on the port object. Data that arrives through the port is not always terminated by 0x0d 0x0a (*). Those two bytes are a line terminator signal for the Readline parser only.
Thus, maybe you should write this listener in your code instead:
// I'm not actually sure where parser lives; I'm not
// in a position to try it myself...
this.port.parser.on('data', function (data) {
console.log('Data:', data,data.toString('utf8'));
});
Unfortunately, I don't have any suggestion to make the syntax more elegant, and by my standards this solution is more elegant than creating a function that redirects bindings for you. It depends on your application, though, and at the moment I don't have enough information to suggest a possibly better solution.
(*) In the first (wrong) comment, which I immediately deleted, I asked why you used both bytes 0x0d 0x0a (\r\n) as the line terminator and not simply 0x0a (\n), but the Serial.println method actually writes both bytes by default.
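For what it's worth, in more recent serialport releases the parser is a separate stream that you pipe the port into, so the binding described above would look roughly like this (a sketch; the exact import path depends on the serialport version you have installed):
const SerialPort = require('serialport');
const Readline = require('@serialport/parser-readline');

const port = new SerialPort(portName, { baudRate: baudRate, autoOpen: false });
// The port itself still emits raw chunks as they arrive;
// the parser re-frames them on "\r\n" and emits whole lines.
const parser = port.pipe(new Readline({ delimiter: '\r\n' }));

parser.on('data', function (line) {
  console.log('Line:', line); // "R1" or "R2", without the trailing \r\n
});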
I'm new to this topic but still want to know how to approach this.
I want to build a system that uses messaging to perform CRUD operations on a Node.js server.
I know about REST, but I can't figure out how to translate it to messaging with RabbitMQ.
edit:
I think I have to make my question a bit clearer:
What I want to do is send a message, produced by my Java client using AMQP and RabbitMQ, to a Node.js server. The message contains a JSON object.
Some of the data should be sent to the database (MySQL).
My code looks some kind like this(Java Producer):
JSONObject obj = new JSONObject();
obj.put("fuellstand", behaelter_1.getFuellstand());
obj.put("behaelter", behaelter_1.getId());
obj.put("timestamp", currentTimestamp);
//String message = behaelter_1.getFuellstand()+" "+ behaelter_1.getId()+" "+currentTimestamp;
String message = obj.toJSONString();
channel.basicPublish("", QUEUE_NAME, null, message.getBytes("UTF-8"));
//channel.basicPublish("",QUEUE_NAME , , arg3);
System.out.println(message+" "+message.getBytes("UTF-8"));
And this is how my Node.js server tries to consume it:
amqp.connect('amqp://localhost', function (err, conn) {
if (err) {
console.log("fehler mit dem amqp host!")
throw(err);
} else {
conn.createChannel(function (err, ch) {
if (err) {
console.log("failing to createChanel")
throw(err);
} else {
var q = 'alerts';
ch.assertQueue(q, {durable: false});
console.log(" [*] Waiting for something in %s. CTRL+C to end", q);
ch.consume(q, function (msg) {
console.log(msg);
}, {noAck: true});
}
});
}
});
The console returns the following:
{ fields: { consumerTag: 'amq.ctag-G3vsZRIGRZJT1qntZ1hTuw',
deliveryTag: 1,
redelivered: false,
exchange: '',
routingKey: 'alerts' },properties: {},content: <Buffer 7b 22 66 75 65 6c 6c 73 74 61 6e 64 22 3a 32 32 2c 22 62 65 68 61 65 6c 74 65 72 22 3a 31 2c 22 74 69 6d 65 73 74 61 6d 70 22 3a 32 30 31 36 2d 31 32 ... > }
My only problem at this point is decoding the JSON I build. I don't get why I can't decode the buffer. Or am I getting something wrong?
As it turns out, I had to use the following code to access the content of the message:
msg.content.toString()
Now I only need to parse it into JSON to access the individual JSON attributes.
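Putting that together in the consumer, a sketch (assuming the message body is valid JSON):
ch.consume(q, function (msg) {
  // msg.content is a Buffer; turn it into a string and parse the JSON
  const payload = JSON.parse(msg.content.toString());
  console.log(payload.fuellstand, payload.behaelter, payload.timestamp);
}, { noAck: true });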
RabbitMQ is not a database and does not support CRUD operations
The following is an excerpt from the stream-handbook by James Halliday (aka substack):
Here's an example of using .read(n) to buffer stdin into 3-byte
chunks:
process.stdin.on('readable', function () {
var buf = process.stdin.read(3);
console.dir(buf);
});
Running this example gives us incomplete data!
$ (echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume1.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
This is because there is extra data left in internal buffers and we
need to give node a "kick" to tell it that we are interested in more
data past the 3 bytes that we've already read. A simple .read(0) will
do this:
process.stdin.on('readable', function () {
var buf = process.stdin.read(3);
console.dir(buf);
process.stdin.read(0);
});
Now our code works as expected in 3-byte chunks!
$ (echo abc; sleep 1; echo def; sleep 1; echo ghi) | node consume2.js
<Buffer 61 62 63>
<Buffer 0a 64 65>
<Buffer 66 0a 67>
<Buffer 68 69 0a>
When I change the example to 2-byte read chunks, it breaks - presumably because the internal buffer still has data queued up. But that wouldn't happen if read(0) kicked off a 'readable' event each time it was called. Looks like it only happens after all the input is finished.
process.stdin.on('readable', function () {
var buf = process.stdin.read(2);
console.dir(buf);
process.stdin.read(0);
});
What does this code do under the hood? It seems like read(0) queues another 'readable' event, but only at the end of input. I tried reading through the readable stream source, but it's pretty heavy going. Does anyone know how this example works?
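One way to probe this is to instrument the handler and log when 'readable' fires relative to the reads; a sketch (readableLength is only available in newer Node versions):
let calls = 0;
process.stdin.on('readable', function () {
  const n = ++calls;
  const buf = process.stdin.read(2);
  // Show which invocation this is, what was read, and how much is still queued.
  console.dir({ call: n, chunk: buf, stillBuffered: process.stdin.readableLength });
  process.stdin.read(0);
});
process.stdin.on('end', function () {
  console.log('stream ended after ' + calls + ' readable events');
});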