Communication between child processes in Node.js

I'm trying to make a program in Node.js that creates two child processes using the fork() method of child_process. The processes are as follows:
Father.js
Son1.js
Son2.js
I want to transfer data between the two child processes directly, not between the father and the children. Here is a diagram of what I'm trying to do:
[diagram: communication between child processes]
I tried with the following code, but it did not work for me.
In the father.js code, I'm creating the child processes as follows:
const cp = require("child_process");
var son1 = cp.fork(`${__dirname}/son1.js`);
var son2 = cp.fork(`${__dirname}/son2.js`);
console.log("father sending message to son1..");
son1.send({msg:'Hi son1',br:son2});
console.log("father sending message to son2..");
son2.send({msg:'Hi son2',br:son1});
The son1.js code:
var brother = null;
process.on('message', function(json) {
    console.log('message from father in son1.js:', json.msg);
    brother = json.br;
    brother.send("hello I'm son1.js");
});
And the son2.js code:
var brother = null;
process.on('message', function(json) {
    console.log('message from father in son2.js:', json.msg);
    brother = json.br;
    brother.send("hello I'm son2.js");
});
How can I send and receive messages from son1.js to son2.js and vice versa without sending messages to father.js?

Here's what you can do:
Create an IPC server on the parent linked to a socket file.
Open a connection to the server (still within the parent), creating a socket pair.
Send the server's socket to one child, send the client's socket to the other child.
parent.js:
const net = require('net');
const cp = require('child_process');

let u_proc_1 = cp.fork(__dirname + '/child.js', ['#1']);
let u_proc_2 = cp.fork(__dirname + '/child.js', ['#2']);

// create IPC server on parent
let d_server = net.createServer((d_socket_2) => {
    // send server socket to #2
    u_proc_2.send('socket', d_socket_2);
});

// create socket file
let p_socket = __dirname + '/sibling.sock';

// bind server to socket file
d_server.listen(p_socket);

// create client socket; this also triggers creation of a server socket
let d_socket_1 = net.connect(p_socket, () => {
    // have #1 send to #2
    u_proc_1.send('hey!');
    // have #2 send to #1
    u_proc_2.send('hello');
});

// send client socket to #1
u_proc_1.send('socket', d_socket_1);
child.js:
const name = process.argv[2];

let d_socket_sibling;

process.on('message', (s_action, d_socket_msg) => {
    // parent is sending a socket
    if ('socket' === s_action) {
        console.log(name + ' now has a socket');

        // save socket to variable for later use
        d_socket_sibling = d_socket_msg;

        // receive data from sibling
        d_socket_sibling.on('data', (s_data) => {
            console.log(name + ' received: ' + s_data);
        });
    }
    // otherwise, parent wants me to send message to sibling
    else {
        console.log(name + ' is sending: ' + s_action);

        // send data to sibling
        d_socket_sibling.write(s_action);
    }
});
Output:
#2 now has a socket
#1 now has a socket
#2 is sending: hello
#1 is sending: hey!
#1 received: hello
#2 received: hey!
The result is a direct, two-way communication channel between the two child processes; the parent only mediates its creation. Keep in mind that you will have to clean up the socket file somehow (e.g., delete it on startup or before closing the parent).
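One way to handle that cleanup, as a minimal sketch (it reuses the p_socket path from parent.js above; the stale-file check should run before d_server.listen()):
const fs = require('fs');
// remove a stale socket file left over from a previous run, before listening
if (fs.existsSync(p_socket)) fs.unlinkSync(p_socket);
// remove the socket file again when the parent exits
process.on('exit', () => {
    try { fs.unlinkSync(p_socket); } catch (e) { /* already gone */ }
});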

You would have to open another communication channel, such as a local socket (TCP, UDP, or a Unix domain socket) or a third-party service (e.g. Redis).
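For example, a minimal UDP sketch (the port numbers are illustrative): each child binds its own port and sends to the other's. In son1.js:
const dgram = require('dgram');
const sock = dgram.createSocket('udp4');
// son1 listens on 41234; son2 is assumed to mirror this with the ports swapped
sock.on('message', (msg) => console.log('son1 received: ' + msg));
sock.bind(41234, () => {
    sock.send('hello from son1', 41235, '127.0.0.1');
});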

To send data to a child process you can use this:
const fork = require('child_process').fork('./son1.js')
fork.send('message')
son1.js:
process.on('message', message => {
    console.log(message)
    process.exit()
})
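For completeness, the child can reply to the parent over the same IPC channel with process.send(); a minimal sketch:
// in the parent, after forking:
fork.on('message', reply => console.log('child replied:', reply))
// in son1.js, instead of exiting immediately:
process.on('message', message => {
    process.send('got: ' + message)
})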

Related

How to share a TCP socket object between parent and (forked) child?

I have an application in which my TCP server module (parent) listens for 'connection' events and receives some data on the created socket to perform a handshake with the remote client. Once the handshake is performed, the server needs to send the socket object to a forked child, which will also send and receive data on the socket, do some work, and finally send a result to the parent and be killed. For some reasons, I need to keep the socket object in the parent for further data processing not performed in the child, after the child has finished.
I've managed to send the socket to the child using the subprocess.send() method but, this way, the socket handle becomes null in the parent. I tried setting the keepOpen option to true and it almost worked, since I can send the socket and still work with it in the parent, but it seems not to work properly, because incoming data is not always received by the child's 'data' event listener.
I also tried to removeListener for the 'data' event from the parent prior to sending the socket to the child, but this made no difference; data is still being lost at some point on some occasions (on others it is correctly received after an unexpected delay...). This code extract illustrates what I'm trying to do:
const net = require('net');
const server = net.createServer();
const cp = require('child_process');

server.on('connection', (socket) => {
    socket.on('data', (data) => {
        // Perform handshake
        const child = cp.fork('child.js');
        child.on('message', (result) => {
            console.log('CHILD finished processing: ', result);
            child.kill('SIGHUP');
            // Do more stuff with socket
        });
        child.send('socket', socket);
        // (At this point, socket handle is null)
    });
});
server.listen(PORT)
I'm new to Node.js, so I assume there might be errors in the code. Thanks.
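For reference, the keepOpen behaviour mentioned above corresponds to the options argument of subprocess.send(); a minimal sketch of handing the socket over while keeping it usable in the parent (whether the child then sees every 'data' event reliably is exactly what this question is about):
// keep the socket open in the parent after passing it to the child
child.send('socket', socket, { keepOpen: true }, (err) => {
    if (err) console.error('could not send socket to child: ' + err.message);
});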

Node-Red: Create server and share input

I'm trying to create a new node for Node-RED. Basically it is a UDP listening socket that shall be established via a config node and which shall pass all incoming messages to dedicated nodes for processing.
This is the basic what I have:
function udpServer(n) {
    RED.nodes.createNode(this, n);
    this.addr = n.host;
    this.port = n.port;
    var node = this;

    var socket = dgram.createSocket('udp4');

    socket.on('listening', function () {
        var address = socket.address();
        logInfo('UDP Server listening on ' + address.address + ":" + address.port);
    });

    socket.on('message', function (message, remote) {
        var bb = new ByteBuffer.fromBinary(message, 1, 0);
        var CoEdata = decodeCoE(bb);
        if (CoEdata.type == 'digital') { // handle digital output
            // pass to digital handling node
        }
        else if (CoEdata.type == 'analogue') { // handle analogue output
            // pass to analogue handling node
        }
    });

    socket.on("error", function (err) {
        logError("Socket error: " + err);
        socket.close();
    });

    socket.bind({
        address: node.addr,
        port: node.port,
        exclusive: true
    });

    node.on("close", function(done) {
        socket.close();
    });
}
RED.nodes.registerType("myServernode", udpServer);
For the processing node:
function ProcessAnalog(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    this.serverConfig = RED.nodes.getNode(this.server);
    this.channel = n.channel;
    // how do I get the server's message here?
}
RED.nodes.registerType("process-analogue-in", ProcessAnalog);
I can't figure out how to pass the messages that the socket receives to a variable number of processing nodes, i.e. multiple processing nodes shall share one server instance.
==== EDIT for more clarity =====
I want to develop a new set of nodes:
One Server Node:
Uses a config-node to create an UDP listening socket
Managing the socket connection (close events, error etc)
Receives data packages with one to many channels of different data
One to many processing nodes
The processing nodes shall share the same connection that the Server Node has established
The processing nodes shall handle the messages that the server is emitting
Possibly the Node-Red flow would use as many processing Nodes as there are channels in the server's data package
To quote the Node-Red documentation on config-nodes:
A common use of config nodes is to represent a shared connection to a
remote system. In that instance, the config node may also be
responsible for creating the connection and making it available to the
nodes that use the config node. In such cases, the config node should
also handle the close event to disconnect when the node is stopped.
As far as I understood this, I make the connection available via this.serverConfig = RED.nodes.getNode(this.server); but I cannot figure out how to pass data, which is received by this connection, to the node that is using this connection.
A node has no knowledge of what nodes it is connected to downstream.
The best you can do from the first node is to have 2 outputs and to send digital to one and analogue to the other.
You would do this by passing an array to the node.send() function.
E.g.
// this sends output to just the first output
node.send([msg, null]);
// this sends output to just the second output
node.send([null, msg]);
Nodes that receive messages need to add a listener for input,
e.g.
node.on('input', function(msg) {
    ...
});
All of this is well documented on the Node-RED page.
The other option, if the udpServer node is a config node, is to implement your own listeners; the best bet is to look at something like the MQTT nodes in core for examples of pooling connections.
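A minimal sketch of that listener approach (the register callback list is illustrative, not an existing Node-RED API): the config node keeps a list of handlers and calls them when a datagram is decoded, and each processing node registers a handler through the config node it references.
// in the config node (udpServer)
function udpServer(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    node.handlers = [];
    node.register = function (handler) { node.handlers.push(handler); };
    // ... inside socket.on('message', ...), after decoding:
    //     node.handlers.forEach(function (h) { h(CoEdata); });
}

// in a processing node
function ProcessAnalog(n) {
    RED.nodes.createNode(this, n);
    var node = this;
    node.serverConfig = RED.nodes.getNode(n.server);
    if (node.serverConfig) {
        node.serverConfig.register(function (data) {
            node.send({ payload: data });
        });
    }
}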

How to pass an active WebSocket to a clustered thread in Node.js?

In Node.js they expose a handy way to pass net.Sockets to child processes (cluster.Worker) via:
var socket; // some instance of net.Socket
var worker = cluster.fork();
worker.on("online", function() {
    worker.send("socket", socket);
});
Which is super cool and works handily. But how would I do this with a WebSocket connection? I'm open to try any module.
Currently I've tried using various modules like ws. Most of them store the initial net.Socket and HTTP request and then upgrade it, but none seem simple enough to pass to the child process as a net.Socket, because they need tons of handshake info required by the WebSocket spec, as far as I can tell.
I know there are hackish solutions, like opening a WebSocket server on the child process on a unique port and then telling the WebSocket connection to reconnect on that port, but then I need an open port for every child thread. Or piping all data to the WebSocket connection through process.send so the main thread does all the I/O, but that defeats some of the performance benefits of running stuff on multiple threads.
So does anyone have any ideas?
Welp, I figured it out. ws may have been too much for my intended purposes. Instead I found a pretty obscure WebSocket library, lark-websocket, which exposes a function that, given a net.Socket, can wrap it up in their Client class and work with it as a WebSocket. The only issue was that both the parent and child threads would then try to ping the connection on the other end, so I had to fork it and add a way for the parent thread to pause pinging.
Here's some example code for anyone interested:
var cluster = require("cluster");
var ws = require('lark-websocket');

if (cluster.isMaster) { // make a child process and pipe all ws connections to it
    var worker = cluster.fork();
    worker.once("online", function() {
        console.log("worker online with pid", worker.process.pid);
    });

    ws.createServer(function(client, request) {
        worker.send("socket", client._socket); // send all websocket clients to the worker thread
    }).listen(27015);
}
else { // we are a worker, so we handle the ws connections
    process.on("message", function(message, handler) {
        if (message === "socket") { // by convention, the "socket" message carries the raw net.Socket as the handle
            var client = ws.createClient(handler);
            client.on('message', function(msg) {
                console.log("worker " + process.pid + " got:", msg);
                client.send("I got your: " + msg);
            });
        }
    });
}

NodeJs: Never emits "end" when reading a TCP Socket

I am pretty new to Node.js and I'm using TCP sockets to communicate with a client. Since the received data is fragmented, I noticed that it prints "ondata" to the console more than once. I need to be able to read all the data and concatenate it in order to implement the other functions. I read the following http://blog.nodejs.org/2012/12/20/streams2/ and thought I could use socket.on('end', ...) for this purpose. But it never prints "end" to the console.
Here is my code:
Client.prototype.send = function send(req, cb) {
    var self = this;
    var buffer = protocol.encodeRequest(req);
    var header = new Buffer(16);
    var packet = Buffer.concat([ header, buffer ], 16 + buffer.length);

    function cleanup() {
        self.socket.removeListener('data', ondata);
        self.socket.removeListener('error', onerror);
    }

    var body = '';
    function ondata() {
        var chunk = this.read() || '';
        body += chunk;
        console.log('ondata');
    }
    self.socket.on('readable', ondata);

    self.socket.on('end', function() {
        console.log('end');
    });

    function onerror(err) {
        cleanup();
        cb(err);
    }
    self.socket.on('error', onerror);

    self.socket.write(packet);
};
The end event is emitted for the FIN packet of the TCP protocol (in other words: when the other side closes the connection).
Event: 'end'
Emitted when the other end of the socket sends a FIN packet.
By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.
About FIN package: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination
The solution
I understand your problem: the network can split your message into several packets during transfer, and you just want to read the full content.
To solve this I recommend you create a simple protocol: first send a number with the size of your message, then keep concatenating the incoming chunks while the length of your concatenated message is less than that total size :)
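A minimal sketch of that length-prefix framing (the 4-byte header and the handleMessage callback are illustrative): the sender writes the payload length with buf.writeUInt32BE(payload.length, 0) followed by the payload, and the receiver does:
var buffered = Buffer.alloc(0);
var expected = null;

socket.on('data', function (chunk) {
    buffered = Buffer.concat([buffered, chunk]);
    while (true) {
        // the first 4 bytes of each message carry its payload length
        if (expected === null) {
            if (buffered.length < 4) break;
            expected = buffered.readUInt32BE(0);
            buffered = buffered.slice(4);
        }
        if (buffered.length < expected) break;
        // the whole payload has arrived: hand it off and reset
        var message = buffered.slice(0, expected);
        buffered = buffered.slice(expected);
        expected = null;
        handleMessage(message);
    }
});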
I have created a lib yesterday to simplify that issue: https://www.npmjs.com/package/node-easysocket
I hope it helps :)

Prevent sending data to stdin if spawn fails

In my Node.js (v0.10.9) code I'm trying to detect 2 cases:
an external tool (dot) is installed - in that case I want to send some data to stdin of created process
the external tool is not installed - in that case I want to display warning and I don't want to send anything to process' stdin
My problem is that I don't know how to send data to child's stdin if and only if the process was spawned successfully (i.e. stdin is ready for writing).
The following code works fine if dot is installed, but otherwise it tries to send data to the child even though the child wasn't spawned.
var childProcess = require('child_process');
var child = childProcess.spawn('dot');

child.on('error', function (err) {
    console.error('Failed to start child process: ' + err.message);
});
child.stdin.on('error', function(err) {
    console.error('Working with child.stdin failed: ' + err.message);
});

// I want to execute following lines only if child process was spawned correctly
child.stdin.write('data');
child.stdin.end();
I'd need something like this:
child.on('successful_spawn', function () {
    child.stdin.write('data');
    child.stdin.end();
});
From the node.js docs: http://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options
Example of checking for failed exec:
var spawn = require('child_process').spawn,
    child = spawn('bad_command');

child.stderr.setEncoding('utf8');
child.stderr.on('data', function (data) {
    if (/^execvp\(\)/.test(data)) {
        console.log('Failed to start child process.');
    }
});
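Note that newer Node.js versions make this more direct: since v15.1.0 the child process emits a 'spawn' event only after it has actually started, so the stdin writes can be gated on it. A minimal sketch, assuming a recent Node.js rather than the v0.10.9 used in the question:
var spawn = require('child_process').spawn;
var child = spawn('dot');

child.on('error', function (err) {
    console.error('Failed to start child process: ' + err.message);
});

// 'spawn' fires only once the process has started successfully
child.on('spawn', function () {
    child.stdin.write('data');
    child.stdin.end();
});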
Have a look at core-worker:
https://www.npmjs.com/package/core-worker
This package makes it a lot easier to handle processes.
I think what you want to do is something like this (from the docs):
import { process } from "core-worker";

const simpleChat = process("node chat.js", "Chat ready");
setTimeout(() => simpleChat.kill(), 360000); // wait an hour and close the chat

simpleChat.ready(500)
    .then(console.log.bind(console, "You are now able to send messages."))
    .then(::simpleChat.death)
    .then(console.log.bind(console, "Chat closed"))
    .catch(() => /* handle err */);
So if the process is not started correctly, none of the .then statements are executed which is exactly what you want to do, right?
