Best way to manage unique child processes in node.js

I'm about to start coding a chat bot. However, I plan on running more than one, using a wrapper to communicate and restart them. I have done this in the past with child_process.fork(), but it was incredibly inefficient. I've looked into spawn and cluster as well, but they all seem to focus on running the same thing, not unique bots. As for plugins, I've looked into fleet, forkfriend, and workerfarm, but none seem to fit my needs.
Is there any plugin or approach I'm not seeing that would help me do this? Or am I just going to have to wing it again?

You can have as many chat bots as you wish in a single process. The rule of thumb in Node.js is one process per processor core, since Node's threading model is slightly different from what you might be used to.
Assuming you still need some multithreading on top of this, here are a couple of Node modules that might fit your needs:
node-webworker-threads, dnode.
UPDATE:
Now I see what you need. There is a nice example in the Node.js docs that I saw recently; I'll copy and paste it here:
var normal = require('child_process').fork('child.js', ['normal']);
var special = require('child_process').fork('child.js', ['special']);

// Open up the server and send sockets to child
var server = require('net').createServer();
server.on('connection', function (socket) {
  // if this is a VIP
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // just the usual dudes
  normal.send('socket', socket);
});
server.listen(1337);
child.js looks like this:
process.on('message', function (m, socket) {
  if (m === 'socket') {
    socket.end('You were handled as a ' + process.argv[2] + ' person');
  }
});
I believe it's pretty much what you need. Launch several processes with different configs (if the number of configs is relatively low) and pass each socket to a particular child from the master process.
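If your bots are genuinely unique scripts rather than clones of one worker, a thin wrapper around child_process.fork() is usually enough. As a rough sketch only (the bot file names and configs below are made-up placeholders, and the restart logic is deliberately naive):

var fork = require('child_process').fork;

// hypothetical bot definitions -- replace with your own scripts and configs
var bots = [
  { script: 'ircBot.js', config: { channel: '#general' } },
  { script: 'slackBot.js', config: { token: 'xxx' } }
];

function launch(bot) {
  var child = fork(bot.script, [JSON.stringify(bot.config)]);

  // relay whatever the bot reports back to the wrapper
  child.on('message', function (msg) {
    console.log(bot.script + ' says:', msg);
  });

  // naive restart-on-exit; add a backoff in real code
  child.on('exit', function (code) {
    console.log(bot.script + ' exited with code ' + code + ', restarting');
    launch(bot);
  });
}

bots.forEach(launch);

Each child reads its config from process.argv[2] and does its own thing, while the wrapper only supervises and relays messages.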

Related

Nodejs global hotkey execution

I was wondering if anyone could point me in the right direction. I'm building a Node app that I want to execute global hotkeys on the computer it's running on, so it can start and stop an OBS stream.
I was wondering if this is possible, as I've only been able to find out-of-date and non-working solutions.
Thanks.
You can do it easily in AutoHotKey, but if it is Node you need, Node you'll get.
There are probably quite a few npm packages that will fit the bill; if you check GitHub, I'm betting someone has made a little something.
Lo and behold, I did it for you: hott - Global hotkeys for Windows, with Node.
That seems a tad overkill to me, though; using "iohook" should work wonders. Hook it up in the semi-old-fashioned JS event style, something like so:
The only way I am fairly certain will work is plain and simple event listening:
const ioHook = require('iohook');

ioHook.on("keypress", event => {
  if (event.keychar == 'a') {
    console.log(event);
  } else {
    console.log("Press a");
  }
});

ioHook.start();

Best practices to handle "500" errors in Nodejs

I'm creating a TCP server in Node.js as follows:
var net = require('net');
var clients = [];

// Start a TCP server
net.createServer(function (socket) {
  socket.name = socket.remoteAddress + ":" + socket.remotePort;
  clients.push(socket);

  socket.write("Welcome " + socket.name + "\n");
  broadcast(socket.name + " joined the chat\n", socket);

  socket.on('data', function (data) {
    broadcast(socket.name + "> " + data, socket);
  });

  socket.on('end', function () {
    clients.splice(clients.indexOf(socket), 1);
    broadcast(socket.name + " left the chat.\n");
  });

  function broadcast(message, sender) {
    clients.forEach(function (client) {
      if (client === sender) return;
      client.write(message);
    });
  }
}).listen(5000);
Coming from architectures like PHP + Nginx, no matter how I mess up the code, there is no way I can crash my Nginx server (in most cases); worst case scenario, one of my users gets a 500 error page and life continues. But in Node.js, if I for example forget to check that a denominator sent by the user must be greater than zero and then do something like something/0, the whole server is going to crash, since I'm actually creating the server plus the app and not only the app like in PHP. What are the best practices for writing Node.js code that mitigates the possibility of crashing your server? Just one big ugly try/catch that wraps the whole code?
I personally found the following article by Joyent quite illuminating:
https://www.joyent.com/developers/node/design/errors
You should read it!
The author distinguishes between operational errors and programmer errors.
Operational errors are "run-time problems experienced by correctly-written programs." These are mostly things going wrong in external 'stuff', including users. Any robust program should try to deal with these problems as well as possible through proper error-handling. In your case, your program should check for valid user input, including value-ranges.
Programmer errors "are bugs in the program". The author argues that the program shouldn't even try to recover from bugs; If you would've anticipated the bug, the bug wouldn't be there in the first place, so how can you expect to write code to correct a situation that you didn't anticipate in the first place? If there is a situation that the programmer didn't anticipate (correctly) which leads to problems, just crash. This is less risky than continuing to run your software, which might now be in an undefined state.
Since you don't want downtime, this also means that you should run your software inside a 'restarter', something that will restart your software once it crashes. For this, I've used pm2 in the past, which works well imho.
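For example, with pm2 the restarting comes essentially for free. A minimal sketch of an ecosystem file (the app name and script path below are placeholders for your own):

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'chat-server',    // arbitrary name for this example
    script: './server.js',  // your actual entry point
    autorestart: true,      // respawn the process when it crashes
    max_restarts: 10        // give up if it keeps dying immediately
  }]
};

Starting it with pm2 start ecosystem.config.js keeps the process supervised; pm2 respawns it whenever it exits unexpectedly.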
Simply don't allow your server to ever crash. Or, put another way: write fault-tolerant code.
If you have code like this on your server:
function calculateValue(a, b) {
  return a / b;
}
... then don't ever let b be zero. Safeguard it by either validating the inputs (e.g., check b !== 0 and spit back an HTTP 400 if it isn't) or by defaulting (e.g., b = b <= 0 ? 1 : b).
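As a minimal sketch of the validating approach, using only Node's built-in http module (the route shape and error message are invented for illustration):

var http = require('http');
var url = require('url');

function calculateValue(a, b) {
  return a / b;
}

http.createServer(function (req, res) {
  var q = url.parse(req.url, true).query;
  var a = Number(q.a);
  var b = Number(q.b);

  // reject bad input instead of letting it reach the math
  if (!isFinite(a) || !isFinite(b) || b === 0) {
    res.writeHead(400);
    return res.end('a and b must be numbers, and b must not be zero');
  }

  res.writeHead(200);
  res.end(String(calculateValue(a, b)));
}).listen(3000);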
Then, and this is the most important part: TEST YOUR CODE. Better yet, test your code first (classic test-driven development) by writing every "happy path" and "edge case" test you can think of. That will force you to write high-quality, stable, predictable code that greatly lessens the possibility of application-crashing bugs.
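Even a handful of assertions with Node's built-in assert module helps. Here is a rough sketch of such tests; the throwing guard inside calculateValue is an assumption of this example rather than something from the code above:

var assert = require('assert');

// a guarded variant for the sake of the test
function calculateValue(a, b) {
  if (b === 0) throw new RangeError('b must not be zero');
  return a / b;
}

// happy path
assert.strictEqual(calculateValue(10, 2), 5);

// edge case: a zero denominator must fail loudly and predictably
assert.throws(function () { calculateValue(1, 0); }, RangeError);

console.log('all tests passed');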

How can I simulate latency in Socket.io?

Currently, I'm testing my Node.js, Socket.io server on localhost and on devices connected to my router.
For testing purposes, I would like to simulate a delay in sending messages, so I know what it'll be like for users around the world.
Is there any effective way of doing this?
If it's the messages you send from the server that you want to delay, you can override the .emit() method on each new connection with one that adds a short delay. Here's one way of doing that on the server:
io.on('connection', function (socket) {
  console.log("socket connected: ", socket.id);

  // override the .emit() method with a delayed version
  const emitFn = socket.emit;
  socket.emit = (...args) => setTimeout(() => {
    emitFn.apply(socket, args);
  }, 1000);

  // rest of your connection handler here
});
Note, there is one caveat with this. If you pass an object or an array as the data for socket.emit(), this code does not make a copy of that data, and the data is not actually read until it is sent (one second from now). So if the calling code modifies that data before it is sent, that would likely create a problem. This could be fixed by making a copy of the incoming data, but I did not add that complexity here since whether it is needed depends on how the caller's code works.
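If the copy does turn out to matter for your data, a rough variant (assuming the payload is plain, JSON-serialisable data) could snapshot the arguments before scheduling the send:

io.on('connection', function (socket) {
  const emitFn = socket.emit;

  socket.emit = (...args) => {
    // snapshot the arguments now so later mutations don't leak into the send;
    // a JSON round-trip is a crude copy that only works for plain data
    const copy = JSON.parse(JSON.stringify(args));
    setTimeout(() => emitFn.apply(socket, copy), 1000);
  };

  // rest of your connection handler here
});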
An old but still popular question. :)
You can use either "iptables" or "tc" to simulate delays and dropped packets. See the man page for "iptables" and look for 'statistic'. I suggest you make sure to specify the port, or your SSH session will be affected as well.
Here are some good examples for "tc":
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem

How to detect a REQ connect event to ROUTER

What I want?
Whenever a REQ socket connects via a requester.connect() call to that host:port, the ROUTER should detect this event and do something, i.e. in this case call cluster.fork();
I tried:
// Start evented child Node processes along with a REQ request each
router.on("accept", function funcCB(fileDesc, endPt) {
  // fire up a cluster fork for handling requests
  console.log("starting a new FORK process");
  cluster.fork();
});
Which router.on(<event>) should actually detect a requester.connect()?
As per the docs here, these are the monitor events available for node zmq, together with the ZeroMQ events they map to:
connect - ZMQ_EVENT_CONNECTED
connect_delay - ZMQ_EVENT_CONNECT_DELAYED
connect_retry - ZMQ_EVENT_CONNECT_RETRIED
listen - ZMQ_EVENT_LISTENING
bind_error - ZMQ_EVENT_BIND_FAILED
accept - ZMQ_EVENT_ACCEPTED
accept_error - ZMQ_EVENT_ACCEPT_FAILED
close - ZMQ_EVENT_CLOSED
close_error - ZMQ_EVENT_CLOSE_FAILED
disconnect - ZMQ_EVENT_DISCONNECTED
How does it work?
The beauty of ZeroMQ is in its architecture. This means the abstract scalable archetype primitives (PUB/SUB, PAIR, XREQ) do exactly what they have been defined to do.
The clean architecture separates the I/O thread(s) from the socket's entry-gate "behaviour" and keeps all the dirty stuff down there.
This said, it is of no use to say "I want ROUTER to also detect and handle this and that" if it was not defined to do so right in the ZeroMQ architecture.
How to do it?
The simplest approach to this and similar needs is to design one's own composite element; for simplicity, let's sketch it as [[ROUTER]+[SUB]], where the node has both the [ROUTER] traffic-oriented behaviour and also keeps a [SUB] signalling-receiving behaviour, exposed to the outer world via SUB.bind() at another host:portSIG.
This way, remote processes can REQ.connect( host:port ) and PUB.connect( host:portSIG ) and operate on both the transport plane and the signalling plane, as your design needs and implements.
ZeroMQ is a lovely can-do LEGO-toolbox.
Enjoy these powers.
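As a rough illustration only, using the classic zmq npm bindings (the ports and the reactions below are placeholders), the [[ROUTER]+[SUB]] composite might be wired up like this:

var zmq = require('zmq');

// transport plane: ordinary ROUTER traffic from REQ peers
var router = zmq.socket('router');
router.bindSync('tcp://*:5555');
router.on('message', function () {
  var frames = Array.prototype.slice.call(arguments); // [identity, empty, payload]
  console.log('ROUTER got:', frames[frames.length - 1].toString());
});

// signalling plane: remote peers PUB.connect() here and announce themselves
var sub = zmq.socket('sub');
sub.bindSync('tcp://*:5556');
sub.subscribe('');
sub.on('message', function (msg) {
  console.log('signal received:', msg.toString());
  // e.g. cluster.fork() or any other reaction to a newly announced peer
});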
So the answer was this small addition, as posted by #Jason in the SO comments section:
//start the socket ROUTER monitor
router.monitor(500, 0);

Webworker-threads: is it OK to use "require" inside worker?

(Using Sails.js)
I am testing webworker-threads ( https://www.npmjs.com/package/webworker-threads ) for long running processes on Node and the following example looks good:
var Worker = require('webworker-threads').Worker;

var fibo = new Worker(function () {
  function fibo(n) {
    return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
  }

  this.onmessage = function (event) {
    try {
      postMessage(fibo(event.data));
    } catch (e) {
      console.log(e);
    }
  };
});

fibo.onmessage = function (event) {
  // my return callback
};

fibo.postMessage(40);
But as soon as I add any code to query MongoDB, it throws an exception:
(not using the Sails model in the query, just to make sure the code could run on its own -- db has no password)
var Worker = require('webworker-threads').Worker;

var fibo = new Worker(function () {
  function fibo(n) {
    return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
  }

  // MY DB TEST -- THIS WORKS FINE OUTSIDE THE WORKER
  function callDb(event) {
    var db = require('monk')('localhost/mydb');
    var users = db.get('users');
    users.find({ "firstName": "John" }, function (err, docs) {
      console.log(("serviceSuccess"));
      return fibo(event.data);
    });
  }

  this.onmessage = function (event) {
    try {
      postMessage(callDb(event.data)); // calling db function now
    } catch (e) {
      console.log(e);
    }
  };
});

fibo.onmessage = function (event) {
  // my return callback
};

fibo.postMessage(40);
Since the DB code works perfectly fine outside the Worker, I think it has something to do with the require. I've tried something that also works outside the Worker, like
var moment = require("moment");
var deadline = moment().add(30, "s");
And the code also throws an exception. Unfortunately, console.log only shows this for all types of errors:
{Object}
{/Object}
So, the questions are: is there any restriction or guideline for using require inside a Worker? What could I be doing wrong here?
UPDATE
It seems Threads will not allow external modules:
https://github.com/xk/node-threads-a-gogo/issues/22
TL;DR: I think that if you need to require, you should use node's cluster or child process. If you want to offload some CPU-busy work, you should use tagg and the load function to grab any helpers you need.
Upon reading this thread, I see that this question is similar to this one:
Load Nodejs Module into A Web Worker
To which audreyt, the webworker-threads author, answered:
author of webworker-threads here. Thank you for using the module! There is a default native_fs_ object with the readFileSync you can use to read files.
Beyond that, I've mostly relied on onejs to compile all required modules in package.json into a single JS file for importScripts to use, just like one would do when deploying to a client-side web worker environment. (There are also many alternatives to onejs -- browserify, etc.)
Hope this helps!
So it seems importScripts is the way to go. But at this point, it might be too hacky for what I want to do, so probably KUE is a more mature solution.
I'm a collaborator on the node-webworker-threads project.
You can't require in node-webworker-threads
You are correct in your update: node-webworker-threads does not (currently) support requiring external modules.
It has limited support for some of the built-ins, including file system calls and a version of console.log. As you've found, the version of console.log implemented in node-webworker-threads is not identical to the built-in console.log in Node.js; it does not, for example, automatically make nice string representations of the components of an Object.
In some cases you can use external modules, as outlined by audreyt in her response. Clearly this is not ideal, and I view the incomplete require as the primary "dealbreaker" of node-webworker-threads. I'm hoping to work on it this summer.
When to use node-webworker-threads
node-webworker-threads allows you to code against the WebWorker API and run the same code in the client (browser) and the server (Node.js). This is why you would use node-webworker-threads over node-threads-a-gogo.
node-webworker-threads is great if you want the most lightweight possible JavaScript-based workers, to do something CPU-bound. Examples: prime numbers, Fibonacci, a Monte Carlo simulation, offloading built-in but potentially-expensive operations like regular expression matching.
When not to use node-webworker-threads
node-webworker-threads emphasizes portability over convenience. For a Node.js-only solution, this means that node-webworker-threads is not the way to go.
If you're willing to compromise on full-stack portability, there are two ways to go: speed and convenience.
For speed, try a C++ add-on. Use NaN. I recommend Scott Frees's C++ and Node.js Integration book to learn how to do this, it'll save you a lot of time. You'll pay for it in needing to brush up on your C++ skills, and if you want to work with MongoDB then this probably isn't a good idea.
For convenience, use a Child Process-based worker pool like fork-pool. In this case, each worker is a full-fledged Node.js instance. You can then require to your heart's content. You'll pay for it in a larger application footprint and in higher communication costs compared to node-webworker-threads or a C++ add-on.
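For a feel of the convenience route, here is a stripped-down sketch of the same idea without any pool library at all; the file name db-worker.js and the monk query are made up for illustration, and fork-pool essentially wraps this pattern with pooling on top:

var fork = require('child_process').fork;

// hand the work to a full Node.js child that may require() anything
var worker = fork('./db-worker.js');

worker.on('message', function (result) {
  console.log('worker replied:', result);
  worker.kill();
});

worker.send({ firstName: 'John' });

And db-worker.js runs as a normal Node.js process, so require works as usual:

var db = require('monk')('localhost/mydb');

process.on('message', function (query) {
  db.get('users').find(query, function (err, docs) {
    process.send(err ? { error: err.message } : { count: docs.length });
  });
});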
