Is it possible to use Electron APIs from within a Node.js worker thread spawned from the main Electron process? We have attempted to use Electron's nativeImage module from within a worker thread (a Node worker, not a web worker), yet nativeImage is always undefined. Our worker:
const { parentPort } = require("worker_threads");
const { nativeImage } = require("electron");

parentPort.on("message", message => {
  if (nativeImage === undefined) {
    parentPort.postMessage("nativeImage is undefined!");
  }
  let image = nativeImage.createEmpty(); // fails
});
The main thread which creates this worker receives the "nativeImage is undefined!" message.
I am running Node.js and using the cluster module. In each child process I create a WebSocket server, and on every WebSocket connection I do console.log(process.pid). I even added a loop in the worker to slow it down, which apparently works, but it is still assigning the same core to each WebSocket client. I have written a bash script that opens my HTML file, which opens N concurrent connections, to test whether the cluster module is working fine. Is this an issue with the cluster module or with the WebSocket server?
const os = require('os');
const cluster = require('cluster');

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // `app` is the Express app, defined elsewhere
  const server = require('http').createServer(app);
  const WebSocket = require('ws');
  let ws_clients = {};
  const wss = new WebSocket.Server({ server: server });

  wss.on('connection', function connection(ws) {
    // busy loop added to slow the handler down
    let h = 0;
    for (let i = 0; i < 2e10; i++) {
      h++;
    }
    console.log('handled by:', process.pid);
  });

  server.listen(8080); // each worker listens on the shared port (port number assumed)
}
Your loop just pauses the process's main thread, which has already accepted the connection and is currently handling it. If I'm not mistaken, connection requests are, under the hood and by Node.js design, handled by separate threads.
If you wish to check the distribution of load on your local machine, spawn 4 child processes and have 4 other child processes requesting connections. Then, if you check core utilization, you will see all of your cores working fine, assuming you have an 8-logical-core machine (in Windows: Task Manager advanced view -> Processor -> logical cores). You'll see a bunch of process pids being logged.
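If it helps, here is a rough sketch of that client side, assuming the ws package and that the server listens on port 8080 (both assumptions, adjust to your setup):

// clients.js - forks a few client processes, each opening one WebSocket connection
const cluster = require('cluster');
const WebSocket = require('ws');

if (cluster.isMaster) {
  for (let i = 0; i < 4; i++) cluster.fork(); // 4 client processes, as suggested above
} else {
  const ws = new WebSocket('ws://localhost:8080'); // assumed server address/port
  ws.on('open', () => console.log('client pid', process.pid, 'connected'));
  ws.on('error', (err) => console.error('client error:', err.message));
}

Each client process stays connected, so you can watch which server pids show up in the logs and how the cores are loaded.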
I'm making a server script, and to make it easier for both hosts and clients to do what they want, I made a customizable server script that runs using NW.js (with a visual interface). Said script was made using web workers, since NW.js was having problems with support for worker threads.
Now that NW.js has fixed their problems with worker threads, I've been trying to move everything that was inside the web workers to worker threads, but there's a problem: when the main thread receives the answer from the second thread, the latter stops responding to any subsequent message.
For example, running the following code with either NW.js or Node.js itself will return "pong" only once:
const { Worker } = require('worker_threads');
const worker = new Worker('const { parentPort } = require("worker_threads");parentPort.once("message",message => parentPort.postMessage({ pong: message })); ', { eval: true });
worker.on('message', message => console.log(message));
worker.postMessage('ping');
worker.postMessage('ping');
How do I configure the worker so it will keep responding to whatever message it receives after the first one?
That is because you use the EventEmitter.once() method. According to the documentation, this method does the following:

Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

If you need your worker to process more than one event, then use EventEmitter.on():
const worker = new Worker('const { parentPort } = require("worker_threads");' +
'parentPort.on("message",message => parentPort.postMessage({ pong: message }));',
{ eval: true });
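With on() in place, the main-thread code from the question works unchanged and now logs a reply for every message:

const { Worker } = require('worker_threads');
const worker = new Worker(
  'const { parentPort } = require("worker_threads");' +
  'parentPort.on("message", message => parentPort.postMessage({ pong: message }));',
  { eval: true }
);
worker.on('message', message => console.log(message)); // logs { pong: 'ping' } twice
worker.postMessage('ping');
worker.postMessage('ping');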
I have an architecture with an Express.js web server that accepts new tasks over a REST API.
Furthermore, I must have another process that creates and supervises many other tasks on other servers (a distributed system). This process should run in the background for a very long time (months, years).
Now the question is:

1) Should I create one single Node.js app with a task queue such as bull.js/Redis or Celery/Redis that basically launches this long-running task once in the beginning?

Or

2) Should I have two processes, one for the REST API and another daemon process that schedules and manages the tasks in the distributed system?
I heavily lean towards solution 2).
I am facing the same problem now. As we know, Node.js runs in a single thread, but we can create workers to run things in parallel, or to handle functions that take some time and that we don't want to affect our main server. Fortunately, Node.js supports multi-threading.
Take a look at this example:
const {
  Worker, isMainThread, parentPort, workerData
} = require('worker_threads');

if (isMainThread) {
  module.exports = function parseJSAsync(script) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, {
        workerData: script
      });
      worker.on('message', resolve);
      worker.on('error', reject);
      worker.on('exit', (code) => {
        if (code !== 0)
          reject(new Error(`Worker stopped with exit code ${code}`));
      });
    });
  };
} else {
  const { parse } = require('some-js-parsing-library');
  const script = workerData;
  parentPort.postMessage(parse(script));
}
https://nodejs.org/api/worker_threads.html
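For illustration, here is how the exported function might be called from another module (a sketch; the filename parse-worker.js and the sample script string are assumptions):

// hypothetical usage of the example above, assuming it is saved as parse-worker.js
const parseJSAsync = require('./parse-worker');

parseJSAsync('const answer = 42;')
  .then(result => console.log('parsed:', result))
  .catch(err => console.error('worker failed:', err));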
Search for some articles about multi-threading in Node.js, but remember one thing here: state cannot be shared between threads. You can use a message broker like Kafka, RabbitMQ (my recommendation), or Redis to handle such needs.
Kafka is quite difficult to configure in production.
RabbitMQ is good because you can store messages, queues, etc. in local storage too. But personally I could not find any proper solution for load balancing these threads. Maybe this is not your answer, but I hope you get some clue here.
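To make the broker suggestion concrete, here is a minimal sketch with RabbitMQ via the amqplib package; the broker URL, queue name, and task shape are placeholders, not part of the original answer:

const amqp = require('amqplib');

// publish a task to a queue (producer side)
async function produce(task) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks');
  ch.sendToQueue('tasks', Buffer.from(JSON.stringify(task)));
  await ch.close();
  await conn.close();
}

// consume tasks from the queue (worker side, possibly another process or thread)
async function consume() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks');
  ch.consume('tasks', (msg) => {
    console.log('got task:', JSON.parse(msg.content.toString()));
    ch.ack(msg);
  });
}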
I have multiple microservices written in Node.js (Koa) running in Docker Swarm.
Since container orchestration tools like Kubernetes or Swarm can scale services up and down instantly, I have a question about gracefully shutting down a Node.js service to prevent killing unfinished running processes.
Below is the flow I can think of:
1) Send a SIGINT signal to each worker process. Does Docker Swarm send SIGINT to the workers when scaling down a service?
2) The workers are responsible for catching the signal, cleaning up or freeing any used resources, and finishing their process. How can I stop new API requests and wait for any running process to finish before shutting down?
Some code below for reference:
process.on('SIGINT', () => {
  const cleanUp = () => {
    // How can I clean up resources like DB connections using Sequelize here?
  }

  server.close(() => {
    cleanUp()
    process.exit()
  })

  // Force close server after 5 secs, `Should I do this?`
  setTimeout((e) => {
    cleanUp()
    process.exit(1)
  }, 5000)
})
I created a library (https://github.com/sebhildebrandt/http-graceful-shutdown) that can handle graceful shutdowns as you described. It works well with Express and Koa.
This package also allows you to create a function (it should return a promise) to additionally clean up things like DB stuff, ... Here is some example code showing how to use it:
const gracefulShutdown = require('http-graceful-shutdown');
...
server = app.listen(...);
...

// your personal cleanup function - this one takes one second to complete
function cleanup() {
  return new Promise((resolve) => {
    console.log('... in cleanup');
    setTimeout(function () {
      console.log('... cleanup finished');
      resolve();
    }, 1000);
  });
}

// this enables the graceful shutdown with advanced options
gracefulShutdown(server, {
  signals: 'SIGINT SIGTERM',
  timeout: 30000,
  development: false,
  onShutdown: cleanup,
  finally: function () {
    console.log('Server gracefully shut down.....');
  }
});
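Since the question mentions Sequelize connections: sequelize.close() returns a promise, so a cleanup function like the one above could simply return it (a sketch, assuming a sequelize instance is in scope):

// alternative cleanup: close the (assumed) Sequelize connection pool;
// onShutdown waits for the returned promise before finishing the shutdown
function cleanup() {
  return sequelize.close();
}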
I personally would increase the final timeout from 5 secs to a higher value (10-30 secs). Hope that helps.
So I have a fairly simple setup on Heroku. I'm using RabbitMQ for handling background jobs. My setup consists of a node script that runs daily using the Heroku Scheduler addon. The script adds jobs to the queue; the worker, in turn, consumes them and delegates them to a separate module for handling.
The problem starts after I receive a SIGTERM event, which Heroku initiates randomly from time to time before restarting the instance.
For some reason, after the instance is restarted, the worker never gets back up again. Only when I restart it manually, by doing heroku ps:scale worker=0 and heroku ps:scale worker=1, does the worker continue to consume the pending jobs.
Here's my worker:
// worker.js
var throng = require('throng');
var jackrabbit = require('jackrabbit');
var logger = require('logfmt');
var syncService = require('./syncService');

var start = function () {
  var queue = jackrabbit(process.env.RABBITMQ_BIGWIG_RX_URL || 'amqp://localhost');
  logger.log({type: 'msg', msg: 'start', service: 'worker'});

  queue
    .default()
    .on('drain', onDrain)
    .queue({name: 'syncUsers'})
    .consume(onMessage);

  function onMessage(data, ack, nack) {
    var promise;
    switch (data.type) {
      case 'updateUser':
        promise = syncService.updateUser(data.target, data.source);
        break;
      case 'createUser':
        promise = syncService.createUser(data.source);
        break;
      case 'deleteUser':
        promise = syncService.deleteUser(data.target);
    }
    promise.then(ack, nack);
  }

  function onDrain() {
    queue.close();
    logger.log({type: 'info', msg: 'sync complete', service: 'worker'});
  }

  process.on('SIGTERM', shutdown);

  function shutdown() {
    logger.log({type: 'info', msg: 'shutting down'});
    queue.close();
    process.exit();
  }
};

throng({
  workers: 1,
  lifetime: Infinity,
  grace: 4000
}, start);
The close() method on the jackrabbit object takes a callback; you should avoid exiting the process until that callback has run:
function shutdown() {
  logger.log({type: 'info', msg: 'shutting down'});
  queue.close(function (e) {
    process.exit(e ? 1 : 0);
  });
}