I'm trying to write a web server in Node (currently version 18) where one of the routes is computation heavy, so it runs for a long time and maxes out its CPU core at 100%.
To keep every other request responsive, I thought a Node cluster would be a great fit.
Extremely simplified, the code I ended up with is:
import cluster from 'cluster';
import { createServer } from 'http';

const startUpTime = Date.now();

function requestListener(request, response) {
  console.log(`[${(Date.now() - startUpTime) / 1000}] pid ${process.pid} is serving URL ${request.url}`);
  response.writeHead(200);
  if (request.url === '/slow') {
    for (let cnt = 0; cnt < 100; cnt++) {
      for (let i = 0; i < 1000000000; i++) { /* just wait */ }
      response.write(`${cnt}.`);
    }
  }
  response.end(`... pid ${process.pid} done...\n`);
  console.log(`[${(Date.now() - startUpTime) / 1000}] pid ${process.pid} finished serving URL ${request.url}`);
}

if (cluster.isPrimary) {
  for (let i = 0; i < 4; i++) {
    cluster.fork();
  }
} else {
  // the worker:
  const server = createServer(requestListener);
  server.listen(8000, 'localhost', () => {
    console.info(`Server ${process.pid} is running`);
  });
}
But this code doesn't behave as expected. Although my system has 12 cores and is basically idle, this is what happens:
1. Running in terminal #1: curl -v http://127.0.0.1:8000/test - immediate result, as expected
2. Running in terminal #1: curl -v http://127.0.0.1:8000/test - immediate result, as expected
3. Running in terminal #2: curl -v http://127.0.0.1:8000/slow - immediate headers, a slowly building response text 0.1.2.3. and 100% CPU load, as expected
4. Running in terminal #1: curl -v http://127.0.0.1:8000/test - immediate result, as expected
5. Running in terminal #1: curl -v http://127.0.0.1:8000/test - immediate result, as expected
6. Running in terminal #1: curl -v http://127.0.0.1:8000/test - immediate result, as expected
7. Running in terminal #1: curl -v http://127.0.0.1:8000/test - no immediate result, as it is blocked until request 3 is finished
The console output shows exactly the problem:
Server 78797 is running
Server 78799 is running
Server 78798 is running
Server 78800 is running
[3.028] pid 78797 is serving URL /test
[3.029] pid 78797 finished serving URL /test
[3.515] pid 78799 is serving URL /test
[3.52] pid 78799 finished serving URL /test
[5.963] pid 78798 is serving URL /slow
[7.27] pid 78800 is serving URL /test
[7.272] pid 78800 finished serving URL /test
[7.976] pid 78797 is serving URL /test
[7.976] pid 78797 finished serving URL /test
[8.568] pid 78799 is serving URL /test
[8.569] pid 78799 finished serving URL /test
[32.492] pid 78798 finished serving URL /slow
[32.495] pid 78798 is serving URL /test
[32.495] pid 78798 finished serving URL /test
So why is the request at step 7 sent to the busy worker (pid 78798), although three other processes are idle and getting bored?
The manual promised to be a bit more intelligent than dumb round-robin (https://nodejs.org/api/cluster.html#cluster_how_it_works):
The cluster module supports two methods of distributing incoming connections. The first one (and the default one on all platforms except Windows) is the round-robin approach, where the primary process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.
How can I change that behavior, so that all requests are answered immediately, even when one of the processes of my cluster pool is completely busy?
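For context: I know I could also move the blocking loop off the request handlers entirely, e.g. with worker_threads, instead of relying on the cluster scheduler. A rough, untested sketch of that direction (heavy-loop-worker.js is a hypothetical file that runs the counting loops and ends with parentPort.postMessage(result)):
import { Worker } from 'worker_threads';

function runSlowJob() {
  return new Promise((resolve, reject) => {
    // run the hot loop in a worker thread so the event loop stays free
    const worker = new Worker('./heavy-loop-worker.js');
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}
An async requestListener could then await runSlowJob() for the /slow route. Still, I'd like to understand the cluster scheduling behavior above.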
So I have some code which runs a command in a spawned child process. I do this using the execa module.
const execa = require('execa');
// `waitForLocalhost` and `expect` come from the surrounding test setup

const childProcess = execa.command('yarn start');
const localhostStarted = await waitForLocalhost({ port: 8000 });
expect(localhostStarted.done).toBe(true);

childProcess.kill('SIGINT', { forceKillAfterTimeout: 2000 });
The yarn start command executes webpack-dev-server in another child process of its own. However, when I kill the childProcess that I spawned, it does not automatically kill its spawned webpack-dev-server process. This is a known issue: https://github.com/webpack/webpack-dev-server/issues/2168.
To fix this, I added manual listeners for SIGINT and SIGTERM inside the script that runs when yarn start is called:
['SIGINT', 'SIGTERM'].forEach((signal) => {
  console.log('registering events');
  process.on(signal, () => {
    console.log('received signal', signal);
    devServer.close(() => {
      console.log('exiting process');
      process.exit(0);
    });
  });
});
This fixes the issue on my local machine: when I kill the child process I spawn, it kills all its descendants, i.e. the dev-server process too.
However, the fix still does not work on CI: there the child process gets killed, but the dev-server process does not, so my tests don't exit and keep hanging.
My local machine is OSX 10.15, but on CI we use Ubuntu. If I change CI to use macOS 10.15, the fix works on CI too.
I am unable to find any docs explaining this difference in behaviour: why is the SIGTERM signal received fine by the dev-server process on Mac machines but not on Ubuntu machines?
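For what it's worth, the direction I'm currently experimenting with (a sketch, not yet verified on the Ubuntu CI) is to spawn the command detached, so it gets its own process group on POSIX, and then signal the whole group with a negative PID:
const { spawn } = require('child_process');

// detached: true puts the child into its own process group on POSIX
const child = spawn('yarn', ['start'], { detached: true, stdio: 'inherit' });

// later, when shutting down: signal the whole group (note the negative PID)
process.kill(-child.pid, 'SIGINT');
But I'd still like to understand why the signal propagation differs between the two platforms.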
I run a simple Angular Universal SSR (server-side rendering) application. Everything works fine and the server renders the HTML, but there is one problem: static assets like fonts, images and icons don't get loaded by the server, only by the browser. What I want is to render the HTML with the static assets included.
I tried the express.static() function but couldn't make it work. So how can I make it work?
Got it working with the suggestions here.
Implement an HTTP interceptor according to this article. It will prepend an absolute URL to all requests with a relative path, so SSR with the server running will work. But during static pre-rendering this.request will be empty; in that case you should redirect such requests to your own static server, e.g. http://localhost:3000.
Create a Node.js script for the pre-rendering. It runs a static server on port 3000 (the same port as in the interceptor); once the server is running, execute npm run prerender in a child process. Then listen for the error and exit events on the subprocess and close the server when they fire:
const { spawn } = require('child_process');

// In the static server's 'listen' callback:
const sp = spawn('npm', ['run', 'prerender'], { stdio: 'inherit', timeout: 5 * 60 * 1000 });
sp.on('error', (err) => {
  // Pre-rendering failed.
  // TODO kill subprocess, close server, end current process with error code
});
sp.on('exit', (code) => {
  // Pre-rendering is finished
  // TODO Close server
  if (code !== 0) {
    // Pre-rendering failed.
    // TODO End current process with error code
  }
});
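For completeness, a minimal sketch of what the surrounding static server could look like, assuming Express and a dist/browser output folder (both are assumptions; adjust to your build):
const express = require('express');
const { spawn } = require('child_process');

const app = express();
// serve the built browser assets that the interceptor redirects to
app.use(express.static('dist/browser'));

const server = app.listen(3000, () => {
  const sp = spawn('npm', ['run', 'prerender'], { stdio: 'inherit' });
  sp.on('exit', (code) => {
    // pre-rendering done: shut the static server down again
    server.close();
    process.exit(code === 0 ? 0 : 1);
  });
});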
I'm trying to run a Node.js application. Over SSH I am able to run 'node server.js':
var express = require('express');
var app = express();
var server = app.listen(5431);
app.use(express.static('public'));
console.log("### RUNNING ###");
var socket = require('socket.io');
var io = socket(server);
and it indeed logs and throws no errors. But when I open the client app in the browser, I get the following output in the console:
Failed to load resource: net::ERR_CONNECTION_REFUSED
GET http://localhost:5431/socket.io/?EIO=3&transport=polling&t=MWCsv9T net::ERR_CONNECTION_REFUSED
Client connects with:
var socket = io.connect('http://localhost:5431');
I've tried connecting both with the IP and with the domain, with the same result. The app worked fine on my local machine.
I've checked open ports with the following PHP script:
for ($i = 0; $i < 10000; $i++) {
    portCheck($i);
}

function portCheck($port) {
    if (stest('127.0.0.1', $port))
        echo $port . "<br>";
}

function stest($ip, $portt) {
    $fp = @fsockopen($ip, $portt, $errno, $errstr, 0.1);
    if (!$fp) {
        return false;
    } else {
        fclose($fp);
        return true;
    }
}
and the output did list 5431, so I'm assuming the port is indeed open.
I have no idea, then, what can cause this error.
var socket = io.connect('http://localhost:5431');
This means the client will connect to the local machine, where "local" is from the perspective of the system where the script is executing. Since the script executes in the browser, your application is expected to run on the client's machine, i.e. where the browser runs.
Failed to load resource: net::ERR_CONNECTION_REFUSED
GET http://localhost:5431/socket.io/?EIO=3&transport=polling&t=MWCsv9T net::ERR_CONNECTION_REFUSED
While you don't say this explicitly, a later statement (see below) suggests that the browser runs on a different machine than your application. If your application runs on host A and your browser on host B, then localhost inside the client browser (where the script gets executed) refers to host B, not host A.
App worked fine on local machine
This is expected, since in this case localhost is the machine your application is actually running on (application and browser share the same machine).
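The usual fix is to have the client derive the host from the page it was loaded from instead of hardcoding localhost. A minimal sketch, assuming the server really listens on port 5431 and that port is reachable from the client:
// derive the host from the page itself instead of hardcoding localhost
var socket = io.connect('http://' + window.location.hostname + ':5431');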
I am trying to debug a Node.js script which has dependencies on native bindings. The script also involves forking a child process. I am using valgrind to debug memory issues, with the following options:
valgrind --leak-check=summary --show-leak-kinds=all --trace-children=yes --verbose node app.js
It only works if I set --trace-children=no; otherwise it always fails.
I created the following sample script to test the scenario, and it seems valgrind cannot debug a child process forked from Node:
// main.js
var cp = require('child_process');
var child = cp.fork('./worker');

child.on('message', function (m) {
  // Receive results from child process
  console.log('received: ' + m);
});

// Send child process some work
child.send('Please up-case this string');
and
// worker.js
process.on('message', function (m) {
  // Do work (in this case just up-case the string)
  m = m.toUpperCase();
  // Pass the result back to the parent process
  process.send(m);
});
And valgrind always fails with the following error:
==10401== execve(0x1048a3150(/bin/bash), 0x1048a3638, 0x1048a3658) failed, errno 2
==10401== EXEC FAILED: I can't recover from execve() failing, so I'm dying.
==10401== Add more stringent tests in PRE(sys_execve), or work out how to recover.
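A workaround I'm considering (an untested sketch; valgrind-node.sh is a hypothetical wrapper script) is to drop --trace-children entirely and instead run only the forked worker under valgrind, via fork's execPath option:
// valgrind-node.sh (hypothetical wrapper script, chmod +x):
//   #!/bin/sh
//   exec valgrind --leak-check=summary /usr/bin/node "$@"

var cp = require('child_process');
// fork the worker through the wrapper instead of the regular node binary
var child = cp.fork('./worker', [], { execPath: './valgrind-node.sh' });
But I don't know yet whether that plays well with the IPC channel, so the --trace-children question still stands.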
When using forever to run a Node.js program as a daemon, i.e.
forever start myNodeTask
if the daemon (myNodeTask) decides it needs to exit, what is the correct way to do so?
If I just call process.exit(), the program does terminate, but it doesn't delete the forever log file, which leads me to believe that I need the program to exit in a more forever-friendly manner.
The Node tasks I'm running are plain TCP servers that stream data to connected clients, not web servers.
The forever module always keeps the log files, even after a process has finished. There is no forever-friendly way to delete those files.
But you could use the forever-monitor module, which allows you to use forever programmatically (from the docs):
var forever = require('forever-monitor'),
    fs = require('fs');

var child = new (forever.Monitor)('your-filename.js', {
  max: 3,
  silent: true,
  options: []
});

child.on('exit', function () {
  console.log('your-filename.js has exited after 3 restarts');
  // here you can delete your log file
  fs.unlink('path_to_your_log_file', function (err) {
    // do something amazing
  });
});

child.start();
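As a side note on the exit itself: since your tasks are TCP servers streaming data, a graceful alternative to a bare process.exit() is to stop accepting new connections first and exit once the existing ones have ended. A minimal sketch, assuming server is your net.Server:
// stop accepting new connections, then exit once existing ones close
function shutdown() {
  server.close(function () {
    // this callback fires after all open connections have ended
    process.exit(0);
  });
}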