kill a looped task nicely from a jest test - node.js

I have a worker method 'doSomeWork' that is called in a loop, based on a flag that will be changed if a signal to terminate is received.
let RUNNING = true;
let pid;

export function getPid() {
  return pid;
}

export async function doSomeWork() {
  console.log("Doing some work!");
}

export const run = async () => {
  console.log("starting run process with PID %s", process.pid);
  pid = process.pid;
  while (RUNNING) {
    await doSomeWork();
  }
  console.log("done");
};

run()
  .then(() => {
    console.log("finished");
  })
  .catch((e) => console.error("failed", e));

process.on("SIGTERM", () => {
  RUNNING = false;
});
I am happy with this and now need to write a test: I want to
trigger the loop
inject a 'SIGTERM' to the src process
give the loop a chance to finish nicely
see 'finished' in the logs to know that the run method has been killed.
Here is my attempt (not working). The test code all executes, but the src loop isn't killed.
import * as main from "../src/program";

describe("main", () => {
  it("a test", () => {
    main.run();
    setTimeout(function () {
      console.log("5 seconds have passed - killing now!");
      const mainProcessPid = main.getPid();
      process.kill(mainProcessPid, "SIGTERM");
    }, 5000);
    setTimeout(function () {
      console.log("5 secs of tidy up time has passed");
    }, 5000);
  });
});
I think the setTimeout isn't blocking the test thread, but I am not sure how to achieve this in node/TS.
sandbox at https://codesandbox.io/s/exciting-voice-goncm
update sandbox with correct environment: https://codesandbox.io/s/keen-bartik-ltjtx
any help appreciated :-)
--- update ---
I now see that process.kill isn't doing what I thought it was, even when I pass in the PID. I will try creating a process as a child of the test process, so I can send a signal to it. https://medium.com/@NorbertdeLangen/communicating-between-nodejs-processes-4e68be42b917
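For reference, here is a minimal sketch of that child-process approach in a Jest test. It assumes the worker has been compiled to plain JavaScript at ./dist/program.js (a hypothetical path) and that the child logs "finished" before exiting:

const { fork } = require("child_process");

describe("main", () => {
  it("shuts down cleanly on SIGTERM", (done) => {
    // silent: true pipes the child's stdout so the test can inspect its logs
    const child = fork("./dist/program.js", [], { silent: true });
    let logs = "";
    child.stdout.on("data", (chunk) => (logs += chunk));

    // give the loop a moment to start, then ask it to stop
    setTimeout(() => child.kill("SIGTERM"), 1000);

    child.on("exit", () => {
      expect(logs).toContain("finished");
      done();
    });
  }, 10000); // allow more than Jest's default 5 second timeout
});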

You are getting this issue because the environment in your CodeSandbox is create-react-app, i.e. it's a client-side script and not a server-side instance of Node.
Recreate your project, but select the Node HTTP Server environment. This will give you a Node environment where the process functions such as process.kill will work, because that environment runs in a server-side Docker container. See here for more info on CodeSandbox's environments.

Related

Abandoned http requests after server.close()?

I have a vanilla nodejs server like this:
const http = require('http')
const https = require('https')

let someVar // to be set to a Promise
let anInterval

const getData = url => {
  return new Promise((resolve, reject) => {
    https.get(
      url,
      { headers: { ...COMMON_REQUEST_HEADERS, 'X-Request-Time': '' + Date.now() } },
      res => {
        if (res.statusCode === 401) return reject(new RequestError(INVALID_KEY, res))
        if (res.statusCode !== 200) return reject(new RequestError(BAD_REQUEST, res))
        let json = ''
        res.on('data', chunk => json += chunk)
        res.on('end', () => {
          try {
            resolve(JSON.parse(json).data)
          } catch (error) {
            return reject(new RequestError(INVALID_RESPONSE, res, json))
          }
        })
      }
    ).on('error', error => reject(new RequestError(FAILED, error)))
  })
}

const aCallback = () => {
  console.log('making api call')
  someVar = getData('someApiEndpoint')
    .then(data => { ... })
}

const main = () => {
  const server = http.createServer(handleRequest)
  anInterval = setInterval(aCallback, SOME_LENGTH_OF_TIME)

  const exit = () => {
    server.close(() => process.exit())
    log('Server is closed')
  }

  process.on('SIGINT', exit)
  process.on('SIGTERM', exit)
  process.on('uncaughtException', (err, origin) => {
    log(`Process caught unhandled exception ${err} ${origin}`, 'ERROR')
  })
}

main()
I was running into a situation where I would ctrl-c and would see the Server is closed log, followed by my command prompt, but then I would see more logs printed indicating that more api calls were being made.
Calling clearInterval(anInterval) inside exit() (before server.close()) seems to have solved the issue of the interval continuing even when the server is closed, so that's good. BUT:
From these node docs:
Closes all connections connected to this server which are not sending a request or waiting for a response.
I.e., I assume server.close() will not automatically kill the http request.
What happens to the http response information when my computer / node are no longer keeping track of the variable someVar?
What are the consequences of not specifically killing the thread that made the http request (and is waiting for the response)?
Is there a best practice for cancelling the request?
What does that consist of (i.e. would I ultimately tell the API's servers 'never mind please don't send anything', or would I just instruct node to not receive any new information)?
There are a couple of things you should be aware of. First off, handling SIGINT is a complicated thing in software. Next, you should never need to call process.exit(), as node will always exit when it's ready. If your process doesn't exit correctly, that means there is "work being done" that you need to stop. As soon as there is no more work to be done, node will safely exit on its own. This is best explained by example. Let's start with this simple program:
const interval = setInterval(() => console.log('Hello'), 5000);
If you run this program and then press Ctrl + C (which sends the SIGINT signal), node will automatically clear the interval for you and exit (well... it's more of a "fatal" exit, but that's beyond the scope of this answer). This auto-exit behavior changes as soon as you listen for the SIGINT event:
const interval = setInterval(() => console.log('Hello'), 5000);

process.on('SIGINT', () => {
  console.log('SIGINT received');
});
Now if you run this program and press Ctrl + C, you will see the "SIGINT received" message, but the process will never exit. When you listen for SIGINT, you are telling node "hey, I have some things I need to clean up before you exit". Node will then wait for any "ongoing work" to finish before it exits. If node doesn't eventually exit on its own, it's telling you "hey, I can see that there are some things still running - you need to stop them before I'll exit".
Let's see what happens if we clear the interval:
const interval = setInterval(() => console.log('Hello'), 5000);

process.on('SIGINT', () => {
  console.log('SIGINT received');
  clearInterval(interval);
});
Now if you run this program and press Ctrl + C, you will see the "SIGINT received" message and the process will exit nicely. As soon as we clear the interval, node is smart enough to see that nothing is happening, and it exits. The important lesson here is that if you listen for SIGINT, it's on you to wait for any tasks to finish, and you should never need to call process.exit().
As far as how this relates to your code, you have 3 things going on:
http server listening for requests
an interval
outgoing https.get request
When your program exits, it's on you to clean up the above items. In the most simple of circumstances, you should do the following:
close the server: server.close();
clear the interval: clearInterval(anInterval);
destroy any outgoing request: request.destroy()
You may decide to wait for any incoming requests to finish before closing your server, or you may want to listen for the 'close' event on your outgoing request in order to detect any lost connection. That's on you. You should read about the methods and events which are available in the node http docs. Hopefully by now you are starting to see how SIGINT is a complicated matter in software. Good luck.
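In the simplest case, a combined shutdown handler for the code above might look like the following sketch (currentRequest is a hypothetical variable holding the ClientRequest returned by the most recent https.get call):

let currentRequest // hypothetical: set to the ClientRequest returned by https.get(...)

const shutdown = () => {
  clearInterval(anInterval)                     // stop scheduling new API calls
  if (currentRequest) currentRequest.destroy()  // abort any in-flight outgoing request
  server.close(() => log('Server is closed'))   // stop accepting new connections
  // no process.exit(): node exits once nothing is left on the event loop
}

process.on('SIGINT', shutdown)
process.on('SIGTERM', shutdown)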

Can NestJS lifecycle methods run in parallel rather than in sequence?

We have a number of "poller" class instances running on a dedicated deploy of our app. When SIGTERM is received, we want each of these pollers to gracefully shutdown. We've implemented an async beforeApplicationShutdown method on our base poller class to that effect.
For our use case, each of these methods can be run in parallel. But it seems they are run in sequence by NestJS.
A negative consequence of this behavior is that the app can take a long time to spin down. ECS gives us 30 seconds from SIGTERM to SIGKILL, which we could extend as we add more pollers, but I'd rather not lengthen our deploy times.
I ended up implementing a solution like this:
let app: NestApplication;
// start the app, etc...

process.on('SIGTERM', async () => {
  const pollers = getAllPollers(app);
  // In parallel, shut down all pollers.
  await Promise.all(pollers.map((poller) => poller.stopPolling()));
  await app.close();
  process.exit(0);
});

function getAllPollers(app: NestApplication): BasePoller[] {
  const pollers: BasePoller[] = [];
  app
    .get(DiscoveryService)
    .getProviders()
    .forEach((instanceWrapper) => {
      const { instance } = instanceWrapper;
      if (instance instanceof BasePoller) {
        pollers.push(instance);
      }
    });
  return pollers;
}

How to stop async code from running Node.JS

I'm creating a program where I constantly run and stop async code, but I need a good way to stop the code.
Currently, I have tried two methods:
Method 1:
When a method is running and another method is called to stop the first one, I start an infinite loop to stop that code from running and then remove the method from the queue (array).
I'm 100% sure that this is the worst way to accomplish it, and it is very buggy.
Code:
class test {
  async Start() {
    const response = await request(options);
    if (stopped) {
      while (true) {
        await timeout(10)
      }
    }
  }
}
Code 2:
var tests = [];

function Start() {
  const t = new test();
  tests.push(t);
  t.Start();
}

function Stop() {
  tests.forEach((t) => { t.stopped = true; });
  tests = [];
}
Method 2:
I load the different methods into Workers, and when I need to stop the code, I just terminate the Worker.
It always takes a lot of time (about 1 second) to create the Worker, so it's not the best approach, since I need the code to run without 1-2 second pauses.
Code:
const path = require("path");
const Worker = require("tiny-worker");

const code = new Worker(path.resolve(__dirname, "./Code/Code.js"))
Stopping:
code.terminate()
Is there any other way that I can stop async code?
The program makes requests using the Node.js request-promise module, so the program is often waiting on requests, and it's hard to stop the code without one of the two methods above.
Is there any other way that I can stop async code?
Keep in mind the basics of how Node.js works. I think there is some misunderstanding here.
It executes the current function in the current context; if it encounters an async operation, the event loop schedules its execution somewhere in the future. There is no way to remove that scheduled execution.
More info on the event loop here.
In general, to manage this kind of situation you should use flags or semaphores.
The program makes requests using the Node.js request-promise module, so the program is often waiting on requests, and it's hard to stop the code
If you need to hard "stop the code" you can do something like
function stop() {
  process.exit()
}
But if I'm getting it right, you're launching requests every x amount of time, and at some point you need to stop sending requests without handling the pending responses.
You can't de-schedule the response-handling portion, but you can add some logic to it so that, when it eventually runs, it checks whether the "request loop" has been stopped.
let loop_is_stopped = false
let sending_loop = null

async function sendRequest() {
  const response = await request(options) // "wait here"
  // following lines are scheduled after the request promise is resolved
  if (loop_is_stopped) {
    return
  }
  // do something with the response
}

function start() {
  sending_loop = setInterval(sendRequest, 1000)
}

function stop() {
  loop_is_stopped = true
  clearInterval(sending_loop)
}

module.exports = { start, stop }
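A hypothetical usage sketch, assuming the module above is saved as request-loop.js:

const loop = require('./request-loop')

loop.start()                          // begin sending a request every second
setTimeout(() => loop.stop(), 10000)  // stop the loop after 10 seconds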
We can use Promise.all without killing the whole app (process.exit()); here is my example (you can use another trigger for calling controller.abort()):
const controller = new AbortController();

class Workflow {
  static async startTask() {
    await new Promise((res) => setTimeout(() => {
      res(console.log('RESOLVE'))
    }, 3000))
  }
}

class ScheduleTask {
  static async start() {
    return await Promise.all([
      new Promise((_res, rej) => { if (controller.signal.aborted) return rej('YAY') }),
      Workflow.startTask()
    ])
  }
}

setTimeout(() => {
  controller.abort()
  console.log("ABORTED!!!");
}, 1500)

const run = async () => {
  try {
    await ScheduleTask.start()
    console.log("DONE")
  } catch (err) {
    console.log("ERROR", err.name)
  }
}

run()
// ABORTED!!!
// RESOLVE
"DONE" will never be showen.
res will be complited
Maybe it would be better to run your code as a separate script with its own process.pid; when we need to interrupt this functionality, we can kill that process by PID from another place in your code with process.kill.
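A minimal sketch of that idea, assuming the long-running code lives in a separate file worker.js (a hypothetical name):

const { fork } = require("child_process");

const worker = fork("./worker.js");  // runs with its own process.pid
console.log("worker pid:", worker.pid);

// later, from anywhere else in the parent code:
process.kill(worker.pid, "SIGTERM"); // or worker.kill("SIGTERM")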

How to mock test a Node.js CLI with Jest?

I'm stuck at the very beginning, simply requiring the CLI and capturing its output. I've tried two methods, but neither works.
This is my cli.js:
#!/usr/bin/env node
console.log('Testing...');
process.exit(0);
And this my cli.test.js:
test('Attempt 1', () => {
  let stdout = require("test-console").stdout;
  let output = stdout.inspectSync(function() {
    require('./cli.js');
  });
  expect(output).toBe('Testing...');
});

test('Attempt 2', () => {
  console.log = jest.fn();
  require('./cli.js');
  expect(console.log.calls).toBe(['Testing...']);
});
It doesn't really matter which test is actually run; the output is always:
$ jest
RUNS bin/cli.test.js
Done in 3.10s.
Node.js CLI applications are no different from other applications, except for their reliance on the environment. They are expected to make extensive use of process members, e.g.:
process.stdin
process.stdout
process.argv
process.exit
If any of these things are used, they should be mocked and tested accordingly.
Since console.log is called directly for output, there's no problem with spying on it directly, although helper packages like test-console can be used too.
In this case process.exit(0) is called in the imported file, so the spec file exits early, and the Done output that follows comes from the parent process. process.exit should be stubbed. Throwing an error is necessary so that code execution stops, to mimic the normal behavior:
test('Attempt 2', () => {
  const spy = jest.spyOn(console, 'log');
  jest.spyOn(process, 'exit').mockImplementationOnce(() => {
    throw new Error('process.exit() was called.')
  });

  expect(() => {
    require('./cli.js');
  }).toThrow('process.exit() was called.');

  expect(spy.mock.calls).toEqual([['Testing...']]);
  expect(process.exit).toHaveBeenCalledWith(0);
});

Node.js child process isn't receiving stdin unless I close the stdin stream

I'm building a discord bot that wraps a terraria server in node.js so server users can restart the server and similar actions. I've managed to finish half the job, but I can't seem to create a command to execute commands on the terraria server. I've set it to write the command to the stdin of the child process and some basic debugging verifies that it does, but nothing apparently happens.
In the Node.js docs for child process stdin, it says "Note that if a child process waits to read all of its input, the child will not continue until this stream has been closed via end()." This seems likely to be the problem, as calling the end() function on it does actually send the command as expected. That said, it seems hard to believe that I'm unable to continuously send commands to stdin without having to close it.
Is this actually the problem, and if so what are my options for solving it? My code may be found below.
const discordjs = require("discord.js");
const child_process = require("child_process");
const tokens = require("./tokens");

const client = new discordjs.Client();

const terrariaServerPath = "C:\\Program Files (x86)\\Steam\\steamapps\\common\\Terraria\\TerrariaServer.exe"
const terrariaArgs = ['-port', '7777', "-maxplayers", "8", "-world", "test.wld"]

var child = child_process.spawn(terrariaServerPath, terrariaArgs);

client.on('ready', () => {
  console.log(`Logged in as ${client.user.tag}!`);
});

client.on('disconnect', () => {
  client.destroy();
});

client.on('message', msg => {
  if (msg.channel.name === 'terraria') {
    var msgSplit = msg.content.split(" ");
    if (msgSplit[0] === "!restart") {
      child.kill();
      child = child_process.spawn(terrariaServerPath, terrariaArgs);
      registerStdio();
      msg.reply("restarting server")
    }
    if (msgSplit[0] === "!exec") {
      msg.reply(msgSplit[1]);
      child.stdin.write(msgSplit[1] + "\n");
      child.stdin.end();
    }
  }
});

client.login(tokens.discord_token);

var registerStdio = function () {
  child.stdout.on('data', (data) => {
    console.log(`${data}`);
  });
  child.stderr.on('data', (data) => {
    console.error(`${data}`);
  });
}

registerStdio();
I was able to solve the problem by using the library node-pty. As near as I can tell, the problem was that the child process was not reading the stdin itself and I was unable to flush it. Node-pty creates a virtual terminal object which can be written to instead of stdin. This object does not buffer writes and so any input is immediately sent to the program.
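For reference, a minimal sketch of the node-pty approach described above, reusing the server path and args from the question (treat the exact option names as assumptions based on the node-pty README):

const pty = require("node-pty");

const child = pty.spawn(terrariaServerPath, terrariaArgs, {
  name: "xterm-color",
  cwd: process.cwd(),
  env: process.env,
});

// output from the server arrives through the pty's data callback
child.onData((data) => console.log(data));

// commands can be written repeatedly without ever closing the stream
child.write("save\r");
child.write("exit\r");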
