Control-C not caught in Node.js - node.js

I thought if I run this and then hit control-C, the program should exit after displaying "Exiting...". It does not.
Of course, I want to do a lot more than console.log in the real application.
process.on('SIGINT', function() {
  console.log("Exiting...");
  process.exit();
});
while(1);
It does catch but does not exit. I have to kill the process separately.
Node version 8.x LTS
EDIT:
The edit is to make one of my comments below clear. As is made clear in the accepted answer, my signal handler was overriding the default one, but it was never getting executed. The fact that Ctrl-C was not killing the process gave me the impression that the signal handler was actually executing; it had merely overwritten the built-in handler. The answer is truly informative, packing a lot of information into a few words.

while(1) is hanging onto the process. Change it to:
setInterval(() => {}, 1000);
And it behaves as you would like.
I presume you used while(1) as a placeholder for a running program, but it's not an accurate representation. A normal node app would not hold the process synchronously like that.
It's probably worth noting that when you execute process.on('SIGINT', ... you are pre-empting node's normal SIGINT handler, which would have exited on ctrl-C even if while(1) was holding the process. By adding your own handler, your code will run when node gets to it, which would be after the current synchronous event cycle finishes; in this case, that never happens.
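If you want to simulate a busy program while still letting the event loop breathe, one option (a minimal sketch, with doWork() as a hypothetical stand-in for the real task) is to schedule the work in chunks with setImmediate, so node gets a chance to run the SIGINT handler between chunks:
process.on('SIGINT', function() {
  console.log("Exiting...");
  process.exit();
});
function doWork() {
  // one small unit of work per tick, e.g. process an item from a queue
  setImmediate(doWork); // yield to the event loop, then continue
}
doWork();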

Related

Port not released when I have a custom process.on('exit') in nodejs

When I use Ctrl+C (on a Mac) to stop a node process, the process is killed and the port is released. When I use process.on('SIGINT'), the process is not killed automatically; I have to manually kill the process holding the port. Why is this the behaviour?
Am I overriding a default exit code snippet and preventing it from executing?
process.on('SIGINT', (code) => {
  console.log(`About to exit with code: ${code}`);
});
As per Node's documentation:
'SIGTERM' and 'SIGINT' have default handlers on non-Windows platforms that reset the terminal mode before exiting with code 128 + signal number. If one of these signals has a listener installed, its default behavior will be removed (Node.js will no longer exit).
So yes, you are overriding the default behaviour. When you're done handling the signal, you can call process.exit() yourself to actually terminate the program (after any listeners on 'exit' have run).
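For instance, a minimal sketch of a handler that restores the "exit on Ctrl-C" behaviour after doing its own work (the cleanup step is only a placeholder):
process.on('SIGINT', () => {
  console.log('Caught SIGINT, shutting down...');
  // hypothetical cleanup: close servers, database connections, etc.
  process.exit(130); // 128 + 2 (SIGINT), mirroring the default exit code
});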

Completely clear spawn

Is it necessary to null the spawn after pause and kill?
let child = spawn(cmd_str);
child.on('exit', code => {
  child.stdin.pause();
  child.kill();
  child = null;
});
I don't want my module to have a chance to hold on to extra system resources after doing its job.
No, it is not necessary.
JavaScript has a garbage collector that takes care of freeing memory for variables that are no longer used.
One thing you could do instead, to be sure the subprocess has been killed successfully, is to listen to the close or error event, as shown in the example in the Node.js docs.
You could also track the process by its PID to make sure it is not alive anymore.
Note that I don't fully understand why you need to kill your process on exit; you should maybe focus on a clean exit of the subprocess program.
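A minimal sketch of that idea ('ls' here is just a placeholder for your cmd_str):
const { spawn } = require('child_process');
const child = spawn('ls');
child.on('error', (err) => {
  // the process could not be spawned or could not be killed
  console.error('subprocess error:', err);
});
child.on('close', (code, signal) => {
  // all stdio streams are closed and the process has ended
  console.log(`subprocess ended with code ${code}, signal ${signal}`);
});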

Node.js - process.exit() vs childProcess.kill()

I have a node application that runs long running tasks so whenever a task runs a child process is forked to run the task. The code creates a fork for the task to be run and sends a message to the child process to start.
Originally, when the task was complete, I was sending a message back to the parent process and the parent process would call .kill() on the child process. I noticed in my activity monitor that the node processes weren't being removed. All the child processes were hanging around. So, instead of sending a message to the parent and calling .kill(), I called process.exit() in the child process code once the task was complete.
The second approach seems to work fine and I see the node processes being removed from the activity monitor but I'm wondering if there is a downside to this approach that I don't know about. Is one method better than the other? What's the difference between the 2 methods?
My code looks like this for the messaging approach.
//Parent process
const forked = fork('./app/jobs/onlineConcurrency.js');
forked.send({
  clientId: clientData.clientId,
  schoolYear: schoolYear
});
forked.on("message", (msg) => {
  console.log("message", msg);
  forked.kill();
});
//child Process
process.on('message', (data) => {
  console.log("Message received");
  onlineConcurrencyJob(data.clientId, data.schoolYear, function() {
    console.log("Killing process");
    process.send("done");
  });
})
The code looks like this for the child process when just exiting
//child Process
process.on('message', (data) => {
  console.log("Message received");
  onlineConcurrencyJob(data.clientId, data.schoolYear, function() {
    console.log("Killing process");
    process.exit();
  });
})
kill sends a signal to the child process. Without an argument, it sends a SIGTERM (where TERM is short for "termination"), which typically, as the name suggests, terminates the process.
However, sending a signal like that is a forced method of stopping a process. If the process is performing tasks like writing to a file, and it receives a termination signal, it might cause file corruption because the process doesn't get a chance to write all data to the file, and close it (there are mitigations for this, like installing a signal handler that can be used to "catch" signals and ignore them, or finish all tasks before exiting, but this requires explicit code to be added to the child process).
Whereas with process.exit(), the process exits itself. And typically, it does so at a point where it knows that there are no more pending tasks, so it can exit cleanly. This is generally speaking the best way to stop a (child) process.
As for why the processes aren't being removed, I'm not sure. It could be that the parent process isn't cleaning up the resources for the child processes, but I would expect that to happen automatically (I don't even think you can perform so-called "child reaping" explicitly in Node.js).
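As a sketch of the mitigation mentioned above (not the asker's code), the child can install its own SIGTERM handler so a parent's .kill() gives it a chance to finish up before exiting:
// child process
process.on('SIGTERM', () => {
  console.log('SIGTERM received, finishing pending work...');
  // hypothetical cleanup: flush buffers, close file handles, etc.
  setTimeout(() => process.exit(0), 100); // pretend cleanup takes a moment
});
// keep the process alive so there is something to terminate
setInterval(() => {}, 1000);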
Calling process.exit(0) is the best mechanism, though there are cases where you might want to .kill from the parent (e.g. a distributed search where one node returning means all nodes can stop).
.kill is probably failing due to some handling of the signal it is getting. Try .kill('SIGTERM'), or even 'SIGKILL'.
Also note that subprocesses which aren't killed when the parent process exits will be moved to the grandparent process. See here for more info and a proposed workaround: https://github.com/nodejs/node/issues/13538
In summary, this is default Unix behavior, and the workaround is to process.on("exit", () => child.kill())
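A minimal sketch of that workaround ('sleep 60' is just a placeholder child command):
const { spawn } = require('child_process');
const child = spawn('sleep', ['60']);
// 'exit' handlers must be synchronous; child.kill() is, so this is safe
process.on('exit', () => child.kill());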

NodeJS child processes are terminated on SIGINT

I'm creating a NodeJS application that creates quite a few child processes. They are started by both spawn and exec (based on lib implementation). Some examples may be GraphicsMagick (gm) for image manipulation or Tesseract (node-tesseract) for OCR. Now I would like to gracefully end my application, so I created a shutdown hook:
function exitHandler() {
  killer.waitForShutdown().then(function() {
    logger.logInfo("Exited successfully.");
    process.exit();
  }).catch(function(err) {
    logger.logError(err, "Error during server shutdown.");
    process.exit();
  });
}
process.on('exit', exitHandler);
process.on('SIGINT', exitHandler);
process.on('SIGTERM', exitHandler);
Exit handling itself works fine (it waits as expected and so on), but there is a catch: all "native" (gm, tesseract, ...) processes running at that time are also killed. The exception messages only consist of "Command failed" followed by the content of the command that failed, e.g.
"Command failed: /bin/sh -c tesseract tempfiles/W1KwFSdz7MKdJQQnUifQFKdfTRDvBF4VkdJgEvxZGITng7JZWcyPYw6imrw8JFVv/ocr_0.png /tmp/node-tesseract-49073e55-0ef6-482d-8e73-1d70161ce91a -l eng -psm 3\nTesseract Open Source OCR Engine v3.03 with Leptonica"
So at least for me, they do not say anything useful. I'm also queuing process execution, so the PC doesn't get overloaded by 50 processes at one time. When the running processes are killed by SIGINT, new processes that were queued start just fine and finish successfully. I only have a problem with the few running at the time the SIGINT is received. This behavior is the same on Linux (Debian 8) and Windows (W10). From what I read here, people usually have the opposite problem (wanting to kill child processes). I tried to find out whether stdin gets somehow piped into the child processes, but I couldn't find anything. So is this how it's supposed to work? Is there any trick to prevent this behavior?
The reason this happens is because, by default, the detached option is set to false. If detached is false, the signals will also be sent to the child processes, regardless of whether you setup an event listener.
To stop this happening, you need to change your spawn calls to use the third argument in order to specify detached; for example:
spawn('ls', ['-l'], { detached: true })
From the Node documentation:
On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled for a child process, it cannot be disabled.
On non-Windows platforms, if options.detached is set to true, the child process will be made the leader of a new process group and session. Note that child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.
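Applied to the question, a hedged sketch of what one of the spawn calls might look like (the tesseract command line is only a placeholder):
const { spawn } = require('child_process');
// detached: true puts the child in its own process group on non-Windows
// platforms, so the SIGINT generated by Ctrl-C is not delivered to it
const child = spawn('tesseract', ['input.png', 'output', '-l', 'eng'], {
  detached: true
});
child.on('close', (code) => console.log('tesseract finished with code', code));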

Is the first thread that gets to run inside a Win32 process the "primary thread"? Need to understand the semantics

I create a process using CreateProcess() with the CREATE_SUSPENDED flag and then go ahead and create a little patch of code inside the remote process to load a DLL and call a function (exported by that DLL), using VirtualAllocEx() (with ..., MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READWRITE), WriteProcessMemory(), then call FlushInstructionCache() on that patch of memory with the code.
After that I call CreateRemoteThread() to invoke that code, creating me a hRemoteThread. I have verified that the remote code works as intended. Note: this code simply returns, it does not call any APIs other than LoadLibrary() and GetProcAddress(), followed by calling the exported stub function that currently simply returns a value that will then get passed on as the exit status of the thread.
Now comes the peculiar observation: remember that the PROCESS_INFORMATION::hThread is still suspended. When I simply ignore hRemoteThread's exit code and also don't wait for it to exit, all goes "fine". The routine that calls CreateRemoteThread() returns and PROCESS_INFORMATION::hThread gets resumed and the (remote) program actually gets to run.
However, if I call WaitForSingleObject(hRemoteThread, INFINITE) or do the following (which has the same effect):
DWORD exitCode = STILL_ACTIVE;
while(STILL_ACTIVE == exitCode)
{
    Sleep(500);
    if(!GetExitCodeThread(hRemoteThread, &exitCode))
        break;
}
followed by CloseHandle(), this leads to hRemoteThread finishing before PROCESS_INFORMATION::hThread gets resumed and the process simply "disappears". Simply allowing hRemoteThread to finish before PROCESS_INFORMATION::hThread is resumed is enough to cause the process to die.
This looks suspiciously like a race condition, since under certain circumstances hRemoteThread may still be faster and the process would likely still "disappear", even if I leave the code as is.
Does that imply that the first thread that gets to run within a process becomes automatically the primary thread and that there are special rules for that primary thread?
I was always under the impression that a process finishes when its last thread dies, not when a particular thread dies.
Also note: there is no call to ExitProcess() involved here in any way, because hRemoteThread simply returns and PROCESS_INFORMATION::hThread is still suspended when I wait for hRemoteThread to return.
This happens on Windows XP SP3, 32bit.
Edit: I have just tried Sysinternals Process Monitor to see what's happening and I could verify my observations from before. The injected code does not crash or anything, instead I get to see that if I don't wait for the thread it doesn't exit before I close the program where the code got injected. I'm thinking whether the call to CloseHandle(hRemoteThread) should be postponed or something ...
Edit+1: it's not CloseHandle(). If I leave that out just for a test, the behavior doesn't change when waiting for the thread to finish.
The first thread to run isn't special.
For example, create a console app which creates a suspended thread and terminates the original thread (by calling ExitThread). This process never terminates (on Windows 7 anyway).
Or make the new thread wait for five seconds then exit. As expected, the process will live for five seconds and exit when the secondary thread terminates.
I don't know what's happening with your example. The easiest way to avoid the race is to make the new thread resume the original thread.
Speculating now, I do wonder if what you're doing isn't likely to cause problems anyway. For example, what happens to all the DllMain calls for the implicitly loaded DLLs? Are they unexpectedly happening on the wrong thread, are they being skipped, or are they postponed until after your code has run and the main thread starts?
Odds are good that the thread with the main (or equivalent) function calls ExitProcess (either explicitly or in its runtime library). ExitProcess, well, exits the entire process, including killing all threads. Since the main thread doesn't know about your injected code, it doesn't wait for it to finish.
I don't know that there's a good way to make the main thread wait for yours to complete...
