Is it possible to intercept the default kill signal and use it as a command for a graceful shutdown? This is for Solaris SMF. The easiest way I have found to make a service stoppable is to set :kill as the shutdown script and then add a shutdown hook in Java. In this case, I want to do the same for Node.js. How should I do it?
Edit: The purpose is to
1. Stop receiving new requests.
2. Give existing callbacks a few seconds to finish.
3. Write some information to stderr.
@alienhard's first suggestion was to use process.on('exit', ...), but it seems that I would not be able to accomplish number 2 with this method.
There is an exit event: http://nodejs.org/docs/v0.3.1/api/process.html#event_exit_
process.on('exit', function() {
console.log('About to exit.');
});
Edit: An alternative that could work for you is, instead of killing the process, to send a signal like SIGUSR1 (kill -s SIGUSR1), listen for this signal (see the link posted by @masylum in another answer), and then, after you are done or some time has elapsed, explicitly terminate with process.exit().
The only thing that comes to my mind is using signal events.
http://nodejs.org/docs/v0.3.1/api/process.html#signal_Events
Related
I am dealing with an odd problem which I couldn't find the answer to online, nor through a lot of trial and error.
In a multi-process cluster, forked worker processes can run arbitrarily long commands, but the parent process listens for keepalive messages sent by workers and kills workers that are stuck for longer than X seconds.
Worker processes can asynchronously communicate with the rest of the world (using http, or process.send ipc communication), but on exit, I'd like to be able to communicate some things (typically, queued logs or error details).
Most online documentation for process.on('exit', handler) indicates using console.log; however, it seems that forked processes don't inherit a normal stdout, and console.log isn't a direct tty but a stream (the IPC stream, I presume?).
Because of this, the process exit handler doesn't let me use console.log to log extra lines (or if it does, I'm not sure where those lines end up).
I tried various combinations of fork options (silent/not silent, non-default stdio options like inherit), using fs.write to write to a tty or a real file, and using process.send, but in no case was I able to get the on-exit handler to log anywhere visible.
How can I get the forked process to successfully log on exit?
Small additional point: all this testing is on Unix-like systems (macOS, Amazon Linux, ...), and both parent and child processes are started with --trace-sigint so that we can get at least the top 10 stack frames of the interrupted process on exit. Those frames do make it out to the terminal successfully.
This was a bit of a misunderstanding about how SIGINT is handled, and I believe that it's impossible to accomplish what I want here, but I'd love to hear if someone else found a solution.
Node has its own SIGINT handler which is "more powerful" than custom SIGINT handlers - typically it interrupts infinite loops, which is extremely useful in the case where code is blocked by long-running operations.
Node allows one-upping its own SIGINT debugging capabilities by attaching a --trace-sigint flag which captures the last frames of execution.
If I understood this correctly, there are 4 cases with different behavior:
1. No custom handler, event loop blocked: the process is terminated without any further code execution (and --trace-sigint can give a few stack traces).
2. No custom handler, event loop not blocked: normal exit flow; the process.on('exit') event fires.
3. Custom handler, event loop blocked: nothing happens until the event loop unblocks (if it does), then normal exit flow.
4. Custom handler, event loop not blocked: normal exit flow.
This happens regardless of the way the process is started, and it's not a problem about pipes or exit events - in the case where the event loop is blocked and the native signal handler is in place, the process terminates without any further execution.
It would seem like there is no way to both get a forced process exit during a blocked event loop, AND still get node code to run on the same process after the native interruption to recover more information.
Given this, I believe the best way to recover information from the stuck process is to stream data out of it before it freezes (sounds obvious, but brings a lot of extra considerations in production environments).
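As a sketch of the one workable case (custom handler, event loop not blocked), queued log lines can be flushed synchronously from a SIGINT handler before an explicit exit. The queue and helper names here are illustrative:

```javascript
const queuedLogs = [];

function log(line) {
  queuedLogs.push(line);
}

process.on('SIGINT', () => {
  // fs.writeSync on fd 2 (stderr) is synchronous, so every queued
  // line is written before process.exit() runs.
  const fs = require('fs');
  for (const line of queuedLogs) {
    fs.writeSync(2, line + '\n');
  }
  process.exit(130); // 128 + 2 is the conventional exit code for SIGINT
});

log('worker started');
```

Again, this only helps when the event loop is free to dispatch the signal; in the blocked case the native handler wins and none of this runs.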
I'm writing a very demanding program in Rust that has a variable number of threads processing very important data, and I want to know if there is a way to send it a stop signal with systemctl such that I can be sure it finishes its duties before stopping. Since it is very demanding, uses HTTP requests, and the thread count varies, I cannot estimate how long to wait between the signal being sent and the process being dead.
In essence, it is a daemon that loops until a variable is set to false, like this:
loop {
    // Process goes here
    if !is_alive {
        break;
    }
}
What I'm doing right now is having the program check a "config.json" file to see if it is "alive", but I don't think this is the best way: I don't know when the program stops, I can only see that it is stopping, not how long it is going to take. And done this way, systemctl will show the service as alive even after I have shut it down manually.
If you want to experiment with Systemd service behavior, I would take a look at the Systemd documentation. In this case, I would direct you to the section about TimeoutStopSec.
According to the documentation, you can disable any timeout on systemd stop commands with TimeoutStopSec=infinity. This combined with actually handling the SIGTERM signal that systemd uses by default should do the trick.
Furthermore, there is the KillSignal option by which you can specify the signal that is sent to your program to stop it or ExecStop to specify a program to run in order to stop your service.
With these you should be able to figure it out, I hope.
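A minimal unit-file sketch combining these options (the service description and binary path are made up for illustration):

```ini
[Unit]
Description=Example worker that shuts down gracefully

[Service]
ExecStart=/usr/local/bin/my-worker
# systemd sends SIGTERM by default; KillSignal makes that explicit.
KillSignal=SIGTERM
# Wait indefinitely for the process to finish its work after SIGTERM.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target
```

Your program would then install a SIGTERM handler that sets is_alive to false, and systemctl will report the unit as deactivating until the loop actually breaks and the process exits.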
Is there a way to execute a piece of code in Node.js Express just before the Node.js process exits, regardless of whether it was due to an error being thrown, pressing Ctrl+C, or any other reason?
You're looking for the exit event. From the documentation:
Emitted when the process is about to exit. This is a good hook to
perform constant time checks of the module's state (like for unit
tests). The main event loop will no longer be run after the 'exit'
callback finishes, so timers may not be scheduled.
And it would be implemented as
process.on('exit', function() {
console.log('About to close');
});
It's worth mentioning that it's generally not a good idea to try to manipulate data if you don't know the reason for the exit or exception. If you don't know what's wrong, it's usually better to start fresh than to try to accomplish something with state that may very well be FUBAR.
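One caveat worth sketching: by default, Node terminates on SIGINT without firing 'exit', so Ctrl+C only reaches the 'exit' handler if a SIGINT handler exits explicitly. A minimal illustration (the exit codes are conventional, not required):

```javascript
process.on('exit', (code) => {
  // Only synchronous work is reliable here; the event loop is gone.
  console.log(`About to exit with code ${code}`);
});

process.on('SIGINT', () => {
  // Exiting explicitly routes Ctrl+C through the 'exit' handler above.
  process.exit(130);
});

process.on('uncaughtException', (err) => {
  // Covers the "error being thrown" case from the question.
  console.error(err);
  process.exit(1);
});
```

With all three handlers installed, the 'exit' callback runs for normal exits, Ctrl+C, and uncaught exceptions alike.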
If I'm writing something simple and want it to run until explicitly terminated, is there a best practice to prevent script termination without causing blocking, using CPU time or preventing callbacks from working?
I'm assuming at that point I'd need some kind of event loop implementation or a way to unblock the execution of events that come in from other async handlers (network io, message queues)?
A specific example might be something along the lines of "I want my node script to sleep until a job is available via Beanstalkd".
I think the relevant counter-question is "How are you checking for the exit condition?".
If you're polling a web service, then the underlying setInterval() for the poll will keep it alive until cancelled. If you're taking in input from a stream, that should keep it alive until the stream closes, etc.
Basically, you must be monitoring something in order to know whether or not you should exit. That monitoring should be the thing keeping the script alive.
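For example, a polling loop like the one described keeps itself alive; checkForJob below is a placeholder for, say, a Beanstalkd reserve call:

```javascript
// Hypothetical job check; replace with a real queue or web-service poll.
function checkForJob() {
  return null; // no job available
}

// The pending interval handle is itself what keeps the process alive.
const poller = setInterval(() => {
  const job = checkForJob();
  if (job) {
    // ... handle the job ...
  }
}, 1000);

// Cancelling the interval removes the last handle, so the process
// can then exit naturally:
// clearInterval(poller);
```

No busy-waiting is involved: between ticks, the process sleeps inside the event loop and remains responsive to other async callbacks.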
Node.js exits when it has nothing else to do.
If you listen on a port, it has something to do and a way to receive beanstalk commands, so it will wait.
Create a function that closes the port and you'll have your explicit exit, but it will wait for all current jobs to end before closing.
A Node.js module requests some resources on a remote service during operation that should ideally be released when it exits. We know that there is the very nice:
process.on('exit', function() {
// ...
});
But it is said that it won't wait for any async operations to complete. So the question is whether there's any workaround (there should be, since this is quite a widespread use case)? Maybe one could start a separate process or something?
The only workaround I've seen is adding a wait loop and not finishing/returning from the .on('exit', ...) handler until a property has been updated globally.
It's a total bodge design-wise and very bad practice, but I've seen it work for short calls (I think there is some timeout, but I never bothered to look into the details).
I think you could/should do the clean-up before on('exit') fires, by listening for the Ctrl-C signal (SIGINT) like in this post.
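A sketch of that approach: perform the asynchronous clean-up in a SIGINT handler and only call process.exit() once it settles. releaseRemoteResources is a hypothetical stand-in for the module's actual release call:

```javascript
// Hypothetical async clean-up, e.g. an HTTP DELETE against the remote
// service that holds the resources.
async function releaseRemoteResources() {
  // ... release remote resources here ...
}

process.on('SIGINT', () => {
  releaseRemoteResources()
    .catch((err) => console.error('cleanup failed:', err))
    .finally(() => process.exit(0));
});
```

Because the signal handler runs while the event loop is still alive, the async work can complete normally; exiting is deferred until the promise settles, which is exactly what a plain on('exit') handler cannot do.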