Why doesn't a Node.js app created as a web server via http.createServer exit when it reaches the end of the script, the way a simple console.log() app does?
Is it because there is a forever while (true) {} loop somewhere in the http module?
Deep in the internals of Node.js there is bookkeeping being done: the number of active event listeners is counted. Events and the event-driven programming model are what make Node.js special. Events are also the lifeblood that keeps a Node.js program alive.
A Node.js program will keep running as long as there are active event listeners present. After the last event listener has finished or otherwise terminated, the Node.js program will also terminate.
This is the core of Node: it stays alive while waiting for new connections, without using busy loops.
There are many other ways to keep Node running without a forever loop. For example, a long-running timer:
setTimeout(function () {}, 10000000)
(Note that window does not exist in Node.js; window.setTimeout is a browser API, in Node it is just setTimeout.)
Related
I've been working on a program, and it requires restarting a certain application with Node.js.
I know Python is able to close another app, but is Node.js able to?
For example, if a function is triggered, I need it to close Spotify from the Node.js process.
Thanks!
https://nodejs.org/api/process.html#process_process_kill_pid_signal
The process.kill(pid[, signal]) method sends the signal to the process identified by pid.
Of course, this assumes you already know the pid of the process you want to end. If you don't you'll have to find it first.
I have a question about how the server.listen method keeps the node process running. Is there a setInterval call inside?
I have read the answers in the post "How does server.listen() keep the node program running", but I still didn't understand it.
Can anyone explain? Thanks.
Node.js internally, in libuv, has a counter of the number of open resources that are supposed to keep the process running. It's not only timers that count here: any open TCP socket or listening server counts too, as would other asynchronous operations such as in-process file I/O. You can see calls in the node.js source to uv_ref() and uv_unref(). That's how code internal to node.js marks resources that should keep the process running, or releases them when done.
Whenever the event loop is empty meaning there is no pending event to run, node.js checks this counter in libuv and if it's zero, then it exits the process. If it's not zero, then something is still open that is supposed to keep the process running.
So, let's suppose you have an idle server running with a listening server and an empty event loop. The libuv counter will be non-zero, so node.js does not exit the process. Now, some client tries to connect to your server. At the lowest level, the TCP interface of the OS notifies some native code in node.js that a client just connected to your server. This native code packages that up into a node.js event and adds it to the node.js event queue. That causes libuv to wake up and process that event. It pulls it from the event queue and calls the JS callback associated with that event, causing some JS code in node.js to run. That will end up emitting an event on that server (of the eventEmitter type), which the JS code monitoring that server will receive, and then JS code can start processing that incoming request.
So, at the lowest level, there is native code built into the TCP support in node.js that uses the OS-level TCP interface to get told by the OS that an incoming connection to your server has just been received. That gets translated into an event in the node.js event queue which causes the interpreter to run the Javascript callback associated with that event.
When that callback is done, node.js will again check the counter to see if the process should exit. Assuming the server is still running and has not had .unref() called on it (which removes it from the counter), node.js will see that there are still things running and the process should not exit.
It keeps running through the event loop.
Every time the event loop checks for pending operations, the server.listen() handle is still open, so the process stays alive.
I have a node application that uses Socket.IO for the messaging.
And I run it using
node --expose_gc /path/to/app.js
Now, when I check with the htop utility, I notice that instead of one, I am getting multiple processes of the same command.
Can someone explain to me, in noob terms, why and what is going on here? I'm also worried that it may cause unexpected memory/CPU usage.
socket.io does not fork or spawn any child processes.
Usually, sub-processes running node.js are spawned via the cluster module, but socket.io does no such thing.
It just adds a handler on top of an http server.
socket.io is just a library that hooks into a web server and listens for certain incoming requests (those requests that initiate a webSocket/socket.io connection). Once a socket.io connection is initiated, it just uses normal socket programming to send/receive messages.
It does not start up any additional processes by itself.
Your multiple processes are either because you accidentally started your own app multiple times without shutting it down or there is something else in your app that is starting up multiple processes. socket.io does not do that.
I want to be able to kill my old process after a code update without any downtime.
In Ruby, I do this by using Unicorn. When you send the Unicorn master process a USR1 kill signal, it spawns a new copy of itself, which means all the libraries get loaded from file again. There's a callback available when a Unicorn master has loaded its libraries and its worker processes are ready to handle requests; you can then put code in here to kill the old master by its PID. The old master will then shut down its worker processes systematically, waiting for them to conclude any current requests. This means you can deploy code updates without dropping a single request, with 0 downtime.
Is it possible to do this in Node? There seem to be a lot of libraries vaguely to do with this sort of thing - frameworks that seem to just do mindless restarting of the process after a crash, and stuff like that - but I can't find anything that's a stripped-down implementation of this basic pattern. If possible I'd like to do it myself, and it wouldn't be that hard - I just need to be able to do http.createServer().listen() and specify a socket file (which I'll configure nginx to send requests to), rather than a port.
Both the net and http modules have versions of listen that take a path to a socket and fire their callback once the server has been bound.
Furthermore, you can use the new child_process.fork to launch a new Node process. This new process has a communication channel built into its parent, so it could easily tell its parent to exit once it has initialized.
net documentation
http documentation
child_process.fork documentation
(For the first two links, look just under the linked-to method, since they are all the same method name.)
I'm using NodeJS to run a socket server (using socket.io). When a client connects, I open and run a module which does a bunch of stuff. Even though I am careful to try and catch as much as possible, when this module throws an error, it obviously takes down the entire socket server with it.
Is there a way I can separate the two, so that if the connected client's module script fails, it doesn't necessarily take down the entire server?
I'm assuming this is what child process is for, but the documentation doesn't mention starting other node instances.
I'd obviously need to kill the process if the client disconnected too.
I'm assuming these modules you're talking about are JS code. If so, you might want to try the vm module. This lets you run code in a separate context, and also gives you the ability to do a try / catch around execution of the specific code.
You can run node as a separate process using spawn, then watch the stderr/stdout/exit events to track its progress. kill can then be used to end the process if the client disconnects. You're going to have to map clients to their spawned processes, though, so that a client's disconnect event triggers the proper process shutdown.
Finally the uncaughtException event can be used as a "catch-all" for any missed exceptions, making it so that the server doesn't get completely killed (signals are a bit of an exception of course).
As the other poster noted, you could leverage the 'vm' module, but as you might be able to tell from the rest of the response, doing so adds significant complexity.
Also, from the 'vm' doc:
Note that running untrusted code is a tricky business requiring great care.
To prevent accidental global variable leakage, vm.runInNewContext is quite
useful, but safely running untrusted code requires a separate process.
While I'm sure you could run a new nodejs instance in a child process, the best practice here is to understand where your application can and will fail, and then program defensively to handle all possible error conditions.
If some part of your code can "take down the entire ... server", then you really need to understand why this occurs and solve that problem, rather than rely on another process to shield you from the work required to design and build a production-quality service.