tinylr/nodejs - how to access the currently running server

In the parent process I start the tiny-lr (livereload) server, then spawn a child process that watches for changes to the CSS files. How can I pass the livereload server on to the child process? Or is it possible, from the child process, to query for the livereload server that is currently running, so that I don't create it again and get an "already in use" error for the port?
The same goes for the Node http server: can I find out whether the server is already running and use that one instead of creating a new one?

Is it possible to query for the livereload server? It is, and it can be implemented in more than one way.
Use stdin/stdout to communicate with the child process: you can send messages from one process to the other and reply to them.
Use http.request to check whether the port is in use.
Use a file: the process with the server keeps the file open in write mode, and the file's content stores the port the server runs on (if needed).
Use sockets for inter-process communication.
None of the above gives a 100% guarantee, so you have to try/catch for errors anyway: the server may die just after your check, but before you get to do something with it.
How to pass on the livereload server to the child process: if you mean sharing an object between different processes, that is certainly out of the question; if you mean transferring ownership of the object, I am 99.99% sure that is not possible either.
What is the problem with having just one process responsible for running the server? And why not use, say, forever to take care of running and restarting the server if needed?

Related

Websockets inside a PM2 cluster, ok in production?

Before going to production, we want to make sure that this is an "as expected behavior".
I conducted an experiment by launching 4 child processes using a PM2 cluster (I have 4 cores on my machine), which means there were 4 websocket processes running...
Then on the client I created multiple sockets and sent many messages to the server. One thing I didn't expect was that Node was able to figure out which child process each socket belonged to, meaning that every message sent by the client was console-logged by the correct child process.
It seems like the main worker in the cluster keeps track of what sockets belong where.
So is this managed by Nodejs internally by the "cluster" module?
Also is this ok to use in production?
P.S. for websockets we use "ws" module for Nodejs
I asked the same question on GitHub and got an answer...
Also please look into using ClusterWs - it's awesome!
https://github.com/ClusterWS/ClusterWS/issues/143

restart nodejs server programmatically

User case:
My Node.js server starts with a configuration wizard that allows the user to change the port and scheme, and even update the Express routes.
Question:
Is it possible to apply such configuration changes on the fly? Restarting the server would definitely bring all the changes online, but I'm not sure how to trigger it from code.
Changing core configuration on the fly is rarely practiced. Node.js and most http frameworks do not support it either at this point.
Modifying the configuration and then restarting the server is a completely valid solution, and I suggest you use it.
To restart the server programmatically you have to execute logic outside of Node.js, so that it can continue once the Node.js process is killed. Granted that you are running the Node.js server on Linux, a Bash script sounds like the best tool available to you.
Implementation will look something like this:
Client presses a switch somewhere on your site powered by node.js
Node.js then executes some JavaScript code which instructs your OS to run a bash script, let's say script.sh
script.sh restarts node.js
Done
If any of the steps is difficult, ask about it. Though step 1 is something you are likely handling yourself already.
I know this question was asked a long time ago but since I ran into this problem I will share what I ended up doing.
For my problem I needed to restart the server since the user is allowed to change the port from their website. What I ended up doing was wrapping the whole server creation (https.createServer / server.listen) in a function called startServer(port). I call this function at the end of the file with a default port. The user changes the port by accessing the endpoint /changePort?port=3000. That endpoint calls another function, restartServer(server, res, port), which calls startServer(port) with the new port and then redirects the user to the site on the new port.
Much better than restarting the whole nodejs process.

Can I make a HTTP server listen on a shared socket, to allow no-downtime code upgrades?

I want to be able to kill my old process after a code update without any downtime.
In Ruby, I do this by using Unicorn. When you send the Unicorn master process a USR1 kill signal, it spawns a new copy of itself, which means all the libraries get loaded from file again. There's a callback available when a Unicorn master has loaded its libraries and its worker processes are ready to handle requests; you can then put code in here to kill the old master by its PID. The old master will then shut down its worker processes systematically, waiting for them to conclude any current requests. This means you can deploy code updates without dropping a single request, with 0 downtime.
Is it possible to do this in Node? There seem to be a lot of libraries vaguely to do with this sort of thing - frameworks that seem to just do mindless restarting of the process after a crash, and stuff like that - but I can't find anything that's a stripped-down implementation of this basic pattern. If possible I'd like to do it myself, and it wouldn't be that hard - I just need to be able to do http.createServer().listen() and specify a socket file (which I'll configure nginx to send requests to), rather than a port.
Both the net and http modules have versions of listen that take a path to a socket and fire their callback once the server has been bound.
Furthermore, you can use the new child_process.fork to launch a new Node process. This new process has a built-in communication channel to its parent, so it could easily tell the parent to exit once it has initialized.
net documentation
http documentation
child_process.fork documentation
(For the first two links, look just under the linked-to method, since they are all the same method name.)

NodeJS - Child node process?

I'm using NodeJS to run a socket server (using socket.io). When a client connects, I open and run a module which does a bunch of stuff. Even though I am careful to try and catch as much as possible, when this module throws an error it obviously takes down the entire socket server with it.
Is there a way I can separate the two, so that if the connected client's module script fails it doesn't necessarily take down the entire server?
I'm assuming this is what child process is for, but the documentation doesn't mention starting other node instances.
I'd obviously need to kill the process if the client disconnected too.
I'm assuming these modules you're talking about are JS code. If so, you might want to try the vm module. This lets you run code in a separate context, and also gives you the ability to do a try / catch around execution of the specific code.
You can run node as a separate process and watch the data go by using spawn, then watch the stderr/stdout/exit events to track progress. kill can then be used to kill the process if the client disconnects. You're going to have to map clients to spawned processes, though, so that a client's disconnect event triggers the process close properly.
Finally the uncaughtException event can be used as a "catch-all" for any missed exceptions, making it so that the server doesn't get completely killed (signals are a bit of an exception of course).
As the other poster noted, you could leverage the 'vm' module, but as you might be able to tell from the rest of the response, doing so adds significant complexity.
Also, from the 'vm' doc:
Note that running untrusted code is a tricky business requiring great care.
To prevent accidental global variable leakage, vm.runInNewContext is quite
useful, but safely running untrusted code requires a separate process.
While I'm sure you could run a new nodejs instance in a child process, the best practice here is to understand where your application can and will fail, and then program defensively to handle all possible error conditions.
If some part of your code "take(s) down the entire ... server", then you really need to understand why this occurred and solve that problem, rather than rely on another process to shield you from the work required to design and build a production-quality service.

Load Balancing of Process in 1 Server

I have 1 process that receives incoming connections on port 1000 on 1 Linux server. However, 1 process is not fast enough to handle all the incoming requests.
I want to run multiple processes on the server but with 1 end-point. That way the client will only see 1 end-point/process, not multiple.
I have checked LVS and other load-balancing solutions. Those seem geared towards load balancing across multiple servers.
Any other solution to help on my case?
I am looking for something more like nginx, where I will need to run multiple copies of my app.
Let me try it out.
Thanks for the help.
You also may want to go with a web server like nginx. It can load balance your app across multiple ports on the same machine, and is commonly used to load balance Ruby on Rails apps (which are single threaded). The downside is that you need to run multiple copies of your app (one on each port) for this load balancing to work.
The question is a little unclear to me, but I suspect the answer you are looking for is to have a single process accepting tasks from the network, and then forking off 'worker processes' to actually perform the work (before returning the result to the user).
In that way, the work which is being done does not block the acceptance of more requests.
As you point out, the term load balancing carries the implication of multiple servers - what you want to look for is information about how to write a linux network daemon.
The two key system calls you'll want to look at are fork and exec.
It sounds like you just need to integrate your server with xinetd.
This is a server that listens on predefined ports (that you control through config) and forks off processes to handle the actual communication on that port.
You need multi-processing or multi-threading. You aren't specific about the details of the server, so I can't tell you exactly what to do. fork and exec, as Matt suggested, can be a solution, but really: what kind of protocol/server are we talking about?
I am thinking of running multiple applications, similar to ypops.
nginx is great, but if you don't fancy a whole new web server, Apache 2.2 with mod_proxy_balancer will do the same job.
Perhaps you can modify your client to round-robin ports (say) 1000-1009 and run 10 copies of the process?
Alternatively there must be some way of internally refactoring it.
It's possible for several processes to listen to the same socket at once by having it opened before calling fork(), but (if it's a TCP socket) once accept() is called the resulting socket then belongs to whichever process successfully accepted the connection.
So essentially you could use:
Prefork, where you open the socket, fork a specified number of children which then share the load
Post-fork, where you have one master process which accepts all the connections and forks children to handle individual sockets
Threads - you can share the sockets in whatever way you like with those, as the file descriptors are not cloned, they're just available to any thread.
