I am a beginner in Node.js, and I am trying to use it in production. I want to achieve Node.js failover. As I am running a chat app, if a Node server fails the chat should not break: it should automatically reconnect to a different Node server and keep using the same socket id for further chatting, so that no chat messages are lost. Can this be achieved? Any samples?
I should not use Nginx/HAProxy. Also, let me know how the node servers should be arranged: Active-Active or Active-Passive.
PM2 is preferred as the process manager, especially for its auto-failover, auto-scaling, and auto-restart features.
Its introduction reads as follows:
PM2 is a production process manager for Node.js applications with
a built-in load balancer. It allows you to keep applications alive
forever, to reload them without downtime and to facilitate common
system admin tasks.
Starting an application in production mode is as easy as:
$ pm2 start app.js
PM2 is constantly assailed by more than 700 tests.
Official website: http://pm2.keymetrics.io
Works on Linux (stable) & MacOSx (stable) & Windows (beta).
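For example, a minimal sketch of running the chat server under PM2 so a crashed process is restarted automatically (the instance count and process name here are just illustrative):
$ pm2 start app.js -i 2 --name chat   # two clustered instances, restarted on crash
$ pm2 reload chat                      # zero-downtime reload after a deploy
$ pm2 startup && pm2 save              # bring the process list back after a server reboot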
There are several problems you're tackling at once here:
Daemonization - keeping your app up: As already mentioned, scripts such as forever can be used to supervise your Node.js application and restart it on failure. This is good for bringing the application back up after a worst-case failure.
Similarly, recluster can be used to fork your application and make it more fault-resistant by creating a supervisor process and subprocesses.
Uncaught exceptions: A known hurdle in Node.js is that asynchronous errors cannot be caught with a try/catch block. As a consequence, exceptions can bubble up and cause your entire application to crash.
Rather than letting this occur, you should use domains to create a logical grouping of activities that are affected by the exception and handle it as appropriate. If you're running a webserver with state, an unhandled exception should probably be caught and the rest of the connections closed off gracefully before terminating the application.
(If you're running a stateless application, it may be possible to just ignore the exception and attempt to carry on; though this is not necessarily advisable. use it with care).
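As an illustrative sketch of that last-resort handling (note that the domain module is deprecated in newer Node versions; the handler below simply logs, stops accepting connections, and exits so a supervisor like forever can restart the process):
const http = require('http');
const server = http.createServer(app); // 'app' stands in for your request handler
server.listen(3000);

process.on('uncaughtException', (err) => {
  console.error('Uncaught exception, shutting down:', err);
  server.close(() => process.exit(1));             // let in-flight requests finish
  setTimeout(() => process.exit(1), 5000).unref(); // force exit if close hangs
});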
Security: This is a huge topic. You need to ensure at the very least:
Your application is running as a non-root user with least privileges. Ports below 1024 require root permissions; typically you listen on a higher port and proxy to it with nginx, Docker, or similar.
You're using helmet and have hardened your application to the extent you can.
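For instance, a minimal helmet setup in an Express app (assuming Express is what you're serving with):
const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(helmet()); // sets a group of security-related HTTP headers
app.listen(3000);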
As an aside, I see you're using Apache in front of Node.js. This isn't necessarily a good idea, as Apache will probably struggle under load with its threading model more than Node.js does with its event-loop model.
Assuming you use a database for authenticating clients, there isn't much to it to accomplish; I mean, a script to manage the state of the server script, like forever does.
It would try to restart the script if it fails. Beyond that, you should design the server script to handle every known and possible unknown error, any signal sent to it, etc.
A small example would be with streams:
(Websocket Router)
|
|_(Chat Channel #1) \
|_(Chat Channel #2) - Channel Cache // hold last 15 messages of every channel
|_(Chat Channel #3) /
|_(Authentication handler) // login-logout
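A hypothetical sketch of that Channel Cache node (names are made up), holding the last 15 messages of every channel so a reconnecting client can catch up:
const MAX_MESSAGES = 15;
const channelCache = new Map(); // channel name -> recent messages

function cacheMessage(channel, message) {
  const messages = channelCache.get(channel) || [];
  messages.push(message);
  if (messages.length > MAX_MESSAGES) messages.shift(); // drop the oldest
  channelCache.set(channel, messages);
}

function replayChannel(channel) {
  return channelCache.get(channel) || []; // send to a client that (re)connects
}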
Hope I helped in some way.
For a simple approach, I think you should build a reconnect mechanism on your client side and use a process manager such as forever or PM2 to manage your Node.js processes. I tried many ways but still couldn't overcome the socket issue: the socket is always killed whenever the process stops.
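A minimal sketch of such a client-side reconnect with the socket.io client (the server URL is an assumption; note the socket id will change after a reconnect):
const io = require('socket.io-client');

const socket = io('https://chat.example.com', {
  reconnection: true,           // keep retrying if the server goes away
  reconnectionDelay: 1000,      // wait 1s between attempts
  reconnectionAttempts: Infinity
});

socket.on('disconnect', () => console.log('lost the server, waiting to reconnect...'));
socket.on('connect', () => {
  // re-join rooms and flush any messages queued while disconnected
  console.log('connected with id', socket.id);
});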
You could try using pm2 start app.js -i 0. This would run your application in cluster mode, creating one child process per CPU core. You can share socket information between the various processes.
I am running a Node Express app in pm2 cluster mode. Everything is working fine; however, I have noticed that incoming connections to my Express routes only ever hit the forked worker app instances and never the primary (master) process.
In the pm2 documentation (https://pm2.keymetrics.io/docs/usage/cluster-mode/) on cluster mode they say
Under the hood, this uses the Node.js cluster module
In the "how it works" section on the Node.js website (https://nodejs.org/api/cluster.html#cluster_how_it_works) it says
The cluster module supports two methods of distributing incoming
connections. The first one (and the default one on all platforms
except Windows) is the round-robin approach, where the primary process
listens on a port, accepts new connections and distributes them across
the workers in a round-robin fashion, with some built-in smarts to
avoid overloading a worker process.
Does this mean the primary process will never actually handle any incoming requests? That can't be! That would make the entire primary process a glorified load balancer and essentially dead weight: a bunch of code and a full CPU never really getting used.
If the above IS accurate does that mean that the primary process is a bottleneck for all incoming express connections?
What am I misunderstanding, or doing wrong, such that the primary (master) process never actually handles any requests?
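For reference, a minimal sketch of the quoted round-robin behavior using the raw cluster module: the primary only accepts and distributes connections, and only the workers run the request handler.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on older Node versions
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  // the primary listens, accepts, and hands sockets to workers round-robin;
  // it never executes the handler below
} else {
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}`);
  }).listen(3000);
}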
After I completely removed and reinstalled pm2 and then re-added all my Node apps in cluster mode via the CLI, the first instance (app 0) started receiving messages. I didn't change any code, so I'm not exactly sure what the issue was. Thank you to #JonePolvora for your time and the comments that led me to troubleshoot further.
I'm thinking about making a worker script to handle async tasks on my server, using a framework such as ReactPHP, Amp or Swoole that would be running permanently as a service (I haven't made my choice between these frameworks yet, so solutions involving any of these are helpful).
My web endpoints would still be managed by Apache + PHP-FPM as normal, and I want them to be able to send messages to the permanently running script to make it aware that an async job is ready to be processed ASAP.
Pseudo-code from a web endpoint:
$pdo->exec('INSERT INTO Jobs VALUES (...)');
$jobId = $pdo->lastInsertId();
notify_new_job_to_worker($jobId); // how?
How do you typically handle communication from PHP-FPM to the permanently running script in any of these frameworks? Do you set up a TCP / Unix Socket server and implement your own messaging protocol, or are there ready-made solutions to tackle this problem?
Note: In case you're wondering, I'm not planning to use a third-party message queue software, as I want async jobs to be stored as part of the database transaction (either the whole transaction is successful, including committing the pending job, or the whole transaction is discarded). This is my guarantee that no jobs will be lost. If, worst case scenario, the message cannot be sent to the running service, missed jobs may still be retrieved from the database at a later time.
If your worker "runs permanently" as a service, it should provide some API to interact through. I use AmPHP in my project for async services, and my services implement HTTP/Websockets servers (using Amp libraries) as an API transport.
Hey, ReactPHP core team member here. It totally depends on what your ReactPHP/Amp/Swoole process does. Looking at your example, my suggestion would be to use a message broker/queue like RabbitMQ. That way the process can pick up a job when it's ready for it and ack it when it's done. If anything happens to your process in the meantime and it dies, the broker will retry as long as the message hasn't been acked. You could also expose a small HTTP API, but that doesn't guarantee reprocessing of messages on fatal failures. Ultimately it all depends on your design; all 3 projects are toolsets to build your own architectures and systems, so it's all up to you.
I'm writing a socket.io based server in Node.js (6.9.0). I am using the built-in cluster module to enable multiple processes. For now, there are only two processes: a master and a worker. The master receives the connections and maintains an in-memory global data structure (which the worker can query via IPC). The worker process does the majority of the work by handling each incoming connection.
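For context, the master/worker IPC query described here looks roughly like this (the message shapes are illustrative):
const cluster = require('cluster');

if (cluster.isMaster) {
  const globalData = { rooms: {} }; // in-memory global structure
  const worker = cluster.fork();
  worker.on('message', (msg) => {
    if (msg.type === 'query') {
      worker.send({ type: 'result', id: msg.id, value: globalData[msg.key] });
    }
  });
} else {
  process.send({ type: 'query', id: 1, key: 'rooms' }); // worker asks the master
  process.on('message', (msg) => { /* use msg.value */ });
}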
I am finding a hanging condition that I cannot attribute to any internal failure when the server is stressed at 300 concurrent users. Under lower concurrency, I don't see the hanging condition.
I'm enabling all forms of debugging (using the debug module: socket.io:socket, socket.io:client as well as my own custom calls to debug).
The last activity I can see is in socket.io; however, the messages indicate that sockets are closing ("reason client namespace disconnect") due to their own "end of test" cycle. It just seems like incoming connections are not being serviced.
I'm using Artillery.io as the test client.
In the server application, I have handlers for uncaught exceptions and try-catch blocks around everything.
In a prior iteration, I also used cluster, but reversed the responsibilities so that the master process handled the connections (with the worker handling global data). That didn't exhibit the same failure. Not sure if something is wrong with the connection distribution. For that, I have also dumped internalMessage events to monitor the internal workings of cluster.
I am not using any other module for connection distribution or sticky sessions. As there is only a single process handling connections (at this time), it doesn't seem relevant.
I was able to remove the hanging condition by changing the cluster scheduling policy from round-robin (SCHED_RR) to none, which leaves distribution to the OS (SCHED_NONE). I can't tell whether this is due to a bug in connection distribution (or something else inherent in the scheduling policy), but this one change seems to prevent the hanging condition.
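For reference, a sketch of that change; the policy has to be set before any workers are forked (the same thing can be done with the NODE_CLUSTER_SCHED_POLICY=none environment variable):
const cluster = require('cluster');

// let the OS distribute incoming connections instead of the primary doing round-robin
cluster.schedulingPolicy = cluster.SCHED_NONE;

if (cluster.isMaster) {
  cluster.fork();
} else {
  // ... start the socket.io server here
}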
I'm working on a service that needs to analyze data from social media networks every five minutes for different users. I'm developing it in node.js and I will implement it on Heroku.
According to this article on the Heroku website, the best way to do that is to separate the logic of the scheduler from the logic of the worker. The idea is to have one dyno dedicated to scheduling tasks, to avoid duplication. This dyno instructs a farm of workers (n dynos, as needed) to do the tasks.
Here is the Procfile for this architecture:
web: node web.js
worker: node worker.js
clock: node clock.js
The problem is how to implement this in Node.js. I googled it, and the suggestion is to use a message queue system (like IronMQ, RabbitMQ, or CloudAMQP). But I'm trying to keep my code and app simple, with minimal need for add-ons.
The question is: is there a way to communicate directly from my scheduler (clock) to the worker dynos?
Thanks for your answers.
Heroku dynos do not have fixed IP addresses, so there is no way to open a direct connection between them. That's why you need to create a separate server instance with a static IP or other fixed endpoint that acts as a go-between.
You have at least two viable options: a RabbitMQ-type message queue, or a stripped down version using a pub-sub redis feed. I generally use the latter because it's quick, simple, and sufficiently robust for all my needs (e.g. if a message gets lost every once in a blue moon, it's no big deal). If, however, it is essential that you never lose a message, you should use a full-blown message queue like RabbitMQ.
Setting up the redis implementation is very straightforward. There are several redis add-ons (I use RedisCloud) with free and inexpensive plans. When you provision them, you get an endpoint to connect to and a password. Then you just connect your web dyno(s) and worker dyno(s) to your redis instance such that your web app publishes tasks to a channel and the worker subscribes to that channel.
If you need the web app to communicate with the client after task completion, you just create another channel for the worker to publish task completion messages and the web app to listen for them.
You'll never get duplication of tasks, as each time a worker receives a message it pops off the queue.
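A minimal sketch of that channel setup with the node redis client (v4 API; the REDISCLOUD_URL variable and channel name are assumptions):
const { createClient } = require('redis');

// web dyno: publish a task onto the 'tasks' channel
async function notifyWorker(task) {
  const pub = createClient({ url: process.env.REDISCLOUD_URL });
  await pub.connect();
  await pub.publish('tasks', JSON.stringify(task));
  await pub.quit();
}

// worker dyno: subscribe and process tasks as they arrive
async function startWorker() {
  const sub = createClient({ url: process.env.REDISCLOUD_URL });
  await sub.connect();
  await sub.subscribe('tasks', (message) => {
    const task = JSON.parse(message);
    // ... do the work, then publish to a 'task-done' channel if the web app needs to know
  });
}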
If I understood this correctly, you want to run the clock as one app and spin up workers as separate apps? Sure, there is a direct way: open a connection from the clock app towards the worker app.
For example, have every worker open a client sockets connection to the clock. Then the clock can communicate to them and relay orders.
Or use WebRTC. That way the workers will talk to the clock, but they can also talk to each other.
Or make an (authenticated) HTTP(s) REST endpoint on the worker where it will receive tasks. Like, POST /tasks will create a task on the worker. If the task is short, it can reply right away, so that the clock knows the job is done. Or if it's a longer task, it can acknowledge it, but later call an endpoint on the clock to say it's done, something like PUT /tasks/32.
Or even more directly, open a direct net connection towards the clock, for example on worker start (and the other way around). Use dgram and send UDP messages between worker and clock.
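As a rough sketch of the dgram option (the port, host name, and message format are assumptions; keep in mind UDP delivery is not guaranteed):
const dgram = require('dgram');

// worker: listen for task messages from the clock
const workerSocket = dgram.createSocket('udp4');
workerSocket.on('message', (msg) => {
  const task = JSON.parse(msg.toString());
  // ... run the task
});
workerSocket.bind(41234);

// clock: send a task to a worker
const clockSocket = dgram.createSocket('udp4');
const payload = Buffer.from(JSON.stringify({ job: 'analyze', userId: 42 }));
clockSocket.send(payload, 41234, 'worker-host.example', () => clockSocket.close());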
In any case, I also agree with the people suggesting an MQ like RabbitMQ: it is much better to just push jobs/tasks onto it. It can then distribute tasks as needed and, based on the unacked count in the job queue, more workers can be spun up when needed.
But your question is very broad, so to get more detailed answers you could provide a few more specifics.
This might be helpful.
http://blog.andyjiang.com/intermediate-cron-jobs-with-heroku/
Basically you require the worker file directly into the clock file.
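That approach might look roughly like this (the file names, the exported function, and the getActiveUserIds helper are all hypothetical):
// worker.js
module.exports.analyzeSocialMedia = async function (userId) {
  // ... fetch and analyze data for one user
};

// clock.js
const { analyzeSocialMedia } = require('./worker');

setInterval(async () => {
  const userIds = await getActiveUserIds(); // hypothetical helper
  for (const id of userIds) {
    await analyzeSocialMedia(id);
  }
}, 5 * 60 * 1000); // every five minutes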
I solved it in an easy way with the following three steps:
Set credit card information in Heroku account;
Installed Heroku Scheduler addon (you can use the command heroku addons:create scheduler:standard --app <yourAppName>)
Set up the script to run as scheduled job
More info here or here.
I need to create a multi-node web server that will allow controlling the number of worker nodes (processes) in real time and changing each process's UID and GID.
For example, at start the server launches 5 workers and pushes them into a worker pool.
When the server gets a new request, it searches for a free worker, sets its UID or GID if needed, and hands it the request to process. If there are no free workers, the server creates a new one, sets its GID or UID, also pushes it into the pool, and so on.
Can you suggest how this can be implemented?
I've tried this example http://nodejs.ru/385 but it doesn't allow controlling the number of workers, so I decided there must be another solution, but I can't find it.
If you have some examples or links that will help me resolve this issue, please share them.
I guess you are looking for this: http://learnboost.github.com/cluster/
I don't think cluster will do it for you.
What you want is to use one process per request.
Bear in mind that this can be very inefficient; Node is designed precisely to avoid this kind of per-request worker processing. But if you really must do it, then you must do it.
On the other hand, Node is very good at handling processes, so you need to keep a process pool, which is easily accomplished using Node's built-in child_process.spawn API.
Also, you will need a way for you to communicate to the worker process.
I suggest opening a unix-domain socket and sending the client connection file descriptor, so you can delegate that connection into the new worker.
Also, you will need to handle edge-cases for timeouts, etc.
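A rough sketch of such a pool with child_process.spawn (the worker script, pool size, and uid/gid values are assumptions; changing uid/gid requires the parent process to have sufficient privileges):
const { spawn } = require('child_process');

const pool = [];

function addWorker(uid, gid) {
  // run worker.js as the given user/group; 'ipc' gives us worker.send()/messages
  const worker = spawn(process.execPath, ['worker.js'], {
    uid,
    gid,
    stdio: ['inherit', 'inherit', 'inherit', 'ipc'],
  });
  worker.busy = false;
  worker.on('message', () => { worker.busy = false; }); // worker reports it is done
  pool.push(worker);
  return worker;
}

function dispatch(task, uid, gid) {
  let worker = pool.find((w) => !w.busy && w.exitCode === null);
  if (!worker) worker = addWorker(uid, gid); // grow the pool on demand
  worker.busy = true;
  worker.send(task); // hand the work over the IPC channel
}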
https://github.com/pgte/fugue I use this.