Zero downtime deploy with node.js and mongodb?

I'm looking at building a global app from the ground up that can be updated and scaled transparently to the user.
The architecture so far is very simple: each part of the application has its own process and talks to the others through sockets.
This way I can spawn as many instances as I want for each part of the application and distribute them across the globe according to my needs.
In front of the system I'll have a load balancer, which will route users to their closest instance; when new code is deployed, my instances will spawn new processes running the new code, route new requests to them, and gracefully shut down the old ones.
Thank you very much for any advice.
Edit:
The question is: What is the best (and simplest) solution for achieving zero downtime when deploying node to multiple instances?
About the app:
https://github.com/Raynos/boot for "socket" connections,
http for http requests,
mongo for database
Solutions I'm trying at the moment:
https://www.npmjs.org/package/thalassa (which manages HAProxy configuration files and app instances); if you don't know it, watch this talk: https://www.youtube.com/watch?v=k6QkNt4hZWQ and be aware that crowsnest is being replaced by https://github.com/PearsonEducation/thalassa-consul

Deployment with zero downtime is only possible if the data shared between old and new nodes is compatible.
So if you change the data structure, you have to ship an intermediate release that can handle both the old and the new structure without actually using the new one, replace all nodes with that intermediate version, and only then roll out the new version.
Taking nodes in and out of production can be done with your load balancer (plus a grace period until all sessions on a node have expired; I don't know enough about your application to be more specific).
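For the graceful-handover piece, here is a minimal sketch in Node, assuming a plain HTTP service behind the load balancer: on SIGTERM the process stops accepting new connections, lets in-flight requests drain, and then exits, so the balancer can move traffic to the instances running the new code (the port and timeout values are hypothetical).

// Hedged sketch: graceful shutdown so the load balancer can drain a node during deploy.
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok\n');
});
server.listen(3000);

process.on('SIGTERM', () => {
  // Stop accepting new connections; requests already in flight finish normally.
  server.close(() => process.exit(0));
  // Safety net in case long-lived connections never drain.
  setTimeout(() => process.exit(1), 30000).unref();
});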

Related

Why is Azure MySQL database unresponsive at first

I have recently set up an 'Azure Database for MySQL flexible server' using the burstable tier. The database is queried by a React frontend via a node.js api, each of which runs on its own separate Azure app service.
I've noticed that when I come to the app first thing in the morning, there is a delay before database queries complete. The React app is clearly running when I first come to it, which is serving the html front-end with no delays, but queries to the database do not return any data for maybe 15-30 seconds, like it is warming up. After this initial slow performance though, it then runs with no delays.
The database contains about 10 records at the moment, and 5 tables, so it's tiny.
This delay could conceivably be due to some delay with the node.js server, but as the React server is running on the same type of infrastructure (an app service), configured in the same way, and is immediately available when I go to its URL, I don't think this is the issue. I also have no such delays in my dev environment which runs on my local PC.
I therefore suspect there is some delay with the database server, but I'm not sure how to troubleshoot. Before I dive down that rabbit hole, though, I was wondering whether a delay when you first start querying a database (after, say, 12 hours of inactivity) is simply a characteristic of the burstable tier on Azure?
There may be more factors affecting this (see comments from people on my original question), but my solution has been to enable two global variables that persist and reload the InnoDB buffer pool, improving initial load times. The following should be set to ON in the Azure config:
'innodb_buffer_pool_dump_at_shutdown'
'innodb_buffer_pool_load_at_startup'
This is explained further in the following best practices documentation: https://learn.microsoft.com/en-us/azure/mysql/single-server/concept-performance-best-practices in the section marked 'Use InnoDB buffer pool Warmup'
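As a quick sanity check that the settings took effect, a minimal sketch querying them from Node (the mysql2 driver, host name, and credentials are all assumptions):

// Hedged sketch: confirm the buffer pool warmup variables are ON (mysql2 assumed).
const mysql = require('mysql2/promise');

async function checkWarmupSettings() {
  const conn = await mysql.createConnection({
    host: 'example.mysql.database.azure.com', // hypothetical server
    user: 'admin',
    password: process.env.DB_PASSWORD,
    ssl: {}, // Azure requires TLS
  });
  const [rows] = await conn.query(
    "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_%'"
  );
  console.log(rows); // expect dump_at_shutdown / load_at_startup = ON
  await conn.end();
}

checkWarmupSettings().catch(console.error);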

Why does my Node express pm2 primary process in cluster mode never handle incoming requests?

I am running a node express app in pm2 cluster mode. Everything is working fine; however, I have noticed that incoming connections to my express routes only ever hit the forked worker app instances and never the primary (master) process.
In the pm2 documentation (https://pm2.keymetrics.io/docs/usage/cluster-mode/) on cluster mode they say
Under the hood, this uses the Node.js cluster module
In the "how it works" section on the Node.js website (https://nodejs.org/api/cluster.html#cluster_how_it_works) it says
The cluster module supports two methods of distributing incoming connections. The first one (and the default one on all platforms except Windows) is the round-robin approach, where the primary process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.
Does this mean the primary process never actually handles any incoming requests? That can't be!! That would make the entire primary process a glorified load balancer and essentially dead weight: a bunch of code and a full CPU never really getting used.
If the above IS accurate, does that mean the primary process is a bottleneck for all incoming express connections?
What am I understanding incorrectly, or doing wrong, such that the primary (master) process never actually handles any requests?
After I completely removed and reinstalled pm2 and then re-added all my node apps in cluster mode via the CLI, the first instance (app 0) started receiving requests. I didn't change any code, so I'm not exactly sure what the issue was. Thank you to #JonePolvora for the comments that led me to troubleshoot further.
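For reference, a minimal sketch of the behavior the Node docs describe, assuming Node 16+ (older versions spell it cluster.isMaster): in the default round-robin mode the primary only accepts and distributes connections, and only the workers ever run the request handler.

// Hedged sketch: with round-robin scheduling, the primary distributes
// connections but never executes the HTTP handler itself.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on Node < 16
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  http.createServer((req, res) => {
    res.end(`handled by worker pid ${process.pid}\n`);
  }).listen(3000);
}

As far as I understand pm2's cluster mode, every numbered app instance is a worker in this sense; the distributing primary is pm2's own daemon, which is why none of your listed instances acts as the master.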

Caching posts using redis

I have a forum which contains groups; new groups are created all the time by users. Currently I'm using node-cache with a TTL to cache groups and their content (posts, likes and comments).
The server worked great at the beginning, but performance decreased as more people started using the app, so I decided to use the node.js Cluster module as the next step to improve performance.
node-cache will cause a consistency problem: the same group could be cached in two workers, so if one of them changes it, the other will not know (unless you tell it somehow).
The first solution that came to my mind is using redis to store the whole group and its content with the help of redis data types (sets and hash objects), but I don't know how efficient this would be.
The other solution is using redis to map requests to the correct worker. In this case the cached data is distributed randomly across workers, so when a worker receives a request related to some group, it looks up the group's owner (the worker that holds this group instance in memory) in redis, asks it for the wanted data using node-ipc, and then returns that to the user.
Is there any problem with the first solution?
The second solution does not provide fairness (what if all the popular groups land in the same worker?); is there a solution for this?
Any suggestions?
Thanks in advance
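Regarding the first solution, a minimal sketch of what caching a group in a shared Redis might look like (ioredis is an assumed dependency; the key scheme is hypothetical). Because every cluster worker reads and writes the same Redis instance, the stale-copy problem from node-cache disappears:

// Hedged sketch: cache a group as a Redis hash plus a set of post ids,
// shared across all cluster workers (ioredis assumed).
const Redis = require('ioredis');
const redis = new Redis();

async function cacheGroup(group) {
  const key = `group:${group.id}`; // hypothetical key scheme
  await redis.hset(key, 'name', group.name, 'likes', group.likes);
  if (group.postIds.length) {
    await redis.sadd(`${key}:posts`, ...group.postIds);
  }
  await redis.expire(key, 3600); // TTL, as with node-cache
}

async function getGroup(id) {
  return redis.hgetall(`group:${id}`); // empty object means a cache miss
}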

Node.js - child process handle urls related to itself

I'm trying to scale a chat app using socket.io + cluster. Is it possible for child processes to handle only the incoming requests that belong to their process id (assigned at fork)?
For example:
http://mydomain/calculate?process=1
The above request would be handled only by process 1; other processes would ignore it. This way I want to make sure requests for the same room are handled by the same process, so I may not have to use RedisStore as the socket.io backend.
I also wonder how RedisStore works, because when using it I found the io.sockets.manager.rooms data is not accurate across all processes.
Edit:
Put it another way: can cluster master process dispatch request to different child processes based on the querystring?
The answer is no. The OS takes care of load balancing in this situation, and in order to inspect the query string you already have to be connected to a web server (in your case, a child process).
From my experience I find cluster a bit useless. It is a lot easier to spawn multiple NodeJS processes (on multiple ports) and put a proxy (nginx?) in front of them. It is easy and scalable.
As for socket.io: I don't think it works correctly with cluster (because of shared global variables, which cause issues). Again, spawning separate NodeJS processes should fix the problem. It will also be useful once you reach the point where you have to scale to multiple machines; any tricks with cluster won't help you then.
One last note: socket.io does not scale well. I suggest writing your own WebSocket server (based on WS, for example) and implementing your own scaling mechanism, for example based on all-to-all UDP pinging, which should scale well with a small number of servers (50? 100?).
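If you do want query-string-based dispatch anyway, a hedged sketch of the proxy-in-front approach: a small userland router that forwards by the ?process= parameter to independently spawned Node processes on their own ports (the http-proxy package is an assumed dependency; the ports are hypothetical). For socket.io you would also have to forward upgrade events via proxy.ws.

// Hedged sketch: route by the ?process= query parameter to independent
// Node processes listening on their own ports, instead of using cluster.
const http = require('http');
const httpProxy = require('http-proxy'); // assumed dependency

const proxy = httpProxy.createProxyServer();
const basePort = 3001; // workers assumed to listen on 3001, 3002, ...

http.createServer((req, res) => {
  const { searchParams } = new URL(req.url, 'http://localhost');
  const n = parseInt(searchParams.get('process'), 10) || 1;
  proxy.web(req, res, { target: `http://127.0.0.1:${basePort + n - 1}` });
}).listen(3000);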

nodeJS multi node Web server

I need to create a multi-worker web server that allows controlling the number of workers in real time and changing the process UID and GID.
For example, at start the server spawns 5 workers and pushes them into a worker pool.
When the server gets a new request it searches for a free worker, sets the UID or GID if needed, and hands it the request to process. If there are no free workers, the server creates a new one, sets its GID or UID, pushes it into the pool as well, and so on.
Can you suggest how this could be implemented?
I've tried this example http://nodejs.ru/385 but it doesn't allow controlling the number of workers, so I decided there must be another solution, but I can't find one.
If you have some examples or links that would help me resolve this, please share them.
I guess you are looking for this: http://learnboost.github.com/cluster/
I don't think cluster will do it for you.
What you want is to use one process per request.
Keep in mind that this can be very inefficient, and node is designed to avoid exactly this kind of per-request worker model, but if you really must do it, then you must do it.
On the other hand, node is very good at managing processes, so you need to keep a process pool, which is easily accomplished using node's built-in child_process.spawn API.
You will also need a way to communicate with the worker process.
I suggest opening a unix-domain socket and sending over the client connection's file descriptor, so you can delegate that connection to the new worker.
You will also need to handle edge cases such as timeouts.
I use https://github.com/pgte/fugue for this.
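A minimal sketch of the pool idea, using child_process.fork and Node's built-in handle passing rather than a hand-rolled unix-domain socket (worker.js is a hypothetical script; UID/GID switching is only hinted at in a comment and requires running as root):

// Hedged sketch: a fixed pool of forked workers; the parent accepts TCP
// connections and hands the raw socket to a worker over the IPC channel.
const { fork } = require('child_process');
const net = require('net');

const pool = Array.from({ length: 5 }, () => fork('worker.js'));

let next = 0;
net.createServer({ pauseOnConnect: true }, (socket) => {
  const worker = pool[next++ % pool.length]; // round-robin; a busy/free list could replace this
  worker.send('connection', socket); // Node IPC can transfer the socket handle itself
}).listen(8000);

// worker.js (sketch):
// process.on('message', (msg, socket) => {
//   if (msg !== 'connection') return;
//   // process.setgid()/process.setuid() could be called here if needed
//   socket.resume();
//   socket.end('handled by ' + process.pid + '\n');
// });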
