How to make a distributed node.js application?

Creating a node.js application is simple enough.
var app = require('express')();
app.get('/', function (req, res) {
  res.send("Hello world!");
});
app.listen(3000);
But suppose people became obsessed with your Hello World! application and exhausted your resources. How could this example be scaled up in practice? I don't understand it, because yes, you could open several node.js instances on different computers - but when someone accesses http://your_site.com/ it goes directly to one specific machine, one specific port, one specific node process. So how?

There are many ways to deal with this, but it boils down to 2 things:
being able to use more cores per server
being able to scale beyond a single server.
node-cluster
For the first option, you can use node-cluster or the same solution as for the second option. node-cluster (http://nodejs.org/api/cluster.html) is essentially a built-in way to fork the node process into one master and multiple workers. Typically, you'd want 1 master and n-1 to n workers (n being your number of available cores).
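For example, a minimal sketch of that master/worker pattern (the port and the hello-world handler are just illustrative):

var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per core
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // Replace any worker that dies so capacity stays constant
  cluster.on('exit', function (worker) {
    cluster.fork();
  });
} else {
  // Each worker runs the same server; the cluster module shares the port
  var app = require('express')();
  app.get('/', function (req, res) {
    res.send('Hello world from worker ' + cluster.worker.id + '!');
  });
  app.listen(3000);
}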
load balancers
The second option is to use a load balancer that distributes the requests amongst multiple workers (on the same server, or across servers).
Here you have multiple options as well. Here are a few:
a node based option: Load balancing with node.js using http-proxy (see the sketch after this list)
nginx: Node.js + Nginx - What now? (using more than one upstream server)
apache: (no clearly helpful link I could use, but a valid option)
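To illustrate the node based option, here is a primitive round-robin balancer sketched with http-proxy (the target addresses are assumptions; point them at your own app instances):

var http = require('http');
var httpProxy = require('http-proxy');

// Assumed targets: three app instances on local ports
var targets = [
  'http://localhost:9001',
  'http://localhost:9002',
  'http://localhost:9003'
];
var proxy = httpProxy.createProxyServer({});
var i = 0;

proxy.on('error', function (err, req, res) {
  res.writeHead(502);
  res.end('Bad gateway');
});

http.createServer(function (req, res) {
  // Rotate through the targets, one request each
  proxy.web(req, res, { target: targets[i] });
  i = (i + 1) % targets.length;
}).listen(8000);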
One more thing: once you start having multiple processes serving requests, you can no longer use process memory to store state; you need an additional service to store shared state. Redis (http://redis.io) is a popular choice, but by no means the only one.
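For example, a shared hit counter kept in Redis instead of process memory; a minimal sketch using the ioredis client (the key name and default connection are assumptions):

var express = require('express');
var Redis = require('ioredis');

var app = express();
var redis = new Redis(); // connects to localhost:6379 by default

app.get('/', function (req, res) {
  // INCR is atomic in Redis, so concurrent workers never lose an update
  redis.incr('hits').then(function (hits) {
    res.send('Hello world! You are visitor #' + hits);
  });
});

app.listen(3000);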
If you use services such as Cloud Foundry, Heroku, and others, they set this up for you so you only have to worry about your app's logic (and using a service to deal with shared state).

I've been working with node for quite some time, but recently I got the opportunity to try scaling my node apps. I have been researching this topic for some time now and have come across the following prerequisites for scaling:
My app needs to be available on distributed systems, each running multiple instances of node.
Each system should have a load balancer that helps distribute traffic across the node instances.
There should be a master load balancer that should distribute traffic across the node instances on distributed systems.
The master balancer should always be running OR should have a dependable restart mechanism to keep the app stable.
For the above requisites I've come across the following:
Use modules like cluster to start multiple instances of node in a system.
Use nginx always. It's one of the simplest mechanisms for creating a load balancer I've come across so far.
Use HAProxy to act as a master load balancer. A few pointers on how to use it and keep it forever running.
Useful resources:
Horizontal scaling node.js and websockets.
Using cluster to take advantage of multiple cores.
I'll keep updating this answer as I progress.

The basic way to use multiple machines is to put them behind a load balancer, and point all your traffic to the load balancer. That way, someone going to http://my_domain.com will reach the load balancer machine. The sole purpose (for this example anyway; in theory more could be done) of the load balancer is to delegate the traffic to a given machine running your application. This means that you can have any number of machines running your application, while an external machine (in this case a browser) only needs to go to the load balancer address to reach one of them. The client doesn't (and doesn't have to) know which machine is actually handling its request. If you are using AWS, it's pretty easy to set up and manage this. Note that Pascal's answer has more detail about your options here.
With Node specifically, you may want to look at the Node Cluster module. I don't really have a lot of experience with this module, but it should allow you to spawn multiple processes of your application on one machine, all sharing the same port. Also note that it's still experimental and I'm not sure how reliable it will be.

I'd recommend taking a look at http://senecajs.org, a microservices toolkit for Node.js. It is a good starting point for beginners and for starting to think in "services" instead of monolithic applications.
Having said that, building distributed applications is hard, takes time to learn, takes a LOT of time to master, and you will usually face a lot of trade-offs between performance, reliability, maintenance, etc.

Related

Faye clustering multiple nodes NodeJS

I am trying to build a pub/sub infrastructure using faye (nodejs). I wish to know whether horizontal scaling would be possible or not.
One nodejs process will run on a single core, so when people talk about clustering, they talk about creating multiple processes on the same machine, sharing a port, and sharing data through redis.
Like this:
http://www.davidado.com/2013/12/18/using-node-js-cluster-with-socket-io-for-push-notifications/
Firstly, I don't understand how we make sure that each of the forked processes goes to a different core. If I fork 10 node servers on a machine with 4 cores, will they be equally distributed?
What if I wish to add a new machine, and thus scale out? I have not seen any such support anywhere. I am not sure if it is even possible to do.
Let's say somehow multiple nodes are being used and there is some load balancer. But one client will connect to only one server process. So when a client C1 publishes on a channel on which a client C2 has subscribed, and C1 is connected to process P1 and C2 is connected to process P2, how will P1 publish the message to C2 when it doesn't have the connection?
This would probably be possible in case of a single machine, because the cluster module enables all processes to share the same port and the connections too.
I am fairly new to the web world, as well as nodejs and faye. Please enlighten me if there is something wrong in the question.
You are correct in thinking that the cluster module allows multiple cores to be used on a single machine. The cluster module allows the same application to be spawned multiple times whilst listening to the same port. The distribution amongst the cores is down to the operating system, so if you have 10 processes and 4 cores then the OS will figure out how best to distribute them (as long as they haven't been spawned with a set affinity). By default this shouldn't be a concern for you.
Load-balancing can be done through node too, but that is separate from clustering. Instead you would have a separate application that grabs the load statistics of each running server and proxies the HTTP request to the most appropriate server (using http-proxy as an example). A very primitive load balancer would simply send one request to each running server instance in turn, round-robin style, to give an even distribution.
The final point about sharing messages between all the instances assumes that there is a single point where all the messages are held. In the article you linked to they assume that there is only one server and all the processes share access to the redis instance. As they all access the same redis instance, all processes will be able to receive the same messages. If we're going to start thinking about multiple servers that are in different locations in the world that all have different message stores (i.e. their own redis instances) then we get into the domain of 'replication'. Some data stores are built with this in mind and redis is one of them. You end up with a 'master' set of data and a set of 'slaves' that will periodically update with the master and grab anything they are missing. It is important to note here that messages will not be sent in 'real-time' here unless you have a very intensive replication process.
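To make the single-redis case concrete: the usual mechanism is Redis pub/sub. Every process subscribes to the channel; whichever process receives a publish hands it to Redis, and Redis fans it out to all subscribed processes, each of which pushes it to its own connected clients. A minimal sketch with ioredis (the channel name and the wsClients list are assumptions):

var Redis = require('ioredis');
var pub = new Redis();
var sub = new Redis(); // a connection in subscriber mode must be dedicated

var wsClients = []; // this process's own websocket connections

sub.subscribe('chat');
sub.on('message', function (channel, message) {
  // Deliver to the clients connected to *this* process (e.g. C2 on P2)
  wsClients.forEach(function (client) {
    client.send(message);
  });
});

// When C1 publishes to P1, P1 hands the message to Redis:
function onClientPublish(message) {
  pub.publish('chat', message);
}

This is how P1 can reach C2 without holding C2's connection itself.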
In conclusion, developers go through this chain of scaling for their applications. The first is to make the application multi-process (the cluster module). The second is to have a load balancer that proxies the http request to the appropriate server that is running the multi-process application. The third is to replicate the datastores so that the servers can run independently but keep in sync with each other.

Load Balancing in Nodejs

I recently started with node and I have been reading a lot about its limitation of being single-threaded and how it does not utilise your cores, and then I read this:
http://bit.ly/1n2YW68 (which talks about the new cluster module of nodejs for load balancing)
Now I'm not sure I completely agree with it :) because the first thing I thought of, before starting with node, for making it utilise the cores with proper load balancing, was a web server with something like nginx's upstream module,
like doing something like this:
upstream domain1 {
    server nodeapp1;
    server nodeapp2;
    server nodeapp3;
}
So my question is: is there an advantage to using the cluster module for load balancing to utilise the cores? Does it have any significant advantage over web-server load balancing, or is the blog post too far from real use?
Note: I'm not concerned about load balancing handled by various app servers like Passenger (Passenger has nodejs support as well, but that's not what I'm looking for in an answer :)), which I already know about since I'm mostly a ruby programmer.
One other option you can use to cluster NodeJs applications is to deploy the app using PM2.
Clustering is just as easy as this; you don't need to implement clustering by hand:
pm2 start app.js -i max
PM2 automatically detects the number of available CPUs and runs as many processes as possible.
Read about PM2 cluster mode here
http://pm2.keymetrics.io/docs/usage/cluster-mode/
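If you prefer a config file over CLI flags, the same thing can be sketched as a PM2 ecosystem file (the app name and script path are assumptions):

// ecosystem.config.js - start it with: pm2 start ecosystem.config.js
module.exports = {
  apps: [{
    name: 'my-app',       // assumed app name
    script: 'app.js',     // assumed entry point
    instances: 'max',     // one process per available CPU
    exec_mode: 'cluster'  // all instances share one port
  }]
};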
For controlling the load of IO operations, I wrote a library called QueueP based on the memoization concept. You can even customize the memoization logic, and sometimes gain speedups of more than 10x:
https://www.npmjs.com/package/queuep
As far as I know, the built-in node cluster is not a good solution yet (load is not evenly distributed across cores), at least until v0.12, which introduces round-robin load balancing: http://strongloop.com/strongblog/whats-new-in-node-js-v0-12-cluster-round-robin-load-balancing/
So you should use nginx until then. After that we will see some benchmarks comparing both options and see if the built-in cluster module is a good choice.
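For reference, v0.12's round-robin scheduling can also be requested explicitly; a minimal sketch (the policy must be assigned before any worker is forked):

var cluster = require('cluster');

// Opt in to round-robin distribution (the default on non-Windows
// platforms since v0.12); must be set before calling fork()
cluster.schedulingPolicy = cluster.SCHED_RR;

if (cluster.isMaster) {
  cluster.fork();
  cluster.fork();
} else {
  require('http').createServer(function (req, res) {
    res.end('handled by worker ' + cluster.worker.id);
  }).listen(8000);
}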

How do I set up routing to multiple instances of a node.js server on one url?

I have a simple node.js server app built that I'm hoping to test out soon. It's single-threaded and works fine without any child processing whatsoever. My problem is that the server box has multiple cores, and the simplest way I can think of to utilize them is by running multiple instances of the server app. However, this would require them all to be on the same domain name, so some sort of request routing is required. I personally don't have much experience with servers in general, and I don't know if this is a task for node.js to perform or for some other, less complicated program (or a more complicated one). If there is a node.js mechanism to solve this, for example one running instance sending incoming requests to the next instance, how would I detect when this needs to happen? Conversely, if I use some other program, how will it detect when it needs to start talking to a new instance?
Node.js includes built-in support for managing a cluster of instances of your application to take advantage of multiple cores via the cluster module.

Are databases attached to dynos in heroku?

I want to try out heroku, but am not quite sure if I understand all terms correctly.
I have an app with node.js and redis & my main focus is scaling and speed.
In a traditional environment I would have two servers in front of a load balancer; both servers are totally independent, share the same code and have an own redis instance. Both servers don't know of each other (the data is synched by a third party server, but that is not of interest for that case).
I would then put a load balancer in front of them. Now I could easily scale, as both instances are not aware of each other and I could just add more instances if I wish.
Can I mirror that environment in a dyno or can't I attach a redis instance to a dyno?
If something is unclear, please ask, as I'm new to paas!
As I understand it: I would have a dyno for my node-app and would just add another instance of it. That's cool, but would they share the same redis or can I make them independent?
You'd better forget traditional architectures and try to think of it this way:
A dyno is a process serving HTTP requests - the absolute minimum of an app instance on heroku.
For one application instance you can have as many dynos as you want, and it is totally transparent. No need to think about servers, load balancing, etc. - everything is taken care of.
A redis instance is basically a service used by the application instance, and therefore by one or more dynos. Again, servers, load balancing, etc. are all taken care of.
Maybe you want to review the "How it works" page on heroku.com again now.
You can have as many dynos for one URL as you want - you just change the value in the controller. This is actually one of the best features of Heroku: you don't care about servers; you increase the number of dynos and thereby increase the number of requests which can be processed simultaneously.
The same goes for redis - it doesn't work by you adding instances yourself; you just switch to a more performant plan, see https://addons.heroku.com/redistogo. Again, forget about servers.

run multiple instances of node.js in parallel

I was thinking about using a reverse proxy to distribute API requests to multiple node.js instances of a REST API. This way it should be possible to achieve much better overall performance, since a multiprocessor system can run multiple instances perfectly well, one per core (or similar).
What are common solutions for such a distribution of requests onto multiple node instances and what are important points to take in mind?
First and foremost, you can use the cluster module for running many instances of the same server application. It's important to remember to correctly handle shared state, such as storing sessions in a common database.
This works standalone and you can let your users connect directly to that server, or use e.g. nginx, HAProxy, Varnish or lighttpd in front of your server.
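For the shared-state point, a common concrete case is session storage; a minimal sketch assuming express-session with connect-redis (whose wiring varies between versions):

var express = require('express');
var session = require('express-session');
// connect-redis's export style differs across versions; this is the classic form
var RedisStore = require('connect-redis')(session);

var app = express();

app.use(session({
  store: new RedisStore({ host: 'localhost', port: 6379 }),
  secret: 'change-me', // placeholder secret
  resave: false,
  saveUninitialized: false
}));

app.get('/', function (req, res) {
  // req.session now lives in Redis, so any instance can serve any user
  req.session.views = (req.session.views || 0) + 1;
  res.send('Views in this session: ' + req.session.views);
});

app.listen(3000);

With sessions in Redis, the reverse proxy in front doesn't need sticky sessions.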
