I have a Node.js app which uses MongoDB for persistence. I am going to deploy it on two VMs, and I am going to use nginx for the load-balancing setup. Now, how should MongoDB be installed? I understand that I should install it on one of the VMs, which can then refer to it simply as localhost. How should MongoDB be made available to the other server behind the load balancer?
Based on what I see as your desired setup (an nginx load balancer, two app instances, and MongoDB), I would highly recommend four servers.
Server 1: nginx load balancer. The main entry point for your cluster. This is the server your public domain points to.
Servers 2, 3: Node.js application instances. nginx is configured to load balance between these two servers. As your application grows, you can continue to add nodes at this layer to keep up with demand.
Server 4: MongoDB. Your Node.js instances can all be configured to point to this Mongo instance. At a minimum you should probably have a fifth server for a Mongo secondary, but that's up to you.
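For example, each Node instance would point at Server 4 by its private address instead of localhost. Here is a minimal sketch using the official MongoDB Node driver (3.x-style API); the hostname and database name are placeholders, and note that mongod must be configured (net.bindIp in /etc/mongod.conf) to listen on the private interface, not just 127.0.0.1:

const { MongoClient } = require('mongodb');

// 'db.internal' stands in for Server 4's private IP or hostname.
const url = 'mongodb://db.internal:27017';

MongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {
  if (err) throw err;
  const db = client.db('myapp'); // placeholder database name
  // hand `db` to the rest of the app, e.g. app.locals.db = db;
});

Also remember to firewall port 27017 so only the app servers, not the public internet, can reach it.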
I have a VPS server (Ubuntu) and I want multiple Node.js sites to run on it, thus multiple domains.
What I was trying out:
Kubernetes with HA and Docker images (containers) per website. But memory consumption would increase, and the deployment is complex.
What I need:
I don't care if the database instance is shared; each website can have its own database in the database instance.
Node.js must run in the background and has some env variables.
The simplest routing based on domain names to a Node.js port like 3000, 4000, 5000 and so on.
I would advise using NGINX as a reverse proxy, as described here: Setting up an Nginx Reverse Proxy.
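As a minimal sketch of that approach (the domains are placeholders, and the ports follow the 3000/4000/5000 scheme above), each site gets its own nginx server block that proxies to the corresponding Node process:

server {
    listen 80;
    server_name site-a.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000; # site A's Node process
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name site-b.example.com;

    location / {
        proxy_pass http://127.0.0.1:4000; # site B's Node process
    }
}

Each Node process can be kept running in the background with a process manager such as pm2, which also handles per-app env variables.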
Sorry if this is the wrong forum for this question, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places, because Node.js is itself a server but supposedly not designed to host websites? So I have to run two different servers, one for the website and one to run Node.js? When I set up the cloud one with a Node.js script running, I can no longer access the web pages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I can see what is needed and get it working? It would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it will conflict with another web server program (Apache, etc.) if both try to bind the same port. One solution could be to serve your webpages through your web server on port 80 (or 443 for HTTPS) and run your Node server on a different port in order to send information back and forth.
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can run under a process manager like forever or pm2. You can have multiple Node processes running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server will handle all traffic on port 80 (HTTP) and/or 443 (HTTPS) and proxy the requests to your Node service running on whatever port(s) you define. All of this can happen on one single server, or on multiple servers if you need or desire.
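Put together, a minimal sketch of the whole config might look like this (example.com is a placeholder; with a keepalive upstream, nginx recommends HTTP/1.1 and clearing the Connection header):

upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://lucyservice;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}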
For experimental/learning purposes (let's assume my application has a lot of persistent/concurrent traffic), I have a VM running Docker. For Docker, I have the following setup:
Everything has its own container and communicates over ports. I am trying to simulate two different servers (each running Nginx), load-balanced by HAProxy.
Now it works all fine and well, but as far as I can tell, Node is still running in just a single thread.
The only configuration Nginx contains is for acting as a reverse proxy to Node (everything else is default). Each Nginx server handles only one domain, backed by one Node instance.
Should I use Node's Cluster module for a multi-process approach?
Or (assuming each server has 2 cores) should I create two Node instances for each Nginx server and have Nginx load-balance between them? In this approach I am unsure how the load balancing would work: if there are two Node instances behind each Nginx, then HAProxy would balance across the two Nginx servers while each Nginx balances across its own two Node instances.
Now, the reason I want Nginx is for static caching and things like DDoS protection. Does that really make sense? Or should I just have one Nginx load-balance between all four Node servers without HAProxy? (The reason I am bringing in HAProxy is that some research showed it to be faster/more reliable than Nginx, though unconfirmed.)
Still new to this. Basically, I want to simulate two servers with two cores each running Node.js, reverse proxied by Nginx for static caching etc., and load-balanced by HAProxy.
Found the answer (a while ago, but just updating this post in case it helps someone).
Nginx can run multiple worker processes, so we can just use multiple virtual server blocks to implement this. The approach I currently follow with Docker/Nginx/Node is:
Nginx server block 1: listens for all requests on port 81 and forwards them to a Node instance (let's call it node1).
Nginx server block 2: listens for all requests on port 82 and forwards them to another Node instance (let's call it node2).
In simple words, one server block communicates with node1 and the other with node2, where node1 and node2 are two Node instances in separate Docker containers.
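Sketched out, the two server blocks might look like this (node1 and node2 are the Docker container names, and port 3000 is just an assumed app port):

server {
    listen 81;
    location / {
        proxy_pass http://node1:3000;
    }
}

server {
    listen 82;
    location / {
        proxy_pass http://node2:3000;
    }
}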
These are then load-balanced by HAProxy, whose server configuration (using the Docker container name) is as follows:
server n1 nginx:81
server n2 nginx:82
nginx is the container name. Nginx currently runs 2 worker processes.
(Add whatever HTTP/TCP health checks are required; this is just a minimal configuration.)
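Fleshed out a little, the HAProxy side could look like the following minimal sketch (the frontend/backend names are arbitrary):

frontend http_in
    bind *:80
    default_backend node_apps

backend node_apps
    balance roundrobin
    server n1 nginx:81 check
    server n2 nginx:82 check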
Open to suggestions for a better approach.
I have a mean.io (Express/Node.js) web application deployed on a Linode stack.
My two application servers run Ubuntu 14.04 and are accessed behind two HAProxy load balancers, also running Ubuntu 14.04.
Let us call Application server 1 => APP1 and Application server 2 => APP2
Currently, I deploy manually by:
1. Removing the APP1 entry from the haproxy.cfg of both load balancers and restarting them.
2. Updating the code on APP1.
3. Removing the APP2 entry from both haproxy.cfg files and putting the APP1 entry back.
4. Restarting APP1.
5. Updating the code on APP2.
6. Putting the APP2 entry back in both haproxy.cfg files and restarting them.
7. Restarting APP2.
I follow this process so that at any point in time the users of our web application get consistent data, even during deployment, i.e. the two app server instances are never serving different versions of the code at the same time.
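As an aside, the remove-edit-restart cycle can be avoided: if HAProxy's stats socket is enabled at admin level (stats socket /var/run/haproxy.sock level admin in the global section), a server can be drained and re-enabled at runtime without touching haproxy.cfg. A sketch, assuming a backend named app_backend:

echo "disable server app_backend/APP1" | socat stdio /var/run/haproxy.sock
# ... update the code on APP1 and restart it ...
echo "enable server app_backend/APP1" | socat stdio /var/run/haproxy.sock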
I am moving to an automated deployment system, and the two options I have looked at for deployment are Capistrano and Shipit JS.
They both provide ways to specify multiple servers in their configuration, e.g. in Capistrano:
role :app, "APP1", "APP2"
and in Shipit JS:
shipit.config.servers = ['APP1', 'APP2']
So, my question is: how do these libraries make sure that both servers are updated in parallel before they are restarted? Is there a way by which they block incoming requests to these servers during the update?
Do these deployment systems work only for a simple client/app-server architecture, or can they be used in systems where there is a load balancer?
Any explanation would be invaluable. I have tried my best to explain the situation here. If you need more input, please mention it below in the comments.
I have a standard LAMP EC2 instance set up and running on Amazon's AWS. Having also installed Node.js, socket.io and Express to meet the demands of live updating, I am now at the stage of load balancing the application. That's all working, but my sockets aren't. This is how my setup looks:
                 --- EC2 >> Node.js + socket.io
                /
Client >> ELB --
                \
                 --- EC2 >> Node.js + socket.io

[RDS MySQL - the EC2 instances communicate with this]
As you can see, each instance has an installation of Node and socket.io. However, occasionally Chrome's debugger will show the socket request failing with a 400 and the reason {"code":1,"message":"Session ID unknown"}, and I guess this is because it's communicating with the other instance.
Additionally, let's say I am on page A and the socket needs to emit to page B - because of the load balancer these two pages might well be on a different instance (they will both be open at the same time). Using something like Sticky Sessions, to my knowledge, wouldn't work in that scenario because both pages would be restricted to their respective instances.
How can I get around this issue? Will I need a whole dedicated instance just for Node? That seems somewhat overkill...
The issues come up when you consider both websocket traffic (layer 4-ish) and HTTP traffic (layer 7) moving across a load balancer that can only inspect one layer at a time. For example, if you set the ELB to load balance on layer 7 (HTTP/HTTPS), then websockets will not work at all across the ELB. However, if you set the ELB to load balance on layer 4 (TCP), then any fallback HTTP polling requests could end up at any of the upstream servers.
You have two options here. You can figure out a way to effectively load balance both HTTP and websocket requests or find a way to deterministically map requests to upstream servers regardless of the protocol.
The first one is pretty involved and requires another load balancer. A good walkthrough can be found here. It's worth noting that when that post was written, HAProxy didn't have native SSL support; now that it does, it might be possible to remove the ELB entirely, if that's the route you want to go. In that case the second option might be better.
Otherwise you can use HAProxy on its own (or a paid version of Nginx) to implement a deterministic load-balancing mechanism. In this case you would use IP hashing, since socket.io does not provide a route-based mechanism to identify a particular server the way sockjs does. Hashing on the client's IP address (nginx's ip_hash, for instance, uses the first three octets of an IPv4 address) determines which upstream server gets each request, so unless the user changes IP address between HTTP polls, this should work.
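For instance, here is a sketch of the HAProxy side (the addresses and names are placeholders; note that HAProxy's balance source hashes the full source address, whereas nginx's ip_hash uses the first three octets):

backend socketio_nodes
    balance source
    hash-type consistent
    server app1 10.0.0.11:3000 check
    server app2 10.0.0.12:3000 check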
The solution would be for the two (or more) Node.js installs to use a common session source.
Here is a previous question on using Redis as a common session store for Node.js: How to share session between NodeJs and PHP using Redis?
and another: Node.js Express sessions using connect-redis with Unix Domain Sockets
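As a rough sketch of the idea (this uses express-session with connect-redis; the API shown matches the older connect-redis releases contemporary with this stack, and the Redis host is a placeholder):

const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session); // pre-v7 connect-redis API
const redis = require('redis');

const app = express();
const client = redis.createClient({ host: 'redis.internal', port: 6379 }); // shared Redis instance

app.use(session({
    store: new RedisStore({ client: client }),
    secret: 'replace-with-a-real-secret',
    resave: false,
    saveUninitialized: false
}));

With sessions stored in Redis, any instance behind the load balancer can read the same session data. Note that for socket.io emits to reach clients connected to the other instance, you would also want a shared adapter such as socket.io-redis; session sharing alone only covers the HTTP side.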