Node.js multiple domain web hosting

I have a VPS server (Ubuntu).
I want multiple Node.js sites to run on it, which means multiple domains.
I was trying out Kubernetes with HA and Docker images (containers) per website, but memory consumption would increase and the deployment is complex.
What I need:
I don't care if the database instance is shared; each website can have its own database within that instance.
Node.js must run in the background and has some environment variables.
The simplest possible routing based on domain names to Node.js ports like 3000, 4000, 5000, and so on.

I would advise using NGINX as described here: Setting up an Nginx Reverse Proxy.
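As a rough sketch of the domain-based routing, each site gets its own Nginx server block whose server_name decides which local Node.js port receives the traffic (the domain names below are placeholders; the ports match the ones mentioned in the question):

server {
    listen 80;
    server_name site-one.com;                # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:3000;    # Node.js app for the first site
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name site-two.com;                # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:4000;    # Node.js app for the second site
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Each Node.js app keeps running in the background under a process manager such as pm2, which can also inject per-app environment variables, and Nginx simply routes by the Host header.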

Related

How to deploy different MERN apps to digital ocean on a single droplet?

I've always used Heroku to deploy my MERN apps. For the Mongo database I use MongoDB Atlas, but at my job they want to migrate all the projects to DigitalOcean. I have several questions regarding this:
Can I have MongoDB + a Node.js backend + a React app on a single droplet?
Can I deploy two or more apps on a single droplet? (The apps have different domains.)
Is there a video tutorial about this? (I've read lots of documentation and got many errors while trying to do it. My eyes hurt 🙃)
For example, if I have two apps in Heroku for the same website, one for the Node.js backend and another for the React frontend, can I do the same on DigitalOcean?
Thanks in advance!
Yeah, you can deploy multiple services on a single server; they just need to listen on different ports.
For example, let's say a MongoDB server is running on port 27017, a Node.js HTTP server is running on port 5000, and a React app is running on port 8000.
Say your server's IP is 13.13.13.13.
Then you can access your MongoDB server, Node.js HTTP server, and React app using 13.13.13.13:27017, 13.13.13.13:5000, and 13.13.13.13:8000, respectively, from anywhere on the Internet where your IP isn't blocked.
Now, on your server, you set up iptables to redirect all incoming connections on port 80 to port 8000. You can now access your React app by visiting 13.13.13.13, with no need to specify the port anymore.
Next, let's say you add DNS records for example.com and api.example.com pointing to your IP. Since you can't have A or CNAME records pointing to a port, both of your domains will take you to your React app. You'll have to explicitly specify the port number along with your domain if you want to reach your Node.js backend, like http://example.com:5000 or http://api.example.com:5000.
If you don't want to access your backend using the port number, you can use Nginx as a reverse proxy. You can set up Nginx to route all traffic for api.example.com to your backend server listening on localhost:5000.
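To make that concrete, here is a minimal sketch of such an Nginx config, reusing the ports from the example above (8000 for the React app, 5000 for the Node.js backend); the two server blocks route by domain name:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;    # React app
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:5000;    # Node.js backend
        proxy_set_header Host $host;
    }
}

With this in place you no longer need the iptables redirect, since Nginx itself listens on port 80.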

Do I need a different server to run node.js

Sorry if this is the wrong place for this question, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places, because Node.js is itself a server but isn't designed to host websites? So do I have to run two different servers, one for the website and one to run Node.js? When I set up the cloud server with a Node.js script running, I can no longer access the webpages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I can see what is needed and get it working? It would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can cause conflicts if it runs on the same port as another web server (Apache, etc.). One solution is to serve your webpages through your web server on port 80 (or 443 for HTTPS) and run your Node server on a different port to send information back and forth.
There are a number of ways you can achieve this, but here is one popular approach.
You can use NGINX as your front-facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example, you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can be run under a process manager like forever or pm2. You can have multiple Node processes running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server handles all traffic on port 80 (HTTP) and/or 443 (HTTPS) and proxies requests to your Node service running on whatever port(s) you define. All of this can happen on a single server, or on multiple servers if you need or want it to.
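Combined with the upstream block above, a minimal server section might look roughly like this (the domain name is a placeholder; the two Connection-related lines are what allow the keepalive directive in the upstream to actually take effect):

server {
    listen 80;
    server_name example.com;                 # placeholder domain

    location / {
        proxy_pass http://lucyservice;       # the upstream defined earlier
        # Needed so Nginx reuses (keeps alive) connections to the Node service
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}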

Nginx and Node.js - Utilizing server to fullest

For experimental/learning purposes (let's assume my application has a lot of persistent/concurrent traffic), I have a VM running Docker, with the following setup:
Everything has its own container and communicates over ports. I am trying to simulate two different servers (Nginx), load balanced by HAProxy.
Now it all works well, but as far as I can tell, Node is still running in just a single thread.
The only configuration Nginx contains is for acting as a reverse proxy to Node (everything else is default). Each Nginx server handles only one domain (one Node instance).
Should I use Node Cluster for a multi-threaded approach?
Or (assuming each server has 2 cores) should I create two Node instances for each Nginx server and have it load balance between them? In this approach, I am unsure how the load balancing would work with two Node instances load balanced by Nginx (or HAProxy).
The reason I want Nginx is for static caching and things like DDoS protection. Does that really make sense? Or should I just have one Nginx instance load balance between all four Node servers, without HAProxy? (The reason I am bringing in HAProxy is that some research showed it to be faster/more reliable than Nginx, though that's unconfirmed.)
Still new to this. Basically, I want to simulate two servers with two cores each running Node.js, reverse proxied by Nginx for static caching etc., and load-balanced by HAProxy.
Found the answer (a while ago, but just updating this post in case it helps someone).
Nginx can run multiple worker processes, so we can just use multiple virtual server blocks to implement this. The approach I followed with Docker/Nginx/Node is:
Nginx server block 1: listens for all requests on port 81 and forwards them to one Node instance (let's call it node1).
Nginx server block 2: listens for all requests on port 82 and forwards them to another Node instance (let's call it node2).
In simple words, one server block communicates with node1 and the other with node2, where node1 and node2 are two Node instances (separate Docker containers).
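For reference, those two server blocks might look roughly like this (the container names node1 and node2 follow the description above; the port 3000 the Node apps listen on is just an assumption):

server {
    listen 81;

    location / {
        proxy_pass http://node1:3000;        # first Node container (port assumed)
    }
}

server {
    listen 82;

    location / {
        proxy_pass http://node2:3000;        # second Node container (port assumed)
    }
}

The container names resolve as long as all containers share a user-defined Docker network.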
These are then load balanced by HAProxy, where the server configuration (using the Docker container name) is as follows:
server n1 nginx:81
server n2 nginx:82
nginx is the container name. Nginx currently runs 2 worker processes.
(Add whatever HTTP/TCP checks are required; this is just a minimal configuration.)
Open to suggestions for a better approach.

How to open a port for http traffic on ec2 from node app?

So I have an EC2 instance running a Node app on port 3000, a very typical setup. However, I now need to run additional apps on this server; they currently run on their own servers, also on port 3000. So I need to migrate them all to one server and presumably run them on different ports.
So if I want to run node apps and have them on 3000, 3010, 3020, etc, how do I do this the right way?
You need to authorize inbound traffic to your EC2 instance via the AWS Console or the API. Here is a good description of how to do that:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
Since authorizing is normally a one-off, it's probably better to do it through the AWS Console. However, if one of your requirements is to spin up Node apps on different ports in an automated fashion, then you'll probably want to look at this:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#authorizeSecurityGroupIngress-property

clustering and load balancing node js app which uses mongodb

I have a Node.js app which uses MongoDB for persistence. I am going to deploy it on two VMs and use Nginx to configure the load-balancing setup. Now, how should MongoDB be installed? I understand that I should install it on one of the VMs, and that VM can refer to it simply as localhost. How should MongoDB be made available to the other server (the other VM behind the load balancer)?
Based on what I see as your desired setup (an nginx load balancer, two app instances, and MongoDB), I would highly recommend four servers.
Server 1: nginx load balancer. The main entry point for your cluster. This is the server your public domain points to.
Servers 2, 3: Node.js application instances. nginx is configured to load balance between these two servers. As your application grows, you can continue to add nodes at this layer to keep up with demand.
Server 4: mongodb. Your Node.js instances can all be configured to point to this mongo instance. At a minimum you should probably have a fifth server for a mongo secondary, but that's up to you.
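On Server 1, the Nginx side of that layout could be sketched like this (the private IPs and the application port 3000 are placeholders for Servers 2 and 3):

upstream node_app {
    server 10.0.0.2:3000;                    # Server 2 (placeholder private IP)
    server 10.0.0.3:3000;                    # Server 3 (placeholder private IP)
}

server {
    listen 80;
    server_name example.com;                 # your public domain

    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The Node.js instances on Servers 2 and 3 would then point their MongoDB connection strings at Server 4's private address rather than at localhost, with MongoDB bound to that private interface and firewalled accordingly.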
