Does my proxied server need to use the HTTPS protocol with docker linking?

I am running several docker containers for a very small web app: nginx, node, and redis. These containers are all linked together using the legacy methods (not a network) with the pattern
nginx --proxies-> node --uses-> redis
My nginx proxy is set up to use HTTPS but my node server (using hapi.js) is not. Is this a security issue?

It isn't a security issue as long as you aren't sending data between nginx and node over a public network. If the HTTP traffic travels only inside one host machine, and that host machine is fully controlled by you, it is unreachable from outside.
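For illustration, a minimal sketch of that TLS-termination pattern, assuming the node container is linked under the alias node, hapi listens on port 3000, and the domain and certificate paths are placeholders (all of these names are assumptions):
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/site.crt;
    ssl_certificate_key /etc/nginx/certs/site.key;

    location / {
        # TLS terminates above; this hop is plain HTTP, but it never
        # leaves the docker host, so it is not exposed externally.
        proxy_pass http://node:3000;
    }
}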

Related

How to force NodeJs webapp in docker container to use nginx server_name rather than host:port

I have deployed a Nodejs Express application in a docker container; let's call it node_container.
I'm using Nginx server blocks to assign a different domain name to each container rather than using the main domain that points to the host's IP; let's call that maindomain.com.
Without the Nginx virtual host setup, my nodejs application would be accessed using maindomain.com:4000 (nodejs listens on 4000). With the Nginx reverse proxy, node_container is mapped to nodedomain.com.
maindomain.com:4000 -> nodedomain.com
When I visit nodedomain.com I can see my node application.
However, if I click on any link on the app, say a button in my source code with href="/signin",
...
In the deployed web page, it appears as,
...
and it goes to maindomain.com:4000/signin, not nodedomain.com/signin.
Is there a way for me to specify the hostname/domain name that should be the base path for my application so that links in the application will build as
<custom_hostname>/route
ex: nodedomain.com/signin ?
TIA!
EDIT
I'm on a DigitalOcean droplet with Ubuntu. I have several domain names pointed at my Ubuntu host's IP. One of them is maindomain.com; another is nodedomain.com. My node app is listening on port 4000 inside its container, which is mapped to port 4000 of the host. This means it is accessible via maindomain.com:4000.
I have an Nginx reverse proxy with server blocks listening on port 80, sending traffic to maindomain.com and nodedomain.com based on the server_name. The container mapped to maindomain.com, the container with the node app, and the container with Nginx are all on a docker network.
From where does Nodejs pick the host name when it builds absolute links from relative links? Can I specify it to be nodedomain.com instead of what is automatically picked, maindomain.com:4000?
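For reference, a sketch of the server block described in the EDIT. The proxy_set_header lines are an assumption on my part, but forwarding the original Host header is the usual way to let the app see nodedomain.com rather than the upstream address:
server {
    listen 80;
    server_name nodedomain.com;

    location / {
        proxy_pass http://node_container:4000;
        # Pass the browser's Host header (nodedomain.com) through to the
        # app; frameworks typically use it when building absolute links.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}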

Does only my web server proxy need to support HTTP/2 and HTTP/3

I run an ExpressJS website in a docker container forwarded to a localhost port. I use NGINX to proxy it and push it to the internet with caching, SSL, and all of the normal things.
I am wondering how I need to implement HTTP/2 and HTTP/3. Similar to SSL, do I only need to enable them on my proxy server (NGINX), or does the whole chain need to support them?
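For reference, a sketch of what enabling both protocols at the NGINX layer alone might look like; the domain, certificate paths, and app port are assumptions, and the quic listener requires an nginx build (1.25+) with the HTTP/3 module:
server {
    listen 443 ssl http2;         # HTTP/2 over TLS between clients and NGINX
    listen 443 quic reuseport;    # HTTP/3 (QUIC), same port over UDP
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/site.crt;
    ssl_certificate_key /etc/nginx/certs/site.key;

    # Advertise HTTP/3 to clients that connected over TCP.
    add_header Alt-Svc 'h3=":443"; ma=86400';

    location / {
        # The hop to the local Express container stays plain HTTP/1.1.
        proxy_pass http://127.0.0.1:3000;
    }
}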

Do I need a different server to run node.js

Sorry if this is the wrong question for this forum, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using node.js. I am confused by what I am reading in several places, because node.js is itself a server but is not designed to host websites. So do I have to run two different servers, one for the website and one to run node.js? When I set up the cloud one with a node.js script running, I can no longer access the webpages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I can see what is needed and get it working? It would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can conflict with another webserver program (Apache, etc.) if run on the same port. One solution is to serve your webpages through your webserver on port 80 (or 443 for HTTPS) and run your node server on a different port so the two can send information back and forth.
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can be run under a process manager like forever or pm2. You can have multiple Node services running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server will handle all traffic on port 80 (HTTP) and/or 443 (HTTPS) and proxy the requests to your Node service running on whatever port(s) you define. All of this can happen on a single server, or on multiple servers if you need.
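Putting those pieces together, a minimal complete config might look like the sketch below (the domain is an assumption; note that the keepalive directive in the upstream only takes effect with the two extra proxy_* lines):
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://lucyservice;
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the default "close" header
    }
}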

Nginx and Node.js - Utilizing server to fullest

For experimental/learning purposes (let's assume my application has a lot of persistent/concurrent traffic), I have a VM running docker, with the following setup:
Everything has its own container and communicates over ports. I am trying to simulate two different servers (Nginx), load-balanced by HAProxy.
Now it all works fine and well, but as far as I can tell, Node is still running in just a single thread.
The only configuration Nginx contains is for being a reverse proxy to Node (everything else is default). Each Nginx server handles only one domain, backed by one Node server.
Should I use Node Cluster for multi-threaded approach?
Or (assuming each server has 2 cores) should I create two Node instances for each Nginx server and have it load balance between them? In this approach, I am unsure how the load balancing would work with two Node instances behind Nginx (or HAProxy).
Now, the reason I want Nginx is for static caching and things like DDoS protection. Does that really make sense? Or should I just have one Nginx load-balance between all four Node servers without HAProxy? (The reason I am bringing in HAProxy is that some research showed it to be faster/more reliable than Nginx, though that's unconfirmed.)
Still new to this. Basically, I want to simulate two servers with two cores each running Node.js, reverse proxied by Nginx for static caching etc., and load-balanced by HAProxy.
Found the answer (a while ago, but just updating this post in case it helps someone).
Nginx can run multiple worker processes, so we can just use multiple virtual server blocks to implement this. The approach I followed with Docker/Nginx/Node is:
Nginx server block 1: listens to all requests on port 81 and forwards them to one node instance (let's call it node1).
Nginx server block 2: listens to all requests on port 82 and forwards them to another node instance (let's call it node2).
In simple words, one server block communicates with node1 and the other with node2, where node1 and node2 are two node instances (separate docker containers), as sketched below.
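A sketch of those two server blocks, assuming the node containers are named node1 and node2 and the node instances listen on port 4000 inside their containers (names and port are assumptions):
server {
    listen 81;
    location / {
        proxy_pass http://node1:4000;   # first node container
    }
}

server {
    listen 82;
    location / {
        proxy_pass http://node2:4000;   # second node container
    }
}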
These are then load balanced by HAProxy, where the server configuration (inside a backend section; the backend name here is arbitrary, and the addresses work because of docker's container-name resolution) is as follows:
backend nginx_servers
    server n1 nginx:81
    server n2 nginx:82
nginx is the container name, and Nginx currently runs 2 worker processes.
(Add whatever http/tcp checks are required; this is just a minimal configuration.)
Open to suggestions for a better approach.

NodeJS http module vs Nginx Server

I have read that an Nginx server can be used as a proxy in front of a nodejs application, but I am doubtful as to what additional purpose and advantages this serves compared to the http module provided by nodejs for listening.
For one, you can serve multiple Node applications on one server, with host-based virtual servers managed by nginx, so that requests to the same port but with a different Host: HTTP header reach different Node applications.
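A sketch of that host-based routing, with two hypothetical apps on ports 3001 and 3002 (domains and ports are assumptions):
server {
    listen 80;
    server_name app-one.example.com;
    location / {
        proxy_pass http://127.0.0.1:3001;   # first Node application
    }
}

server {
    listen 80;
    server_name app-two.example.com;
    location / {
        proxy_pass http://127.0.0.1:3002;   # second Node application
    }
}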
Also, nginx can be set up to serve static assets without hitting your Node app, and to do some caching if you need it.
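For instance, a location block like this serves files straight from disk without ever hitting the Node app (the /var/www path is an assumption):
location /static/ {
    root /var/www;   # /static/logo.png is served from /var/www/static/logo.png
    expires 7d;      # let browsers cache static assets for a week
}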
Those are two things you can achieve by adding nginx to the mix, but you may not need them in your case. Also, you can run a reverse proxy with Node and without nginx, if that's what you prefer.