Express app inside Docker: restricting access to host localhost only - node.js

I am trying to restrict access to my express/nodejs app so that it can only be accessed from its domain URL. Currently, if I go to http://ip-address-of-server:3000, the app gets served directly, bypassing nginx.
I have tried adding 'localhost' in the app.listen --
app.listen(4000, 'localhost' , () => console.log('Server running'));
but this ends up making the app completely inaccessible, even through nginx.
The app is running inside a Docker container, and nginx is running on the host. I think this might be the cause, but I don't know how to fix it.

Since you mentioned that "the app is running inside a docker container and nginx is running on the host", you can use Docker host mode networking for your app.
Ref - https://docs.docker.com/network/host/
This way it will be reachable by nginx over the host's own network stack, and your "localhost" setting will start working as usual once the container is launched with host mode networking.
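For example, a minimal sketch on a Linux host (my-node-app is a placeholder image name; --network host makes the container share the host's network stack, so app.listen(4000, 'localhost') then binds the host's loopback and is only reachable via nginx on the same machine):

docker run -d --network host my-node-app

Note that -p port mappings are ignored in host mode, since there is no separate container network to map.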

Looks like you want to do IP-based filtering, i.e. you want the node.js program to be accessible only from nginx (localhost or remote) and locally.
There are two ways:
Use the express-ipfilter middleware (https://www.npmjs.com/package/express-ipfilter) to filter requests based on IPs.
Let node.js listen to everything, but change the iptables on the host to restrict port access to specific IPs. Expose the port of the node.js container to the host using -p and close that port to the outside world; see the sketch below.
I prefer the second way, as it is more robust and restricts traffic at the network level.
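A minimal sketch of that second approach. One caveat: Docker routes published ports through its own NAT chains, so plain INPUT rules in iptables often don't apply to them; the simplest way to close the port to the outside world is to let -p bind the published port to the host's loopback interface only (my-node-app is a placeholder image name, and the app inside the container should listen on 0.0.0.0 so Docker's port proxy can reach it):

docker run -d -p 127.0.0.1:4000:4000 my-node-app

nginx on the host can then proxy_pass to http://127.0.0.1:4000, while http://ip-address-of-server:4000 is unreachable from outside.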

Google Cloud Firewall Exposing Port Docker

I managed to successfully deploy a docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / express app that just returns "Hi there!" via res.json() on the root path. It is listening on port 3000.
I think the deploy process was this:
Build Docker image from Node.js / express src.
Run container on local command line, correctly expose ports. It works locally.
Tagged the image with the correct project ID / zone.
Pushed to VM. I think I pushed the image, rather than the container. Is this a problem?
SSH into VM. Run docker ps and see the running container with the correct image tag.
Use command-line curl (I am using a zsh terminal) as well as a browser to check network requests. I am getting a connection refused error.
To my beginner's eye, the Google firewall settings appear to be open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I am getting a connection refused error whenever I try to contact the IP address, either with my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my docker container is running in the VM (and the source code works on my machine).
EDIT: Updated the firewall rules with port 3000 ingress (screenshot omitted).
You need a firewall rule that permits traffic to tcp:3000.
Preferably allow just your host's IP (Google "what's my IP?" and use that), but for now you can temporarily allow any IP with 0.0.0.0/0.
Firewall rules can also be scoped to just the VM running your container, but I wouldn't worry about that initially.
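For instance, a hedged sketch with the gcloud CLI (the rule name allow-node-3000 is arbitrary, and 0.0.0.0/0 should later be narrowed to your own IP):

gcloud compute firewall-rules create allow-node-3000 --direction=INGRESS --allow=tcp:3000 --source-ranges=0.0.0.0/0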

How to force NodeJs webapp in docker container to use nginx server_name rather than host:port

I have deployed a Node.js Express application in a Docker container; let's call it node_container.
I'm using Nginx server blocks to assign a different domain name to each container, rather than using the main domain pointed at the host's IP; let's call that main domain maindomain.com.
Without the Nginx virtual host setup, my Node.js application would have been accessed using maindomain.com:4000 (Node.js listens on 4000). With the Nginx reverse proxy, node_container is mapped to nodedomain.com.
maindomain.com:4000 -> nodedomain.com
When I visit nodedomain.com I can see my node application.
However, if I click on any link in the app, say a button in my source code with href="/signin",
...
In the deployed web page, it appears as,
...
and it goes to maindomain.com:4000/signin, not nodedomain.com/signin.
Is there a way for me to specify the hostname/domain name that should be the base path for my application so that links in the application will build as
<custom_hostname>/route
ex: nodedomain.com/signin ?
TIA!
EDIT
I'm on a DigitalOcean droplet with Ubuntu. I have several domain names pointed at my Ubuntu host's IP. One of them is maindomain.com; another is nodedomain.com. My node app is listening on port 4000 inside its container, which is mapped to port 4000 of the host. This means it is accessible via maindomain.com:4000.
I have an Nginx reverse proxy with server blocks listening on port 80, sending traffic to maindomain.com and nodedomain.com based on the server_name. The container mapped to maindomain.com, the container with the node app, and the container with Nginx are all on a docker network.
From where does Node.js pick up the host name when it builds absolute links from relative links? Can I specify it to be nodedomain.com instead of what is automatically picked, maindomain.com:4000?
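For reference, a minimal sketch of the server block described above, using the question's placeholder names; the proxy_set_header lines are an assumption, but forwarding the browser's original Host header like this is what lets the app see nodedomain.com rather than the upstream host:port when it builds absolute links:

server {
    listen 80;
    server_name nodedomain.com;

    location / {
        # Pass the original Host header through so the app builds
        # links against nodedomain.com, not maindomain.com:4000.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://node_container:4000;
    }
}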

Is there a way to "host" an existing web service on port X as a network path of another web service on port 80?

What I'm trying to do is create an access website for my own services that run on my linux server at home.
The services I'm using are accessible through <my_domain>:<respective_port_num>.
For example, there's a Plex instance listening on port X, transmission-remote (a torrenting client) listening on port Y, and another custom processing service on port Z.
I've created a simple website using Python Flask, which I can access remotely, that redirects paths to ports (so <my_domain>/plex turns into <my_domain>:X). Is there a way to serve these services on the network paths I've assigned to them, so I don't need to open a port for each service? I want to be able to channel an existing service on :X to <my_domain>/plex without having to modify it; I'm sure it's possible.
I have a bit of a hard time understanding your question.
You certainly can use e.g. nginx as a reverse proxy in front of your web application: have it listen on any port and then proxy requests to the upstream application on any other port, e.g. your Flask application.
Let's say my domain is example.com.
I can then configure e.g. nginx to listen on port 80 (and 443 for SSL) and proxy all requests to e.g. port 8000, where Flask is running locally.
Yes, this is called using nginx as a reverse proxy. It is well documented on the internet, even in the official docs. Your nginx.conf would have something like:
location /my/flask/app/ {
    # Assuming your Flask app is at localhost:8000.
    # The trailing slash on proxy_pass makes nginx strip the
    # /my/flask/app/ prefix before forwarding, so the app can
    # serve from its own root without knowing about the prefix.
    proxy_pass http://localhost:8000/;
}
From the user's perspective, they will be connecting to your.nginx.server.com/my/flask/app/. But behind the scenes, nginx will forward the request to your app and serve its response back to the user.
You can deploy nginx as a Docker container; I recommend doing this, as it keeps the local files and configs separate from your own work and makes it easier for you to fiddle with it as you learn. Keep in mind that nginx only speaks HTTP, though. You can't use it to proxy things like SSH or arbitrary protocols (not without a lot of hassle, anyway). If the services generate their own URLs, you might also need to configure them to anticipate the nginx redirects.
BTW, usually Flask is not served directly to the internet; instead, nginx talks to something like Gunicorn, which handles various network-related concerns: https://vsupalov.com/what-is-gunicorn/
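For completeness, a typical (assumed) way to run the Flask app behind nginx, supposing app.py defines a Flask object named app:

gunicorn --bind 127.0.0.1:8000 app:app

Binding to 127.0.0.1 keeps Gunicorn reachable only through the nginx proxy on the same machine.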

Expose node js app with host as localhost on kubernetes

I have spent the whole day looking for an answer to why my node js app is not reachable from the internet when I run it on Kubernetes with LoadBalancer as the service type. The solution was to change the host from localhost to 0.0.0.0 in the node js server app definition, but I still don't know why.
I hope this saves someone time in the future.
When you say that you set the host, I guess you mean what the app is told to listen on in the listen function (as in "Express app server.listen all interfaces instead of localhost only"). You can bind to a particular address or leave it open to all with the 0.0.0.0 mask (the zeros function as a mask for matching rather than as a true IP address - https://stackoverflow.com/a/20778887/9705485).
I imagine you had your app running fine locally and were able to access it from your host machine with your localhost configuration. That would be because your local machine was accessing it in a way that conformed to the mask: your config was effectively saying that only localhost may access the app. When you ported it to Kubernetes, traffic had to come over the network, so the incoming connection carried your machine's external/public IP address, which did not match the localhost mask. To get it to work, you changed the mask to be open to all IPs.
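In Express terms, a minimal sketch of the change (the port number is illustrative):

const express = require('express');
const app = express();

// Bind to all interfaces so traffic arriving through the
// Kubernetes LoadBalancer (from non-local addresses) is accepted.
app.listen(3000, '0.0.0.0', () => console.log('Server running'));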

How to remove port from url for node application using nginx

My React and Angular applications (the UI has two parts) are served by a node/express application on port 3000. In server.js (the node-express entry point) I dynamically decide which UI (React or Angular) to render in the browser, using the express.static feature.
Earlier my application ran on https://mywebsite.com:3000/, but per the requirements it should be changed to https://mywebsite.com, which we handled with an "nginx proxy" together with a DevOps person. Now we have encountered another issue: the UI is accessible through both URLs, https://mywebsite.com:3000/ and https://mywebsite.com. I want it to be accessible only through https://mywebsite.com/, without the port.
My server's APIs (https://mywebsite.com:3000/api/v1) are accessible from three places:
1) iOS app
2) Admin app (running differently)
3) and UI (React.js + Angular.js) (https://mywebsite.com)
Note: is there any way to handle this, either through the deployment process or through a setting at the node/express server level? We found one solution, creating a separate node server for the UI part, but for cost-cutting reasons we dropped that approach.
Suppose your domain is https://api.aegisapi.com:3000. First, check your inbound rules to see whether HTTPS is present; if it is not, add an HTTPS rule for port 443.
Then you can use https://api.aegisapi.com. It works.
When you run your node server like this:
app.listen(3000)
it listens on all interfaces, i.e. 0.0.0.0, which is accessible from outside. You should change your listen call from
app.listen(3000)
to
app.listen(3000, '127.0.0.1');
You should also block this port with iptables.
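A hedged one-liner for the iptables part (this assumes the node process runs directly on the host, so the INPUT chain applies; adjust to your own firewall setup):

iptables -A INPUT -p tcp --dport 3000 ! -i lo -j DROP

This drops any traffic to port 3000 that does not arrive on the loopback interface, so only local processes such as nginx can reach the app.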
If you are using an AWS EC2 instance, open inbound port 3000 and the public-facing port (e.g. 80 or 443) to receive requests from users; then, since you are using nginx as a reverse proxy, use it to forward requests to port 3000.
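A minimal sketch of the nginx side, assuming nginx runs on the same host as the node app and the app is bound to 127.0.0.1:3000 (server_name is a placeholder, and the HTTPS listener with its certificates is omitted):

server {
    listen 80;
    server_name mywebsite.com;

    location / {
        # Preserve the original Host header and forward everything
        # to the node app on the loopback interface.
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3000;
    }
}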
