I have a simple NodeJS site running inside a Docker container, with its ports mapped to port 80 on the host. I have a domain pointing to the IP of the EC2 instance, and everything is working as expected.
If I want to run another, separate NodeJS site from a Docker container on the same instance, how can I map specific domain names to specific Docker containers?
E.g., let's assume the IP of my EC2 instance is 22.33.44.55 and my domains are domain-a.com and domain-b.com. My dockerized NodeJS applications are siteA and siteB.
domain-a.com is configured to point to 22.33.44.55, and siteA is listening on port 80 at that address - this is what I currently have.
domain-b.com is configured to point to 22.33.44.55. I want this traffic mapped to the siteB Docker container.
You could set up an nginx reverse proxy on the host and bind the containers to separate ports. The question and answers in this article explain it quite nicely, so I won't repeat it all:
https://www.digitalocean.com/community/questions/how-to-bind-multiple-domains-ports-80-and-443-to-docker-contained-applications
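To make that concrete, here is a minimal sketch of such a proxy, assuming siteA has been published on host port 3001 and siteB on 3002 (the port numbers and the config file path are assumptions, not something from the question):

# /etc/nginx/conf.d/sites.conf (path and backend ports are assumptions)
server {
    listen 80;
    server_name domain-a.com;

    location / {
        proxy_pass http://127.0.0.1:3001;    # e.g. docker run -d -p 3001:80 siteA
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name domain-b.com;

    location / {
        proxy_pass http://127.0.0.1:3002;    # e.g. docker run -d -p 3002:80 siteB
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Nginx picks the server block by the Host header of the incoming request, so both domains can share port 80 on the host while each container keeps its own internal port.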
Related
I have an application written in Node.js which uses multiple outgoing IPs to make requests to a website. Basically, I want to make requests to some website from different IPs, which I have assigned to my VPS. I want to containerize my application with Docker, but Docker only has access to the main IP of the host machine and I was not able to assign my other IPs to that container. I've tried many things and have been stuck on this problem for days...
IPv4 address for enp1s0: xxx.xx.xxx.70
IPv4 address for enp1s0: xxx.xxx.x.145
IPv4 address for enp1s0: xxx.xxx.xxx.154
I need to be able to select one of these IPs from the Node.js application inside the Docker container and make an HTTP request with it.
Also, if what I said is not possible, it is also acceptable to assign each IP to its own container (so 3 containers in total, with different outgoing IPs). But this is not ideal. I'm looking for a painless way of doing this, since I want to be able to pre-configure my apps and docker-compose to automate most of this work.
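For what it's worth, on the Node.js side the outgoing interface can be chosen per request with the localAddress option of http/https requests. This is only a hedged sketch of that idea, and it assumes the container can actually bind those addresses (for example because it runs with --network host or on a macvlan network); the target URL is an arbitrary example:

// Sketch: pick the source IP per request via the localAddress option.
// The IPs below are the masked placeholders from the question.
const https = require('https');

const OUTGOING_IPS = ['xxx.xx.xxx.70', 'xxx.xxx.x.145', 'xxx.xxx.xxx.154'];

function requestFrom(localAddress, url) {
  return new Promise((resolve, reject) => {
    const req = https.get(url, { localAddress }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve({ status: res.statusCode, body }));
    });
    req.on('error', reject);
  });
}

// Example: send one request from each assigned IP.
(async () => {
  for (const ip of OUTGOING_IPS) {
    const { status } = await requestFrom(ip, 'https://example.com/');
    console.log(`request from ${ip}: HTTP ${status}`);
  }
})();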
I have deployed a Node.js Express application in a Docker container; let's call it node_container.
I'm using Nginx server blocks to assign different domain names to each container, rather than using the main domain that points to the host's IP. Let's call this maindomain.com.
Without the Nginx virtual host setup, my Node.js application would be accessed using maindomain.com:4000 (Node.js listens on 4000). With the Nginx reverse proxy, node_container is mapped to nodedomain.com.
maindomain.com:4000 -> nodedomain.com
When I visit nodedomain.com I can see my node application.
However, if I click on any link on the app, say a button in my source code with href="/signin",
...
In the deployed web page, it appears as,
...
and it goes to maindomain.com:4000/signin, not nodedomain.com/signin.
Is there a way for me to specify the hostname/domain name that should be the base path for my application so that links in the application will build as
<custom_hostname>/route
e.g. nodedomain.com/signin?
TIA!
EDIT
I'm on a DigitalOcean droplet with Ubuntu. I have several domain names pointed at my Ubuntu host's IP. One of them is maindomain.com; another is nodedomain.com. My node app is listening on port 4000 inside its container, which is mapped to port 4000 of the host. This means it is accessible via maindomain.com:4000.
I have an Nginx reverse proxy with server blocks listening on port 80, sending traffic to maindomain.com and nodedomain.com based on the server_name. The container mapped to maindomain.com, the container with the node app, and the container with Nginx are all on the same Docker network.
From where does Node.js pick the hostname when it builds absolute links from relative links? Can I specify it to be nodedomain.com instead of what is automatically picked, maindomain.com:4000?
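For context, a common source of that hostname is the incoming Host header, which by default is whatever address Nginx used to reach the upstream; forwarding the original header is the usual fix. A hedged sketch of such a server block (the upstream node_container:4000 comes from the question above, the rest is an assumption):

server {
    listen 80;
    server_name nodedomain.com;

    location / {
        proxy_pass http://node_container:4000;
        proxy_set_header Host $host;                 # the app now sees "nodedomain.com"
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}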
I am trying to restrict access to my Express/Node.js app so that it can only be accessed via its domain URL. Currently, if I go to http://ip-address-of-server:3000, the app is served directly, bypassing nginx.
I have tried adding 'localhost' in the app.listen --
app.listen(4000, 'localhost', () => console.log('Server running'));
but this ends up making the app completely inaccessible. Even through nginx.
The app is running inside a docker container. And nginx is running on the host. I think this might be causing it but don't know how to fix this.
You can use Docker host mode networking for your app since you mentioned that "The app is running inside a docker container. And nginx is running on the host.".
Ref - https://docs.docker.com/network/host/
This way, it will be reachable via nginx on the same network. Your "localhost" settings will start working as usual after launching the container in docker host mode networking.
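A hedged sketch of what that looks like (the image name my-node-app is an assumption; the port comes from the app.listen call in the question):

# Run the container on the host's network stack, so "localhost" inside the
# container is the host's loopback interface (image name is an assumption).
docker run -d --network host --name node_app my-node-app

# With app.listen(4000, 'localhost', ...) the app is now reachable only from
# the host itself, e.g. by nginx via proxy_pass http://127.0.0.1:4000;
curl http://127.0.0.1:4000    # works on the host; remote access is refused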
Looks like you want to do IP-based filtering, i.e. you want the Node.js program to be accessible only from nginx (local or remote) and locally.
There are two ways:
1. Use the express-ipfilter middleware (https://www.npmjs.com/package/express-ipfilter) to filter requests based on IPs (see the sketch below).
2. Let Node.js listen to everything, but change the iptables rules on the host to restrict port access to specific IPs: expose the port of the Node.js container to the host using -p and close the iptables rule for that port to the outside world.
I prefer the second way, as it is more robust and restricts traffic at the network level.
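For completeness, here is a minimal sketch of the first option. The allowed list (loopback plus 172.17.0.1, the typical docker0 gateway that host traffic appears as inside a bridged container) is an assumption you would adjust for your own setup:

// Option 1 sketch: allow only requests whose source IP is on a whitelist.
const express = require('express');
const { IpFilter, IpDeniedError } = require('express-ipfilter');

const app = express();

// Assumed allow-list: loopback plus the default docker0 gateway.
const allowed = ['127.0.0.1', '::1', '172.17.0.1'];
app.use(IpFilter(allowed, { mode: 'allow' }));

app.get('/', (req, res) => res.send('Hello'));

// Return a plain 403 instead of a stack trace when an IP is denied.
app.use((err, req, res, next) => {
  if (err instanceof IpDeniedError) return res.status(403).send('Forbidden');
  next(err);
});

app.listen(4000, () => console.log('Server running'));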
I want to run multiple web servers on the same VM. Each web server is dockerized.
Is it possible to run multiple dockerized web servers on port 80, with different domains, using Docker functionality?
I found many solutions based on an Nginx proxy, like here, but I haven't found any user-defined network usage that solves this problem.
Is there any solution to this problem without running a reverse proxy?
No.
There is only one "real" port 80 on the host server (for each network address), so you need something that listens there and forwards to the different backend servers.
This is not a docker-specific problem. You cannot run multiple (non-Docker) web servers (one for each domain) on a single port either.
That something does not have to be nginx.
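As an illustration that the forwarder does not have to be nginx, here is a hedged sketch of a tiny Host-header router written with Node's built-in http module (the domain names and the backend ports 3001/3002 are assumptions):

// Minimal "something on port 80 that forwards by Host header" - a sketch, not production-grade.
const http = require('http');

// Assumed mapping of domains to backend containers published on host ports.
const backends = {
  'domain-a.com': { host: '127.0.0.1', port: 3001 },
  'domain-b.com': { host: '127.0.0.1', port: 3002 },
};

http.createServer((clientReq, clientRes) => {
  const name = (clientReq.headers.host || '').split(':')[0];
  const target = backends[name];
  if (!target) {
    clientRes.writeHead(502);
    return clientRes.end('Unknown host');
  }
  // Forward the request as-is and stream the response back.
  const proxyReq = http.request(
    { host: target.host, port: target.port, path: clientReq.url, method: clientReq.method, headers: clientReq.headers },
    (proxyRes) => {
      clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  proxyReq.on('error', () => { clientRes.writeHead(502); clientRes.end(); });
  clientReq.pipe(proxyReq);
}).listen(80);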
As far as I am aware, no, it cannot work to have multiple containers listening on port 80, or on the same port in general.
You could open up different ports on your VM and have each application listen on one of those ports specifically.
For instance, you could have your first application listen on port 80, then your second one on port 81, etc.
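A quick sketch of that approach (image names, container ports, and the host address are assumptions):

# Publish each container on its own host port.
docker run -d -p 80:3000 --name app-one my-first-app
docker run -d -p 81:3000 --name app-two my-second-app

# app-one answers on http://your-vm/ and app-two on http://your-vm:81/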
First of all, it is possible to run multiple domains on the same port, but the requirements are that:
you host sites on the same web server (Apache HTTPD or Nginx)
you are using virtual hosts
That one server can be containerized, if needed.
Here is an example of running two domains on a single Nginx web server.
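A minimal sketch of such virtual hosts (the domain names and web roots are assumptions):

server {
    listen 80;
    server_name domain-a.com;
    root /var/www/domain-a;    # assumed web root
}

server {
    listen 80;
    server_name domain-b.com;
    root /var/www/domain-b;    # assumed web root
}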
So, depending on your requirements, that can be a solution.
I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the "curl" command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile, using iptables? I want my app's container to be able to access MongoDB using "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which will make the container use your host network instead of the default bridge. You should note that this can raise some security issues (because the container can potentially access any other service you run in your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT.
As stated in the comments, create some script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose this as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect your services.
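A hedged sketch of that last step, assuming the standard mongodb and redis client packages and default ports; the fallback to localhost matches the production setup described in the question:

// Read the address injected via -e DOCKER_HOST=..., falling back to localhost
// for production, where the databases run directly on the host.
const { MongoClient } = require('mongodb');
const { createClient } = require('redis');

const dbHost = process.env.DOCKER_HOST || 'localhost';

async function connect() {
  const mongo = await MongoClient.connect(`mongodb://${dbHost}:27017`);
  const cache = createClient({ url: `redis://${dbHost}:6379` });
  await cache.connect();
  return { mongo, cache };
}

connect()
  .then(() => console.log(`connected to databases at ${dbHost}`))
  .catch(console.error);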