I have a node app running in one docker container, a mongo database in another, and a redis database in a third. In development I want to work with these three containers (and not pollute my system with database installations), but in production I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Publishing the same port from multiple containers to the host creates a collision.
I also know the containers will be on the same bridged network, and using the "curl" command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want my app's container to be able to access MongoDB using "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve what you want:
You can use --net=host, which will make the container use your host network instead of the default bridge. Note that this can raise some security issues (because the container can potentially access any other service you run on your host).
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT
As stated in the comments, create some script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose this as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect your services.
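For example, a minimal entrypoint sketch along the lines of the rinetd suggestion above (assuming rinetd is installed in the image, the database ports are published on the host, DOCKER_HOST is the env var passed in above, and server.js stands in for your app's entrypoint):
# rinetd.conf format: bind_address bind_port connect_address connect_port
cat > /etc/rinetd.conf <<EOF
127.0.0.1 27017 ${DOCKER_HOST} 27017
127.0.0.1 6379 ${DOCKER_HOST} 6379
EOF
# rinetd forks into the background, then the app starts as usual and can use localhost:27017
rinetd -c /etc/rinetd.conf
exec node server.js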
I managed to successfully deploy a docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / express app that just does res.json("Hi there!") on the root path. It is listening on port 3000.
I think the deploy process was this:
Build Docker image from Node.js / express src.
Run container on local command line, correctly expose ports. It works locally.
Tagged the image with the correct project ID / zone.
Pushed to VM. I think I pushed the image, rather than the container. Is this a problem?
SSH into VM. Run docker ps and see the running container with the correct image tag.
Use command-line curl (I am using a zsh terminal) as well as a browser to check network requests. Getting a connection refused error.
I'm a beginner, but as far as I can tell the Google Cloud firewall settings are open: I have allowed ingress on all ports.
I will also want to allow egress at some point but for now my problem is that I am getting a connection refused error whenever I try to contact the IP address, either with my web-browser or curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my docker container is running in the VM (and the source code works on my machine).
EDIT:
Updated the firewall rules to allow ingress on port 3000.
You need a firewall rule that permits traffic to tcp:3000.
Preferably from just your host's IP (Google "what's my IP?" and use that), but for now you can (temporarily) allow any IP with 0.0.0.0/0.
Firewall rules can also be scoped to just the VM running your container, but I wouldn't worry about that initially.
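For example, with the gcloud CLI something like this would open port 3000 (the rule name allow-node-3000 is just a placeholder; narrow --source-ranges to your own IP once things work):
$ gcloud compute firewall-rules create allow-node-3000 --allow=tcp:3000 --source-ranges=0.0.0.0/0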
I'm trying to scale my game servers (Node.js), where each instance should have a unique port assigned to it, instances are separate (no load balancing of any kind), and each instance knows which port is assigned to it (ideally via an env variable?).
I've tried using Docker Swarm, but it has no option to specify a port range, and I couldn't find any way to allocate a port or pass the allocated port to the instance (e.g. via an env variable) so it's aware of the port it's running on.
Ideal solution would look like:
Instance 1: hostIP:1000
Instance 2: hostIP:1001
Instance 3: hostIP:1002
... etc
Now, I've managed to make this work with regular Docker (non-swarm) by binding to the host network and passing an env variable PORT, but this way I'd have to manually spin up as many game servers as I need.
My node app uses "process.env.PORT" to bind to the host's IP address and port.
Any opinion on what solutions I could use to scale my app?
You could try different approaches.
Use Docker Compose and an external service for extracting data from docker.sock, as suggested here: How to get docker mapped ports from node.js application?
Use Redis or any key-value storage service to store port information and read it on every new instance launch. The simplest solution is to use the Redis INCR command to get the next free number, but it has some limitations (see the sketch and note below).
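For the Redis approach, a launcher sketch along these lines could work (game:instances and my-game-server are placeholder names, BASE_PORT is arbitrary, and -h should point at wherever your Redis runs); it reuses the host-network + PORT pattern you already have:
BASE_PORT=1000
# INCR is atomic, so two instances can never be handed the same number
PORT=$(( BASE_PORT + $(redis-cli -h 127.0.0.1 INCR game:instances) ))
docker run -d --network host -e PORT=$PORT my-game-server
The main limitation is that INCR only counts up, so ports of stopped instances are never reused unless you manage the counter (or a free-list) yourself.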
Not too sure what you mean there? Could you provide more detail?
I have a Linux machine assigned to me in the AWS cloud, and I am running my Spring Boot application in a Docker container on that machine. To hit this application's GraphQL endpoint from my Windows laptop, what host name or URL should I use, and how do I construct it?
In general, if this application were running locally, I would use something like http://localhost:8080/graphQL.
The Dockerfile for this application has this instruction: EXPOSE 8080.
I am confused because there is a Linux machine IP address and also a container IP address, and I don't know which one to use, so I tried both.
On that Linux machine I typed 'ip address' in its terminal, and it printed a bunch of information; I am not sure which one is my IP address.
To get the IP address of the container I used the command below, and it returned an address. I tried https://172.17.5.3:8080/graphql from my Windows laptop, but it is not returning a response.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Please let me know for any additional information.
The Dockerfile for this application has this instruction: EXPOSE 8080.
The EXPOSE directive doesn't really do anything (in this context). It's informative: it tells you that the container will run a service on port 8080. This will not, by default, be available from outside of your Docker host.
You can expose this port on your Linux machine by "publishing" it when you start the container. You can do this using the --publish (-p) option to docker run. For example, if you were to start your container like:
docker run -p 8080:8080 ...
Then you would be able to access the service on port 8080 of your Linux machine's IP address or hostname, assuming that there aren't firewall rules in place that prevent the connection.
You can read more about Docker port publishing (and networking in general) in this document.
On that Linux machine I typed 'ip address' in its terminal, and it printed a bunch of information; I am not sure which one is my IP address.
You would generally use your instance's public IP address. This document has information on working with public IP addresses in AWS.
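Once the port is published and reachable, you can check it from your laptop with something like this (203.0.113.10 is a placeholder; substitute your instance's public IP address):
$ curl http://203.0.113.10:8080/graphql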
I am trying to restrict access to my express/nodejs app so that it can only be accessed from its domain URL. Currently if I go to http://ip-address-of-server:3000, the app gets served directly, bypassing nginx.
I have tried adding 'localhost' in the app.listen call:
app.listen(4000, 'localhost' , () => console.log('Server running'));
but this ends up making the app completely inaccessible, even through nginx.
The app is running inside a Docker container, and nginx is running on the host. I think this might be the cause, but I don't know how to fix it.
You can use Docker host mode networking for your app, since you mentioned that "the app is running inside a Docker container, and nginx is running on the host".
Ref - https://docs.docker.com/network/host/
This way, it will be reachable via nginx on the same network. Your "localhost" settings will start working as usual after launching the container in docker host mode networking.
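For example (my-express-app is a placeholder image name):
$ docker run -d --network host my-express-app
With host networking there is no -p mapping: app.listen(4000, 'localhost') binds 127.0.0.1:4000 directly on the host, so only nginx (or anything else on that machine) can reach it.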
Looks like you want to do IP-based filtering, i.e. you want the Node.js program to be accessible only from nginx (local or remote) and from the local machine.
There are two ways:
Use the express-ipfilter middleware (https://www.npmjs.com/package/express-ipfilter) to filter requests based on IPs.
Let Node.js listen on everything, but change the iptables rules on the host to restrict access to the port to specific IPs. Publish the port of the Node.js container to the host using -p, and close that port to the outside world in iptables (a sketch follows below).
I prefer the second way, as it is more robust and restricts traffic at the network level.
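A hedged sketch of that second option, assuming the container port is published on host port 3000 and external traffic arrives on eth0: drop outside connections in Docker's DOCKER-USER chain, which is evaluated before Docker's own forwarding rules. nginx on the host reaches the app via loopback or the docker bridge, not eth0, so it is unaffected.
# drop external TCP connections whose original destination port was 3000
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 3000 -j DROP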
Foreword
My Rails app cares about the hostname: for example, when a request comes from domain-a.dev it behaves differently than when it comes from domain-b.dev. I want to test this behaviour, and have therefore routed the complete *.dev TLD to 127.0.0.1 on my local machine, so I can set the server domain in my tests to whatever I want while it always hits my local test server.
This is necessary because my tests use Selenium, which launches an external browser and browses to domain-a.dev or domain-b.dev. So I cannot simply overwrite request.hostname (or similar) in my tests, because this has no effect on the external browser.
Now I want to use a docker image for my tests, so I do not have to configure the test environment on multiple servers but can simply start the docker image. Everything works so far except the *.dev resolution.
AFAIK Docker uses the host's nameserver or Google's nameservers by default (https://docs.docker.com/articles/networking/#dns), but that would mean changing the host's DNS to accomplish my goal, which I don't want.
I want to build a docker image, where a special TLD, for example dev, always routes to 127.0.0.1, without touching the docker host.
This means that for everybody running this docker image, for example domain.dev will be resolved to 127.0.0.1 inside the container (not only domain.dev, but every *.dev domain). Other TLDs should work as usual.
An idea I have is to start dnsmasq inside the container, configured to resolve *.dev to 127.0.0.1 and forward the rest to the usual nameserver. But I am new to docker and have no idea if this is too complicated or how this could be accomplished.
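The dnsmasq side of that idea would presumably be tiny, something like the following configuration (8.8.8.8 as the upstream resolver is just an example); the container would then also have to use 127.0.0.1 as its own nameserver:
# answer 127.0.0.1 for every *.dev name, forward everything else upstream
address=/dev/127.0.0.1
server=8.8.8.8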
Another idea might be to overwrite /etc/hosts in the container with fixed entries for special domains. But this would mean I have to update the docker container in case I want to resolve new domains to 127.0.0.1, which is a drawback if the domains change often.
What do the docker experts say?
If you are using docker run to start your container, there is the --add-host argument, which takes a hostname and an IP that get written to the container's /etc/hosts.
Your startup command would look like this:
docker run -d --add-host="domain-a.dev:192.168.0.10" [...]
Replace 192.168.0.10 with your computer's local IP address.
Don't use 127.0.0.1 as Docker will resolve that to the container not your computer.
I basically worked around this now by using http://xip.io/
By using URLs like sub.127.0.0.1.xip.io I can connect to my local machine. My app only has to know that 127.0.0.1.xip.io is treated as the "top level domain", and sub is the domain name without the TLD. (In a Ruby on Rails app this can be done by adjusting config.action_dispatch.tld_length = 6, for example.)