Routing TLD in docker image to 127.0.0.1 - dns

Foreword
My Rails app cares about the hostname: for example, a request to domain-a.dev behaves differently from a request to domain-b.dev. I want to test this behaviour, so I have routed the entire *.dev TLD to 127.0.0.1 on my local machine. That way I can set the domain in my tests to whatever I want, while the requests always hit my local test server.
This is necessary because my tests use Selenium, which launches an external browser and navigates to domain-a.dev or domain-b.dev. So I cannot simply overwrite request.hostname (or similar) in my tests, because that has no effect on the external browser.
Now I want to use a Docker image for my tests, so that I do not have to configure the test environment on multiple servers but can simply start the Docker image. Everything works so far except the *.dev resolution.
AFAIK Docker uses the host's nameserver or Google's nameservers by default (https://docs.docker.com/articles/networking/#dns), but that would mean changing the host's DNS to accomplish my goal, which I don't want.
I want to build a Docker image in which a particular TLD, for example .dev, always resolves to 127.0.0.1, without touching the Docker host.
This means that for everybody running this Docker image, domain.dev, for example, will resolve to 127.0.0.1 inside the container (and not only domain.dev, but every *.dev domain). Other TLDs should keep working as usual.
One idea is to start dnsmasq inside the container, configured to resolve *.dev to 127.0.0.1 and to forward everything else to the usual nameserver. But I am new to Docker and have no idea whether this is too complicated or how it would be wired into the image.
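Roughly, what I imagine this would look like (untested on my side, and the file names are just illustrative) is an entrypoint along these lines:
#!/bin/sh
# Keep a copy of the nameservers Docker wrote into the container,
# so dnsmasq can forward non-.dev queries to them
cp /etc/resolv.conf /etc/resolv.dnsmasq.conf
# Resolve every *.dev name to 127.0.0.1, forward everything else upstream
dnsmasq --address=/dev/127.0.0.1 --resolv-file=/etc/resolv.dnsmasq.conf
# Point the container's own resolver at the local dnsmasq
echo 'nameserver 127.0.0.1' > /etc/resolv.conf
# Hand over to the actual test command
exec "$@"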
Another idea is to overwrite /etc/hosts in the container with fixed entries for the special domains. But then I would have to update the Docker container whenever I want to resolve a new domain to 127.0.0.1, which is a drawback if the domains change often.
What do the docker experts say?

If you are using docker run to start your container, you have the --add-host="" argument which takes a hostname and an IP that get written to the container's /etc/hosts.
Your startup command would look like this:
docker run -d --add-host="domain-a.dev:192.168.0.10" [...]
Replace 192.168.0.10 with your computer's local IP address.
Don't use 127.0.0.1, as Docker will resolve that to the container, not your computer.
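The flag can be repeated, so both test domains from the question can be mapped in one command (the IP is a placeholder for your machine's LAN address):
docker run -d --add-host="domain-a.dev:192.168.0.10" --add-host="domain-b.dev:192.168.0.10" [...]
Note that this only covers the domains you list explicitly, not a whole *.dev wildcard.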

I basically worked around this now by using http://xip.io/
By using URLs like sub.127.0.0.1.xip.io I can connect to my local machine. My app only has to know that 127.0.0.1.xip.io is treated as the "top level domain", and that sub is the domain name without the TLD. (In a Ruby on Rails app this can be done by setting config.action_dispatch.tld_length = 6, for example.)

Related

Google Cloud Firewall Exposing Port Docker

I managed to successfully deploy a docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / Express app that returns "Hi there!" via res.json() on the root path. It is listening on port 3000.
I think the deploy process was this:
Built the Docker image from the Node.js / Express source.
Ran the container on the local command line with the ports correctly exposed. It works locally.
Tagged the image with the correct project ID / zone.
Pushed it to the VM. I think I pushed the image rather than the container. Is this a problem?
SSHed into the VM, ran docker ps, and saw the running container with the correct image tag.
Used command-line curl (I am using a zsh terminal) as well as a browser to check network requests. Getting a connection refused error.
I'm a beginner, but the Google firewall settings appear to be open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I get a connection refused error whenever I try to contact the IP address, either with my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewall, and I have confirmed that my Docker container is running in the VM (and the source code works on my machine).
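For reference, a quick check from inside the VM can separate a container problem from a firewall problem (the format string is just one way to display the port mapping; no actual output is shown here):
# from the SSH session on the VM
docker ps --format "table {{.Names}}\t{{.Ports}}"
# the PORTS column should show something like 0.0.0.0:3000->3000/tcp
curl http://localhost:3000/
# if this local curl succeeds, the container and port publishing are fine
# and the remaining suspect is the GCP firewall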
EDIT:
Updated Firewall Rules with Port 3000 Ingress:
You need a firewall rule that permits traffic to tcp:3000.
Preferably allow it only from your own IP (google "what's my IP?" and use that), but for now you can temporarily allow any IP with 0.0.0.0/0.
Firewall rules can also be applied to just the VM running your container, but I wouldn't worry about that initially.
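A sketch of how such a rule could be created with the gcloud CLI (the rule name is made up; tighten --source-ranges to your own IP once it works, and optionally add --target-tags to scope it to the VM):
gcloud compute firewall-rules create allow-node-3000 \
    --allow=tcp:3000 \
    --source-ranges=0.0.0.0/0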

Confluence in Docker can't see PostgreSQL in Docker

I'm trying to set up both Confluence and PostgreSQL in Docker. I've got them both up and running on my fully up to date CentOS 6 machine, with volume-mapping to the host file system so I can back them up easily. I can connect to PostgreSQL using pgAdmin from another machine just fine, and I can get into Confluence from a browser from that same machine. So, basically, both apps seem to be running as expected inside their respective containers and are accessible to the outside world, which of course eliminates a whole bunch of possibilities for my issue.
And that issue is that Confluence can't talk to PostgreSQL during initial setup, which is necessary for it to function. I'm getting connection failed errors (to be specific: "Can't reach database server or port : SQLState - 08001 org.postgresql.util.PSQLException: The connection attempt failed").
PostgreSQL is using the default port 5432, which is of course exposed, otherwise I wouldn't be able to connect to it via pgAdmin; and I know the ID/password I'm trying is correct for the same reason (besides, if it were an auth problem I wouldn't expect this error message). When I configure the database connection during Confluence's initial setup, I specify the IP address of the host machine, just as I do from pgAdmin on the other machine, but that doesn't work. I also tried a few things I basically knew wouldn't work (0.0.0.0, 127.0.0.1 and localhost).
I'm not sure what I need to do to make this work. Is there maybe some special method to specify the IP to a container from the same host machine, some nomenclature I'm not aware of?
At this point, I'm "okay" with Docker in terms of basic operations, but I'm far from an expert, so I'm a bit lost. I'm also not a big-time *nix user generally, though I can usually fumble my way through most things... but any hints would be greatly appreciated because I'm at a loss right now otherwise.
Thanks,
Frank
EDIT 1: As requested by someone below, here's my pg_hba.conf file, minus comments:
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
host all all all md5
Try changing the second line of the pg_hba.conf file to the following:
host all all 0.0.0.0/0 trust
This will cause PostgreSQL to start accepting connections from any source address. Since a Docker container is technically not operating on localhost but on its own IP, the current configuration causes PostgreSQL to block its connections.
Also check whether Confluence is looking for the database on localhost. If that is the case, change it to the IP of the host machine within the Docker network.
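For reference, from a container on the default bridge the host machine is normally reachable at the docker0 gateway address; a quick way to check it on the host (the address shown is only the typical default):
ip addr show docker0
# the "inet" line, usually 172.17.0.1/16, is the host's address on the Docker bridge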
Success! The solution was to create a custom network and then use the container name as the hostname in the connection string from the Confluence container to the PostgreSQL container. In other words, I ran this:
docker network create -d bridge docker-net
Then, on both of the docker run commands for the PostgreSQL and Confluence containers, I added:
--network=docker-net
That way, when I ran through the Confluence configuration wizard, when it asked for the hostname for the PostgreSQL server, I used postgres (the name I gave the container) rather than an IP address or actual hostname. Docker makes that work thanks to the custom network. This also leaves the containers available via the IP of the host machine, so for example I can still connect to PostgreSQL via 192.168.123.12:5432, and of course I can launch Confluence in the browser via 192.168.123.12:8080.
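For anyone following along, the commands could look roughly like this (image tags, volume paths, published ports and the password are placeholders; the parts that matter are --network and --name):
docker network create -d bridge docker-net
docker run -d --name postgres --network=docker-net -p 5432:5432 \
    -e POSTGRES_PASSWORD=changeme \
    -v /var/docker/postgres:/var/lib/postgresql/data postgres:latest
docker run -d --name confluence --network=docker-net -p 8080:8090 \
    -v /var/docker/confluence:/var/atlassian/application-data/confluence \
    atlassian/confluence-server
In the Confluence setup wizard, the database hostname is then simply postgres and the port 5432.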
FYI, I didn't even have to alter the pg_hba.conf file, I just used the official PostgreSQL image (latest) as it was, which is ideal.
Thanks very much to RSloeserwij for the suggestions... while none of them proved to be the solution I needed, they did put me on the right track in the Docker docs, which, after some reading, led me to understand a few things I didn't before and figure out the config magic I needed.

linux redirect localhost port to url port

I need to redirect localhost:8080 to http://url:8080/.
Some background:
I am using Docker Swarm stack services. One service (MAPS) runs a simple HTTP server that serves XML files on port 8080, and another service (WAS) runs WebSphere Application Server with a connector that uses these files. To be more precise, it reads a file maps.xml that lists the URLs of the other files as http://localhost:8080/<file-name>.xml.
I know Docker lets me call on the service name and port between services, so I can run curl http://MAPS:8080/ from inside my WAS service and it returns my list of XML files.
However, this will not always be true. The prod team may change the port number they want to publish or they might update the maps.xml file and forget to change localhost:8080 to MAPS:8080.
Is there a way to make any call to localhost:8080 get redirected to another URL, preferably using a configuration file? It also needs to be lightweight, since the WAS service is already quite heavy and I can't make it much larger to deploy.
Solutions I tried:
iptables: installed it in the WAS service container, but when I tried to use it, it said my kernel was outdated.
tinyproxy: tried setting it up as a reverse proxy, but I couldn't make it work.
ncat with inetd: tried this solution, but it didn't work either.
I am NO expert so please excuse any noob mistakes I made. And thanks in advance!
It is generally not a good idea to redirect localhost to another location as it might disrupt your local environment in surprising ways. Many packages depend on localhost being localhost :-)
It is possible, however, to add MAPS to the hosts file (/etc/hosts) of the WAS container, giving it the address of the MAPS service.
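A minimal sketch of what that could look like (the IP is purely illustrative; with swarm services the same mapping can also be passed at creation time via --host):
# entry added to the WAS container's /etc/hosts
10.0.1.5    MAPS
# equivalent at service creation time
docker service create --name WAS --host "MAPS:10.0.1.5" ...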

Docker app and database on different containers

I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the curl command I found out they're connected and I can access them via their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want the container of my app to be able to access MongoDB at "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which will make the container use your host network instead of the default bridge. You should note that this can raise some security issues (because the container can potentially access any other service you run in your host)...
You can run something inside your container that maps a local port to a remote port (e.g. rinetd or an SSH tunnel). This basically creates a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT (a small rinetd sketch follows at the end of this answer).
As stated in the comments, create a small script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}') and expose it as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then, inside the container, use the environment variable named DOCKER_HOST to connect to your services.
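As an example of the second option above, a minimal rinetd sketch (the 172.17.0.1 bridge address and the ports are placeholders; rinetd would need to be installed in the app image and started before the app):
# /etc/rinetd.conf
# bindaddress  bindport  connectaddress  connectport
127.0.0.1      27017     172.17.0.1      27017
127.0.0.1      6379      172.17.0.1      6379
With this in place the app can keep using localhost:27017 for MongoDB and localhost:6379 for Redis, while the connections are actually forwarded to the services published on the Docker host.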

Docker set container IP addr from DHCP/DNS automatically

While I found a few options to force Docker containers to have an IP address that matches my DNS, I have not found any method to enforce a permanent IP address automatically at "docker start".
Currently I have a small script that retrieves IP addresses from my DNS and configures the container's network directly from the Docker host. Unfortunately this method does not work with automatic start, which prevents the use of tools like docker-ui.
My script is available at https://gist.github.com/fulup-bzh/8fd32c8cde9a1e5ec971. The script works fine, but I would like this to happen automatically so that users can leverage tools like docker-ui.
Question:
- Is there a way to force the execution of an "autostart" script at host level during "docker start" execution?
- Alternatively, is there a way to set the container IP from DHCP automatically from the host?
