Docker set container IP addr from DHCP/DNS automatically - dns

While I found a few options to force Docker containers to have an IP address that matches my DNS, I did not find any method to enforce a permanent IP address automatically at "docker start".
Currently I have a small script that retrieves IP addresses from my DNS and configures the container's network directly from the Docker host. Unfortunately this method does not work with automatic start, which prevents the use of tools like docker-ui.
My script is available at https://gist.github.com/fulup-bzh/8fd32c8cde9a1e5ec971 The script works fine, but I would like this to happen automatically so that users can leverage tools like "docker-ui".
Question:
- is there a way to force the execution of an "autostart" script at host level during "docker start" execution?
- alternatively, is there a way to set the container IP from DHCP automatically from the host?
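For reference, a minimal sketch of the closest built-in mechanism: on a user-defined network, docker run can pin a static address, so a container comes back with the same IP on every start (the subnet, network name and address below are placeholders, and this does not pull the address from DHCP/DNS automatically):
# Create a user-defined bridge with a known subnet (one-time setup)
docker network create --subnet=172.18.0.0/16 staticnet
# Start the container with the address your DNS already advertises for it
docker run -d --net=staticnet --ip=172.18.0.10 --name myservice myimage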

Related

Google Cloud Firewall Exposing Port Docker

I managed to successfully deploy a Docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / Express app that simply responds with res.json() "Hi there!" on the root path. It is listening on port 3000.
I think the deploy process was this:
- Built the Docker image from the Node.js / Express source.
- Ran the container on the local command line, correctly exposing ports. It works locally.
- Tagged the image with the correct project ID / zone.
- Pushed to the VM. I think I pushed the image, rather than the container. Is this a problem?
- SSHed into the VM. Ran docker ps and saw the running container with the correct image tag.
- Used command-line curl (I am using a zsh terminal) as well as a browser to check network requests. I am getting a connection refused error.
I am a beginner, but the Google firewall settings appear to be open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I am getting a connection refused error whenever I try to contact the IP address, either with my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my Docker container is running in the VM (and the source code works on my machine).
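(A quick way to rule out the container side before blaming the firewall is to check, over SSH on the VM, that the port was actually published; the mapping shown here is an assumption:)
# List the running container and its port mappings
docker ps --format '{{.Names}}\t{{.Ports}}'
# Hit the app locally on the VM; if this fails, the container was probably
# started without -p 3000:3000 and the firewall is not the (only) problem
curl http://localhost:3000/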
EDIT:
Updated Firewall Rules with Port 3000 Ingress:
You need a firewall rule that permits traffic to tcp:3000.
Preferably from just your host's IP (Google "what's my IP?" and use that), but for now you can (temporarily) allow any IP with 0.0.0.0/0.
Firewall rules can also be scoped to just the VM running your container, but I wouldn't worry about that initially.
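A hedged example of that rule with the gcloud CLI (the rule name is a placeholder, and 0.0.0.0/0 should be narrowed to your own IP later):
# Allow inbound TCP traffic to port 3000 from anywhere (temporary)
gcloud compute firewall-rules create allow-node-3000 \
    --direction=INGRESS --action=ALLOW --rules=tcp:3000 \
    --source-ranges=0.0.0.0/0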

Getting the hostname of remote computers on the local network that are not set up in /etc/hosts

I learned something new while trying to get hostnames using Python's socket module.
From my MacBook I ran the code below:
socket.gethostbyaddr("192.168.1.111")
and I get ('rock64', [], ['192.168.1.111']). Then I tried the IP address of a computer that is not on the network anymore but used to be:
socket.gethostbyaddr("192.168.1.189")
and it returned ('mint', [], ['192.168.1.189']), so I realised it is coming from the /etc/hosts file.
Now in that hosts file I also have this entry:
/etc/hosts
172.217.25.3 google.com.hk
But if I try to get the host from the IP of a WAN address, I get different results than expected!
socket.gethostbyaddr("172.217.25.3")
That returns ('hkg07s24-in-f3.1e100.net', ['3.25.217.172.in-addr.arpa'], ['172.217.25.3']).
So I am now wondering where, in the latter case of the WAN IP address, the hostname comes from, and why in the case of local computer IPs I get the hostname from the configured /etc/hosts file.
How can we get the hostname of computers on the local network without socket.gethostbyaddr having to look into the /etc/hosts file, or by some other means?
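(As an aside on the first point, a plain reverse-DNS lookup from the shell bypasses /etc/hosts entirely, which makes the difference between the two cases visible; the addresses are the ones from the question:)
# PTR lookup straight against DNS; /etc/hosts is never consulted
dig -x 172.217.25.3 +short
# Typically prints hkg07s24-in-f3.1e100.net. (as seen above), while
# 192.168.1.111 has no PTR record on most home networks and prints nothing
dig -x 192.168.1.111 +short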
This is an opinion-based answer to the question "how to build a registry of network devices on your local network?"
The best way to build a registry of devices on your local network is to set up ntopng on your gateway. It uses DPI (Deep Packet Inspection) techniques to collect information about hosts.
ntopng has a nice user interface and displays host names (when possible).
You can assign aliases to specific hosts which do not leak host names via any protocol.
For some reason the ntopng developers did not include the alias in the JSON response for the request http://YOUR-SERVER:3000/lua/host_get_json.lua?ifid=2&host=IP-OF-DEVICE .
You can add it manually by adding the lines require "mac_utils" and hj["alias"]=getDeviceName(hj["mac_address"]) to the file /usr/share/ntopng/scripts/lua/host_get_json.lua
You can use the REST API to interrogate ntopng and use the provided information to build any script you need.
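For example, a hedged sketch of such an interrogation (server address, credentials, ifid and device IP are all placeholders for your own setup; jq is only used to pull out the alias field added above):
# Ask ntopng for one host's JSON record and extract its alias
curl -s -u admin:admin "http://YOUR-SERVER:3000/lua/host_get_json.lua?ifid=2&host=192.168.1.111" | jq '.alias'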

How to frame my URL to send a GraphQL request from a Windows machine to an application running inside a container on a Linux machine?

I have a Linux machine assigned to me on the AWS cloud, and I am running my Spring Boot application in a Docker container on this Linux machine. To hit this application's GraphQL endpoint from my Windows laptop, what hostname or URL should I use, and how do I frame it?
In general, if this application were running locally, I would use something like http://localhost:8080/graphQL.
The Dockerfile for this application has this command: EXPOSE 8080.
I am confused because there is a Linux machine IP address and also a container IP address, and I don't know which one to use, so I tried both.
On that Linux machine I typed 'ip address' in its terminal and it threw a bunch of information at me, and I am not sure which one is my IP address.
To get the IP address of the container I used the command below, and it returned an address. I tried https://172.17.5.3:8080/graphql from my Windows laptop, but it does not return a response.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}
{{end}}' container_name_or_id
Please let me know if you need any additional information.
The Dockerfile for this application has this command: EXPOSE 8080.
The EXPOSE directive doesn't really do anything (in this context). It's informative: it tells you that the container will run a service on port 8080. This will not, by default, be available from outside of your Docker host.
You can expose this port on your Linux machine by "publishing" it when you start the container. You can do this using the --publish (-p) option to docker run. For example, if you were to start your container like:
docker run -p 8080:8080 ...
Then you would be able to access the service on port 8080 of your Linux machine's IP address or hostname, assuming that there aren't firewall rules in place that prevent the connection.
You can read more about Docker port publishing (and networking in general) in this document.
On that Linux machine I typed 'ip address' in its terminal and it threw a bunch of information at me, and I am not sure which one is my IP address.
You would generally use your instance's public IP address. This document has information on working with public IP addresses in AWS.
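Putting that together, a hedged example of the URL from the Windows laptop (the public IP is a placeholder, the path depends on how the Spring controller maps the endpoint, and it is plain http unless you have configured TLS):
# From the Windows machine, after starting the container with -p 8080:8080
curl -X POST "http://EC2-PUBLIC-IP:8080/graphql" -H "Content-Type: application/json" -d "{\"query\": \"{ __typename }\"}"
Even an error response here proves the port is reachable; only a connection refused or timeout points back at port publishing or the firewall.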

Docker app and database on different containers

I have a Node app running in one Docker container, a Mongo database in another, and a Redis database in a third. In development I want to work with these three containers (and not pollute my system with database installations), but in production I want the databases installed locally and the app in Docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the curl command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile, using iptables? I want my app's container to be able to access MongoDB using "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve what you want:
You can use --net=host, which will make the container use your host's network stack instead of the default bridge. You should note that this can raise some security issues (because the container can potentially access any other service you run on your host).
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT.
As stated in the comments, you can create a small script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}') and then expose it as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then, inside the container, use the environment variable named DOCKER_HOST to connect to your services.
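For completeness, the getip wrapper mentioned above could be as small as the sketch below (the name getip comes from this answer; on distributions where ifconfig prints "inet 172.17.0.1" instead of "inet addr:172.17.0.1", the awk pattern would need adjusting):
#!/bin/sh
# getip: print the host's docker0 bridge address
ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'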

Routing TLD in docker image to 127.0.0.1

Foreword
My Rails app cares about the hostname. For example, when the request comes from domain-a.dev it behaves differently than when the request comes from domain-b.dev. I want to test this behaviour and have therefore routed the complete *.dev TLD to 127.0.0.1 on my local machine, so I can set the server's domain in my tests to whatever I want while it always hits my local test server.
This is necessary because my tests use Selenium, which launches an external browser and browses to domain-a.dev or domain-b.dev. So I cannot simply overwrite request.hostname (or similar) in my tests, because that has no effect on the external browser.
Now I want to use a Docker image for my tests, so I do not have to configure the test environment on multiple servers but can simply start the Docker image. Everything works so far except the *.dev resolving.
AFAIK Docker uses the host's nameserver or Google's nameservers by default (https://docs.docker.com/articles/networking/#dns), but that would mean changing the host's DNS to accomplish my goal, which I don't want.
I want to build a Docker image where a special TLD, for example dev, always resolves to 127.0.0.1, without touching the Docker host.
This means that for everybody running this Docker image, for example domain.dev will be resolved to 127.0.0.1 inside the container (not only domain.dev, but every *.dev domain). Other TLDs should work as usual.
An idea I have is to start dnsmasq inside the container, configured to resolve *.dev to 127.0.0.1 and forward the rest to the usual nameserver. But I am new to Docker and have no idea whether this is too complicated or how it could be accomplished.
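(For illustration, and untested, the dnsmasq configuration I have in mind would only need a couple of lines; the upstream resolver address is just a placeholder:)
# /etc/dnsmasq.conf inside the container
address=/dev/127.0.0.1      # answer 127.0.0.1 for every *.dev name
server=8.8.8.8              # forward everything else to an upstream resolver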
Another idea might be to overwrite /etc/hosts in the container with fixed entries for special domains. But this would mean I have to update the docker container in case I want to resolve new domains to 127.0.0.1, which is a drawback if the domains change often.
What do the docker experts say?
If you are using docker run to start your container, you have the --add-host="" argument which takes a hostname and an IP that get written to the container's /etc/hosts.
Your startup command would look like this:
docker run -d --add-host="domain-a.dev:192.168.0.10" [...]
Replace 192.168.0.10 with your computer's local IP address.
Don't use 127.0.0.1, as Docker will resolve that to the container, not your computer.
I basically worked around this now by using http://xip.io/
By using URLs like sub.127.0.0.1.xip.io I can connect to my local machine. My app only has to know that 127.0.0.1.xip.io is treated as the "top-level domain", and sub is the domain name without the TLD. (In a Ruby on Rails app this can be done by setting config.action_dispatch.tld_length = 6, for example, since 127.0.0.1.xip.io consists of six dot-separated labels.)
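For instance (the port is a placeholder), any name under that suffix resolves back to the local machine, so the Selenium browser can simply hit:
# Both names resolve to 127.0.0.1 via xip.io and reach the same local test server
curl http://domain-a.127.0.0.1.xip.io:3000/
curl http://domain-b.127.0.0.1.xip.io:3000/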
