Docker containers not using host DNS in boot2docker - dns

I am running boot2docker on my Mac.
OSX version 10.9.3
boot2docker version 4.3.12
Docker version 0.12.0
The boot2docker image is a Vagrant box using VirtualBox. I have tried a number of Vagrant boxes (for example stigkj/boot2docker); all of them exhibit the issue.
If I ssh into the boot2docker image and look at /etc/resolv.conf it is using the nameserver 10.0.2.3.
I boot up a simple docker image with the command:
docker run -i -t ubuntu /bin/sh
Looking at /etc/resolv.conf in that container, it is using 8.8.8.8 and 8.8.4.4 as nameservers.
In the docker.log file on the boot2docker vm, there is this line:
2014/06/30 15:25:01 Local (127.0.0.1) DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
From what I understand, Docker is supposed to use the nameserver of the host. Only if the host is using 127.0.0.1 as its nameserver should it fall back to the Google nameservers.
The host isn't using 127.0.0.1 as a nameserver, but it appears that Docker thinks it is. Any suggestions on how I can get it to properly detect the nameserver?

I found a fix.
It appears that the boot2docker image starts the Docker daemon before it pulls the DNS settings from the host. So the daemon sees 127.0.0.1 at boot, and only afterwards does the machine switch to the correct nameserver.
The fix is to restart the docker daemon after the image has booted.
In vagrant, I did this by adding the below command in the appropriate place in my Vagrantfile:
config.vm.provision :shell, inline: "/etc/init.d/docker restart"
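After the provisioner runs, the container DNS should match the host again. A quick sanity check (the ubuntu image is just an example):
docker run --rm ubuntu cat /etc/resolv.conf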
It looks like this is a known issue in boot2docker that will be fixed in an upcoming version:
https://github.com/boot2docker/boot2docker/issues/357

Credit to @oillio for the issue link and the discussion inside.
It happens in a Windows 7 environment as well, using boot2docker 1.0.1. I followed the suggestion in https://github.com/boot2docker/boot2docker/issues/357:
$ sudo udhcpc # refresh the DHCP
$ sudo /etc/init.d/docker restart # restart the service
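To avoid repeating this after every boot, boot2docker also executes /var/lib/boot2docker/bootlocal.sh at startup if it exists, so the same two commands can be placed there (a sketch; verify the path against your boot2docker version):
sudo sh -c 'printf "#!/bin/sh\nudhcpc\n/etc/init.d/docker restart\n" > /var/lib/boot2docker/bootlocal.sh'
sudo chmod +x /var/lib/boot2docker/bootlocal.sh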

Related

Node using require('os').networkInterfaces() gives 172 local ip address instead of 192, but only 192 appears to work [duplicate]

As the title says, I need to be able to retrieve the IP address of the Docker host and the port mappings from the host to the container, and to do that from inside the container.
/sbin/ip route|awk '/default/ { print $3 }'
As @MichaelNeale noticed, there is no sense in using this method in a Dockerfile (except when we need this IP during build time only), because the IP would be hardcoded at build time.
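If you only need the value at run time rather than build time, the same one-liner can be run through a throwaway container (a sketch; the alpine image ships BusyBox's ip applet):
docker run --rm alpine ip route | awk '/default/ { print $3 }'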
As of version 18.03, you can use host.docker.internal as the host's IP.
Works in Docker for Mac, Docker for Windows, and perhaps other platforms as well.
This is an update from the Mac-specific docker.for.mac.localhost, available since version 17.06, and docker.for.mac.host.internal, available since version 17.12, which may also still work on that platform.
Note, as in the Mac and Windows documentation, this is for development purposes only.
For example, I have environment variables set on my host:
MONGO_SERVER=host.docker.internal
In my docker-compose.yml file, I have this:
version: '3'
services:
  api:
    build: ./api
    volumes:
      - ./api:/usr/src/app:ro
    ports:
      - "8000"
    environment:
      - MONGO_SERVER
    command: /usr/local/bin/gunicorn -c /usr/src/app/gunicorn_config.py -w 1 -b :8000 wsgi
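With the variable exported on the host, Compose passes it straight through to the container (a sketch):
$ export MONGO_SERVER=host.docker.internal
$ docker-compose up -d api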
Update: On Docker for Mac, as of version 18.03, you can use host.docker.internal as the host's IP. See aljabear's answer. For prior versions of Docker for Mac the following answer may still be useful:
On Docker for Mac, the docker0 bridge does not exist, so other answers here may not work. All outgoing traffic, however, is routed through your parent host, so as long as you try to connect to an IP it recognizes as itself (and the docker container doesn't think is itself), you should be able to connect. For example, run this on the parent machine:
ipconfig getifaddr en0
This should show you the IP of your Mac on its current network and your docker container should be able to connect to this address as well. This is of course a pain if this IP address ever changes, but you can add a custom loopback IP to your Mac that the container doesn't think is itself by doing something like this on the parent machine:
sudo ifconfig lo0 alias 192.168.46.49
You can then test the connection from within the docker container with telnet. In my case I wanted to connect to a remote xdebug server:
telnet 192.168.46.49 9000
Now when traffic comes into your Mac addressed for 192.168.46.49 (and all the traffic leaving your container does go through your Mac), your Mac will assume that IP is itself. When you are finished using this IP, you can remove the loopback alias like this:
sudo ifconfig lo0 -alias 192.168.46.49
One thing to be careful about is that the docker container won't send traffic to the parent host if it thinks the traffic's destination is itself. So check the loopback interface inside the container if you have trouble:
sudo ip addr show lo
In my case, this showed inet 127.0.0.1/8 which means I couldn't use any IPs in the 127.* range. That's why I used 192.168.* in the example above. Make sure the IP you use doesn't conflict with something on your own network.
AFAIK, in the case of Docker for Linux (standard distribution), the IP address of the host will always be 172.17.0.1 (on Docker's main network; see the comments to learn more).
The easiest way to get it is via ifconfig (interface docker0) from the host:
ifconfig
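To print just the address rather than the full interface dump, something like this works too (assuming GNU grep on the host):
ip -4 addr show docker0 | grep -oP '(?<=inet )\d+(\.\d+)+'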
From inside a container, the following command returns it:
ip -4 route show default | cut -d" " -f3
You can run it quickly in a container with the following command line:
# 1. Run an ubuntu docker
# 2. Updates dependencies (quietly)
# 3. Install ip package (quietly)
# 4. Shows (nicely) the ip of the host
# 5. Removes the docker (thanks to `--rm` arg)
docker run -it --rm ubuntu:22.04 bash -c "apt-get update > /dev/null && apt-get install iproute2 -y > /dev/null && ip -4 route show default | cut -d' ' -f3"
For those running Docker in AWS, the instance metadata for the host is still available from inside the container.
curl http://169.254.169.254/latest/meta-data/local-ipv4
For example:
$ docker run alpine /bin/sh -c "apk update ; apk add curl ; curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ; echo"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/community/x86_64/APKINDEX.tar.gz
v3.3.1-119-gb247c0a [http://dl-cdn.alpinelinux.org/alpine/v3.3/main]
v3.3.1-59-g48b0368 [http://dl-cdn.alpinelinux.org/alpine/v3.3/community]
OK: 5855 distinct packages available
(1/4) Installing openssl (1.0.2g-r0)
(2/4) Installing ca-certificates (20160104-r2)
(3/4) Installing libssh2 (1.6.0-r1)
(4/4) Installing curl (7.47.0-r0)
Executing busybox-1.24.1-r7.trigger
Executing ca-certificates-20160104-r2.trigger
OK: 7 MiB in 15 packages
172.31.27.238
$ ifconfig eth0 | grep -oP 'inet addr:\K\S+'
172.31.27.238
One way is to pass the host information as an environment variable when you create the container:
docker run --env <key>=<value> ...
The --add-host option can be a cleaner solution (but without the port part; only the host can be handled this way). So, in your docker run command, do something like:
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [my container]
(From https://stackoverflow.com/a/26864854/127400 )
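Inside the container, the alias then resolves like any other /etc/hosts entry (a quick check; getent is available in most base images):
getent hosts dockerhost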
The standard best practice for most apps looking to do this automatically is: you don't. Instead you have the person running the container inject an external hostname/ip address as configuration, e.g. as an environment variable or config file. Allowing the user to inject this gives you the most portable design.
Why would this be so difficult? Because containers will, by design, isolate the application from the host environment. The network is namespaced to just that container by default, and details of the host are protected from the process running inside the container which may not be fully trusted.
There are different options depending on your specific situation:
If your container is running with host networking, then you can look at the routing table on the host directly to see the default route out. From this question, the following works for me, e.g.:
ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'
An example showing this with host networking in a container looks like:
docker run --rm --net host busybox /bin/sh -c \
"ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"
Current versions of Docker Desktop inject a DNS entry into the embedded VM:
getent hosts host.docker.internal | awk '{print $1}'
With the 20.10 release, the host.docker.internal alias can also work on Linux if you run your containers with an extra option:
docker run --add-host host.docker.internal:host-gateway ...
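For example, a throwaway check from a Linux host (a sketch):
docker run --rm --add-host host.docker.internal:host-gateway alpine ping -c 1 host.docker.internal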
If you are running in a cloud environment, you can check the metadata service from the cloud provider, e.g. the AWS one:
curl http://169.254.169.254/latest/meta-data/local-ipv4
If you want your external/internet address, you can query a remote service like:
curl ifconfig.co
Each of these has limitations and works only in specific scenarios. The most portable option is still to run your container with the IP address injected as configuration, e.g. here's an option running the earlier ip command on the host and injecting it as an environment variable:
export HOST_IP=$(ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p')
docker run --rm -e HOST_IP busybox printenv HOST_IP
It's possible to retrieve it using docker network inspect:
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
TLDR for Mac and Windows
docker run -it --rm alpine nslookup host.docker.internal
... prints the host's IP address ...
nslookup: can't resolve '(null)': Name does not resolve
Name: host.docker.internal
Address 1: 192.168.65.2
Details
On Mac and Windows, you can use the special DNS name host.docker.internal.
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
If you want the real IP address (not a bridge IP) on Windows and you have Docker 18.03 (or more recent), do the following:
Run a shell in a container from the host. Here the image name is nginx (ash works on the Alpine Linux distribution):
docker run -it nginx /bin/ash
Then run inside container
/ # nslookup host.docker.internal
Name: host.docker.internal
Address 1: 192.168.65.2
192.168.65.2 is the host's IP, not the bridge IP as in spinus's accepted answer.
I am using host.docker.internal here:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
On Linux you can run
HOST_IP=`hostname -I | awk '{print $1}'`
On macOS your host machine is not the Docker host; Docker installs its host OS in VirtualBox.
HOST_IP=`docker run busybox ping -c 1 docker.for.mac.localhost | awk 'FNR==2 {print $4}' | sed s'/.$//'`
I have Ubuntu 16.03. For me
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [image]
does NOT work (the wrong IP was generated).
My working solution was that:
docker run --add-host dockerhost:`docker network inspect --format='{{range .IPAM.Config}}{{.Gateway}}{{end}}' bridge` [image]
Docker for Mac
I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
The gateway is also reachable as gateway.docker.internal.
https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
If you have enabled the Docker remote API (via -H tcp://0.0.0.0:4243, for instance) and know the host machine's hostname or IP address, this can be done with a lot of bash.
Within my container's user's bashrc:
export hostIP=$(ip r | awk '/default/{print $3}')
export containerID=$(awk -F/ '/docker/{print $NF;exit;}' /proc/self/cgroup)
export proxyPort=$(
  curl -s http://$hostIP:4243/containers/$containerID/json |
    node -pe 'JSON.parse(require("fs").readFileSync("/dev/stdin").toString()).NetworkSettings.Ports["DESIRED_PORT/tcp"][0].HostPort'
)
The second line grabs the container ID from your local /proc/self/cgroup file.
The third line curls the host machine (assuming you're using 4243 as Docker's port) and then uses node to parse the returned JSON for DESIRED_PORT.
My solution:
docker run --net=host
then, in the docker container:
hostname -I | awk '{print $1}'
Here is another option for those running Docker in AWS. This option avoids having to use apk to add the curl package, saving a precious 7 MB of space, by using the built-in wget (part of the monolithic BusyBox binary):
wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4
Use the hostname -I command in the terminal.
Try this:
docker run --rm -i --net=host alpine ifconfig
So... if you are running your containers using a Rancher server (Rancher v1.6; not sure if 2.0 has this), containers have access to http://rancher-metadata/, which has a lot of useful information.
From inside the container the IP address can be found here:
curl http://rancher-metadata/latest/self/host/agent_ip
For more details see:
https://rancher.com/docs/rancher/v1.6/en/rancher-services/metadata-service/
This is a minimalistic implementation in Node.js for those running the host on AWS EC2 instances, using the aforementioned EC2 metadata endpoint:
const cp = require('child_process');

const ec2 = function (callback) {
  const URL = 'http://169.254.169.254/latest/meta-data/local-ipv4';
  // we make it silent and timeout to 1 sec
  const args = [URL, '-s', '--max-time', '1'];
  const opts = {};
  cp.execFile('curl', args, opts, (error, stdout) => {
    if (error) return callback(new Error('ec2 ip error'));
    else return callback(null, stdout);
  })
    .on('error', (error) => callback(new Error('ec2 ip error')));
}; // ec2
and it is used as:
ec2(function (err, ip) {
  if (err) console.log(err);
  else console.log(ip);
});
If you are running a Windows container on a Service Fabric cluster, the host's IP address is available via the environment variable Fabric_NodeIPOrFQDN (see the Service Fabric environment variables documentation).
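Inside a Windows container, it can be read like any other environment variable, e.g. (a quick sketch from cmd):
echo %Fabric_NodeIPOrFQDN%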
Here is how I do it. In this case, it adds an entry to /etc/hosts within the Docker container pointing taurus-host to my local machine's IP:
TAURUS_HOST=`ipconfig getifaddr en0`
docker run -it --rm -e MY_ENVIRONMENT='local' --add-host "taurus-host:${TAURUS_HOST}" ...
Then, from within the Docker container, a script can use the hostname taurus-host to get out to my local machine, which hosts the container.
Maybe the container I've created is useful as well: https://github.com/qoomon/docker-host
You can then simply use the container name as DNS to access the host system, e.g. curl http://dockerhost:9200, so there is no need to hassle with any IP address.
The solution I use is based on a "server" that returns the external address of the Docker host when it receives an HTTP request.
On the "server":
1) Start jwilder/nginx-proxy
# docker run -d -p <external server port>:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
2) Start ipify container
# docker run -e VIRTUAL_HOST=<external server name/address> --detach --name ipify osixia/ipify-api:0.1.0
Now when a container sends an HTTP request to the server, e.g.
# curl http://<external server name/address>:<external server port>
the IP address of the Docker host is returned by ipify via the X-Forwarded-For HTTP header.
Example (ipify server has name "ipify.example.com" and runs on port 80, docker host has IP 10.20.30.40):
# docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# docker run -e VIRTUAL_HOST=ipify.example.com --detach --name ipify osixia/ipify-api:0.1.0
Inside the container you can now call:
# curl http://ipify.example.com
10.20.30.40
On Ubuntu, the hostname command can be used with the following options:
-i, --ip-address addresses for the host name
-I, --all-ip-addresses all addresses for the host
For example:
$ hostname -i
172.17.0.2
To assign it to a variable, the following one-liner can be used:
IP=$(hostname -i)
Another approach is based on traceroute, and it works for me on a Linux host, e.g. in a container based on Alpine:
traceroute -n 8.8.8.8 -m 4 -w 1 | awk '$1 ~ /^[0-9]+$/ && $2 !~ /^172\./ {print $2}' | head -1
It takes a moment, but it lists the IP of the first hop that does not start with 172. If there is no successful response, try increasing the hop limit set by the -m 4 argument.
With Docker Machine (https://docs.docker.com/machine/install-machine/), you can get the IP address of one or more machines:
$ docker-machine ip
$ docker-machine ip host_name
$ docker-machine ip host_name1 host_name2

No Internet Access In Docker Container When Connected to Cisco AnyConnect VPN

I am connected to a corporate VPN, and I need to be able to run Docker containers while the VPN is connected, because the containers need to access corporate endpoints. However, when I am connected with AnyConnect VPN, Docker has no internet access at all: neither to our corporate endpoints nor to the internet.
I am running CentOS7 as my host operating system.
A simple way to reproduce this issue is to install a minimal Linux distro, install AnyConnect VPN, connect to the VPN, and try to run the following Docker container:
docker run -i -t ubuntu:14.04 /bin/bash
Once inside the container, I try to ping the Google DNS:
[###]$ ping 8.8.8.8
There will be no response. If I disconnect from AnyConnect VPN and retry the above, I get a ping response.
How can I fix this issue?
Ping and internet access are different things. You may be able to access the internet but not ping, as ICMP can be limited by your corporate network. I suggest running busybox:
docker run -it --rm busybox
and checking the DNS setup inside:
cat /etc/resolv.conf
From there you will see a list of nameserver IP addresses. Try to ping those to make sure they are reachable from inside the container. If not, try
traceroute 1.2.3.4
to see how far you can get from inside the container. The first two lines should be the IPs of the Docker bridge and the host machine, followed by the IPs of your corporate network:
1 172.17.0.1 (172.17.0.1) 0.016 ms 0.011 ms 0.009 ms
2 10.1.249.4 (10.1.249.4) 38.487 ms 35.697 ms 35.558 ms
Usually the problem is the nameserver generated inside the container's /etc/resolv.conf file. If that is the case, check /etc/resolv.conf on the host machine and update the Docker setup so that it generates the nameservers correctly inside the container.
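For example, one way to pin the nameservers for every container is /etc/docker/daemon.json (a sketch; replace the addresses with your corporate resolvers):
{
  "dns": ["10.1.2.3", "8.8.8.8"]
}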
After you make a change to the network interfaces, you often need to restart the docker engine to rebuild all of the routes and iptables entries. With Linux and systemd, use:
systemctl restart docker

Connect to host mongodb from docker container

So I want to connect to my mongodb running on my host machine (DO droplet, Ubuntu 16.04). It is running on the default 27017 port on localhost.
I then use mup to deploy my Meteor app on my DO droplet, which is using docker to run my Meteor app inside a container. So far so good.
A standard mongodb://... connection url is used to connect the app to the mongodb.
Now I have the following problem:
mongodb://...@localhost:27017... obviously does not work inside the docker container, as localhost is not the host's localhost.
I already read many stackoverflow posts on this, I already tried using:
--network="host" - did not work as it said 0.0.0.0:80 is already in use or something like that (nginx proxy)
--add-host="local:<MY-DROPLET-INTERNET-IP>" and connect via mongodb://...#local:27017...: also not working as I can access my mongodb only from localhost, not from the public IP
This has to be a common problem!
tl;dr - What is the proper way to expose the host's localhost inside a Docker container, so I can connect to services running on the host (including their ports, e.g. 27017)?
I hope someone can help!
You can use 172.17.0.1, as it is the default host IP that the containers can see, but you need to configure MongoDB to listen on 0.0.0.0.
From docker 18.03 onwards the recommendation is to connect to the special DNS name host.docker.internal
For previous versions you can use DNS names docker.for.mac.localhost or docker.for.windows.localhost.
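With that, the connection string from the question becomes something like (a sketch; credentials and database are placeholders):
mongodb://<user>:<password>@host.docker.internal:27017/<database>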
Change the bindIp from 127.0.0.1 to 0.0.0.0 in /etc/mongod.conf; then it will work.
Or start mongod on Ubuntu with a flag that binds all IP addresses, as a temporary workaround (dev/learning purposes):
$ mongod --bind_ip_all
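In config-file form, the equivalent of --bind_ip_all is the net section of /etc/mongod.conf (a sketch):
net:
  port: 27017
  bindIp: 0.0.0.0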
Tried countless variants for Windows (using Docker Desktop), but without any result...
Unfortunately, Windows (at least Docker Desktop) currently does not support --net=host.
Quoted from: https://docs.docker.com/network/network-tutorial-host/#prerequisites
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
You can try to use https://docs.docker.com/toolbox/

Docker daemon and DNS

I am trying to force the Docker daemon to use my DNS server, which is bound to the bridge0 interface.
I have added --dns 172.17.42.1 to my docker_opts, but with no success.
The DNS server replies OK to a dig command:
dig @172.17.42.1 registry.service.consul SRV +short
1 1 5000 registry2.node.staging.consul.
But pull with this domain fails:
docker pull registry.service.consul:5000/test
FATA[0000] Error: issecure: could not resolve "registry.service.consul": lookup registry.service.consul: no such host
PS: Adding nameserver 172.17.42.1 to my /etc/resolv.conf solves the issue, but this DNS should be used exclusively for Docker commands.
Any idea?
You passed --dns 172.17.42.1 in docker_opts, so containers should be able to resolve those hostnames from inside. But you are running docker pull from the host, not from a container, aren't you? Therefore it's not surprising that you cannot resolve the hostname from your host, because the host is not configured to use 172.17.42.1 for resolving.
I see two possible solutions here:
Force your host to use 172.17.42.1 as DNS (/etc/resolv.conf etc).
Create a special container with the Docker client inside and mount docker.sock into it. This will let you use all client commands, including pull:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock:rw --name=client ...
docker exec -it client docker pull registry.service.consul:5000/test

Docker DNS issue on local machine

I have an issue with Docker not resolving my local DNS. Running even a basic ping will no longer work. Current version 0.11.1 running on Fedora 20. The last time I worked with docker (version 0.9) everything was fine.
sudo docker run base ping google.com
ping: unknown host google.com
My local DNS resolves fine outside of Docker, and I don't have localhost (127.0.0.1) set in my resolv.conf file. I have also tried setting the DNS explicitly, with the same outcome:
sudo docker run --dns=8.8.8.8 base ping google.com
ping: unknown host google.com
Any help would be greatly appreciated.
If anyone else has this issue, I got it working by flushing the iptables rules:
iptables -F
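After flushing, it's also worth restarting the Docker daemon so it re-creates its own NAT and forwarding rules (systemd assumed):
sudo systemctl restart docker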
For a more permanent solution after restarting, I listed the iptables rules before and after flushing but couldn't really see what was affecting them. I ended up opening the Firewall Configuration tool, and enabling masquerading on the zone worked. Not sure why this setting had changed, or whether newer Docker versions now need it to be set, but it works. Interestingly, I had previously tried just enabling IP forwarding (sysctl -w net.ipv4.ip_forward=1), but that had no effect for me.
