I am trying to use a script made by someone else to spin up some docker containers on my Linux Mint 18.1 machine. When I first tried to execute the script (which I unfortunately cannot include) I received an error message that contained the following:
listen tcp 0.0.0.0:53: bind: address already in use
When I used netstat to find out what was using the port, I discovered that it was dnsmasq. I killed the process (knowing it would break my internet, which it did) and was then able to create the containers. So it appears the only issue is the port conflict.
The guide to the script, and other answers, mention adding nameserver 127.0.0.1. I did that, but it didn't do anything for me. I have read other answers saying that I cannot change the port that dnsmasq uses, and that I also can't change the port of the docker image. Is there some way I can run both of these?
Unless the docker container HAS to listen on port 53, you CAN change the host port by changing the -p option of the docker run command that launches that container.
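For example, if nothing else strictly needs the container's DNS service on the host's port 53, a mapping along these lines would avoid the conflict with dnsmasq (the image name and the port 5353 are just placeholders; whether this works depends on what the rest of the script expects):

docker run -d -p 5353:53/udp -p 5353:53/tcp some-dns-image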
This question appears to have been asked many times, but the answers appear to be outdated or simply don't work.
I'm on a Linux system without an RTC (a Raspberry Pi). My host runs an NTP daemon (ntpd), which checks the time online as soon as the host boots up (assuming it has internet) and sets the system clock.
The code inside my container needs to know if the host's system clock is accurate (has been updated since last boot).
On the host itself, this is very easy to do - use something like ntpdate -q 127.0.0.1. ntpdate connects to 127.0.0.1:123 over udp, and checks with the ntpd daemon if the clock is accurate (if it's been updated since last boot). This appears to be more difficult to do from within a container.
If I start up a container, and use docker container inspect NAME to see the container's IP, it shows me this:
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.6",
If I run ntpdate -q 172.19.0.1 within the container, this works. Unfortunately, 172.19.0.1 isn't a permanent IP for the host. If that subnet is already taken when the container is starting up, the subnet will change, so hardcoding this IP is a bad idea. What I need is an environment variable that always reflects the proper IP for the host.
The Windows and macOS versions of Docker appear to set the host.docker.internal hostname within containers, but Linux doesn't. Some people recommend setting this in the /etc/hosts file of the host, but then you're just hardcoding the IP, which, again, can change.
I run my docker container with a docker-compose.yml file, and apparently, on new versions of docker, you can do this:
extra_hosts:
- "host.docker.internal:host-gateway"
I tried this, and this works. Sort of. Inside my container, host.docker.internal resolves to 172.17.0.1, which is the IP of the docker0 interface on the host. While I can ping host.docker.internal from within the container, using ntpdate -q host.docker.internal or ntpdate -q 172.17.0.1 doesn't work.
Is there a way to make host.docker.internal resolve to the proper gateway IP of the host from within the container? In my example, 172.19.0.1.
Note: Yes, I can use code within the container to check what the container's gateway is with netstat or similar, but then I need to complicate my code, making it figure out the IP of the NTP server (the docker host). I can probably also pass the docker socket into the container and try to get the docker host's IP through that, but that seems super hacky and an unnecessary security issue.
The best solution I've found is to use the ip command from the iproute2 package: look for the default route and use its gateway address.
ip route | awk '/default/ {print $3}'
If you want it in an environment variable, you can set it in an entrypoint script with:
export HOST_IP_ADDRESS=$(ip route | awk '/default/ {print $3}')
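Putting it together, a minimal entrypoint sketch might look like this (assuming iproute2 and ntpdate are installed in the image; the script name and the final exec line are just illustrative):

#!/bin/sh
# entrypoint.sh (example name): resolve the default gateway, i.e. the docker host
export HOST_IP_ADDRESS=$(ip route | awk '/default/ {print $3}')
# ask the host's ntpd whether its clock has been synchronised since boot
ntpdate -q "$HOST_IP_ADDRESS"
# hand off to the container's main process
exec "$@"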
So I want to connect to my mongodb running on my host machine (DO droplet, Ubuntu 16.04). It is running on the default 27017 port on localhost.
I then use mup to deploy my Meteor app on my DO droplet, which is using docker to run my Meteor app inside a container. So far so good.
A standard mongodb://... connection url is used to connect the app to the mongodb.
Now I have the following problem:
mongodb://...#localhost:27017... obviously does not work inside the docker container, as localhost is not the host's localhost.
I already read many stackoverflow posts on this, I already tried using:
--network="host" - did not work as it said 0.0.0.0:80 is already in use or something like that (nginx proxy)
--add-host="local:<MY-DROPLET-INTERNET-IP>" and connect via mongodb://...#local:27017...: also not working as I can access my mongodb only from localhost, not from the public IP
This has to be a common problem!
tl;dr - What is the proper way to expose the host's localhost inside a docker container so I can connect to services running on the host (including their ports, e.g. 27017)?
I hope someone can help!
You can use 172.17.0.1, as it is the default host IP that containers can see. But you need to configure Mongo to listen on 0.0.0.0.
From Docker 18.03 onwards, the recommendation is to connect to the special DNS name host.docker.internal.
For previous versions you can use DNS names docker.for.mac.localhost or docker.for.windows.localhost.
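For example, a connection string along these lines (the credentials and database name are placeholders) should reach the host's mongod from inside the container; on a plain Linux host you may need to use 172.17.0.1 instead, as noted above:

mongodb://user:password@host.docker.internal:27017/mydb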
Change bindIp from 127.0.0.1 to 0.0.0.0 in /etc/mongod.conf. Then it will work.
Or start mongod on Ubuntu with a flag to bind to all IP addresses as a temporary workaround (for dev/learning purposes):
$ mongod --bind_ip_all
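For reference, a sketch of the permanent change described above, assuming the standard YAML-format /etc/mongod.conf:

# /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0   # listen on all interfaces instead of only 127.0.0.1

Restart mongod afterwards (e.g. sudo systemctl restart mongod), and keep in mind that binding to 0.0.0.0 also exposes MongoDB on the public interface, so firewall it accordingly.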
I tried countless variants for Windows (using Docker Desktop), but without any result...
Unfortunately, Windows (at least Docker Desktop) currently does not support --net=host.
Quoted from: https://docs.docker.com/network/network-tutorial-host/#prerequisites
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
You can try using Docker Toolbox instead: https://docs.docker.com/toolbox/
So, I'm trying to get Jenkins working inside of docker as an exercise to get experience using docker. I have a small linux server, running Ubuntu 14.04 in my house (computer I wasn't using for anything else), and have no issues getting the container to start up, and connect to Jenkins over my local network.
My issue comes in when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I will either get nothing if I started the container just with -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting just a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to get it to allow outside connections to go to my containers?
EDIT: I'm also just using the official Jenkins image from Docker Hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Dockerfile used to build the Jenkins image already exposes port 8080.
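If you need the container's IP for the second case, something like this should print it (jenkins here is just whatever name or ID your container has):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' jenkins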
The Docker Site has a great amount of information on container networks.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use the -P or --publish-all=true|false flag for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
If you can connect to Jenkins over the local network from a machine other than the one Docker is running on, but not from outside your local network, then the problem is not Docker. In this case the problem is that whatever device receives the outside connection (normally your router or modem) does not know which machine the request should be forwarded to.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine that is running Docker. This can normally be done on your internet modem/router.
I am running docker on OSX via boot2docker. I am using docker remotely, via the API.
I create several containers from a web server image. Docker assigns a different IP address to each container, like 172.17.0.61. Each web server is running on port 8080.
Inside the VM, I can ping the server at this address.
How can I map these different container IP addresses (from the VM) to the same address in the VM, but on different ports? E.g.
<local.ip>:9001 -> 172.17.0.61:8080
<local.ip>:9002 -> 172.17.0.62:8080
where local.ip may be either the IP from boot2docker or anything else.
A possible solution is to define port bindings when creating the containers and bind each container to a different port. However, I would like to avoid that, since this configuration becomes part of the container and only exists because I'm running on OSX. If I did all of the above on Linux, we would not have this issue.
How to map inner containers to different ports?
Publishing ports is the right solution. You have the same problem whether you're running remotely or locally, just the IP address changes.
For example, say I start the following web servers:
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8001:80 nginx
From inside the VM (run boot2docker ssh), I can then run curl localhost:8000 or curl localhost:8001 to reach the website. This is the normal way of working with Docker on Linux. From the Mac command line, it becomes curl $(boot2docker ip):8000 because of the VM, but we've not done anything different with regards to starting the web servers because of boot2docker.
I have been following the tutorials and experimenting with Docker for a couple of days, but I can't find any "real-world" usage example.
How can I communicate with my container from the outside?
All the examples I can find end up with one or more containers; they can share ports with each other, but no one outside the host gets access to their exposed ports.
Isn't the whole point of having containers like this that at least one of them needs to be accessible from the outside?
I have found a tool called pipework (https://github.com/jpetazzo/pipework) which will probably help me with this. But is this the tool everyone testing out Docker for production is using?
Is a "hack" necessary to get the outside to talk to my container?
You can use the argument -p to expose a port of your container to the host machine.
For example:
sudo docker run -p 80:8080 ubuntu bash
This will bind port 8080 of your container to port 80 of the host machine.
Therefore, you can then access to your container from the outside using the URL of the host:
http://your.domain -> localhost:80 -> container:8080
Is that what you wanted to do? Or maybe I missed something
(The parameter --expose only exposes the port to other containers, not the host.)
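For comparison, a container started only with --expose (the port number here is just an example) is reachable from other containers on the same network, but its port is not published on the host:

sudo docker run --expose 8080 ubuntu bash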
This (https://blog.codecentric.de/en/2014/01/docker-networking-made-simple-3-ways-connect-lxc-containers/) blog post explains the problem and the solution.
Basically, it looks like pipework (https://github.com/jpetazzo/pipework) is the way to expose container ports to the outside as of now... Hope this gets integrated soon.
Update: In this case, iptables was to blame, and there was a rule blocking forwarded traffic. Adding -A FORWARD -i em1 -o docker0 -j ACCEPT solved it.
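For anyone hitting the same thing, the full command corresponding to that rule looks like this (em1 is the external interface on that particular host; substitute your own):

sudo iptables -A FORWARD -i em1 -o docker0 -j ACCEPT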