Docker Compose: connect from Docker container to Linux host

In "naked" Docker, I can use --network=host to make the host reachable from the container through the docker0 interface at 127.0.0.1, or I can reach the host at 172.17.0.1.
Ostensibly docker-compose supports the host networking driver through network_mode: host. Unfortunately, this prevents me from using exposed ports. Containers created using docker-compose don't live on the docker0 interface, and hence can reach the host at some IP address like 172.18.0.1 or 172.19.0.1, decided at startup.
Even once I know that address, it doesn't seem to be reachable from the container.
How can I connect from the container to the host?
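For reference, one way to see which gateway address a Compose project's network was given; the network name below is only a guess at Compose's usual <project>_default naming:
docker network inspect myproject_default -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
# prints something like 172.18.0.1 -- the host's address on that Compose network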

Related

What's the difference between host mode and bridge mode?

When I use host mode to run my container, it doesn't work, but bridge mode works. I'd like to ask: what's the difference between the two modes?
Run with host mode:
docker run --name=zhiwenyi --net=host -d [image]
Run with bridge mode:
docker run --name=zhiwenyi -d -p 35229:35229 [image]
Dockerfile:
FROM java:8
VOLUME /data/log/upload
COPY target/upload_V6_20220722.jar upload.jar
EXPOSE 35229
RUN bash -c "touch /upload.jar"
ENTRYPOINT ["java","-Xmx512m","-Xms512m","-jar","-Duser.timezone=GMT+08","-Dfile.encoding=utf-8","upload.jar"]
In bridge mode, I send a POST request to myip:35229/path and it works well.
In host mode, the same request gives me a connection timeout.
Environment:
OS: CentOS 7.9
Docker: 20.10.17
If you use host mode, the container does not get its own network namespace; it shares the host's.
A bridge network in the context of Docker employs a software bridge to provide isolation from containers that are not connected to that bridge network, while enabling communication between containers that are connected to the same bridge network.
The network stack of a container that uses the host network driver is not separated from the Docker host. For instance, if you use host networking and run a container that binds to port 80, the application in the container will be accessible on port 80 on the host's IP address.
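To make the contrast concrete, here is a hedged sketch (the image name upload-image is a placeholder for the image built from the Dockerfile above):
# bridge mode: the container has its own network namespace, so the port must be published
docker run -d --name upload-bridge -p 35229:35229 upload-image
# host mode: -p and EXPOSE are ignored; the app binds directly to host port 35229
docker run -d --name upload-host --net=host upload-image
# on the host, the java process should now show up as a listener on 35229
ss -tlnp | grep 35229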

Bind docker container loopback to the host loopback

I pull a docker image (will use python 3 as an example)
docker pull python:3.6
Then I launch a docker container
docker run -it -p 127.0.0.1:8000:8000 python:3.6 bash
(note that the 127.0.0.1 in 127.0.0.1:8000:8000 specifies the host-side address the published port binds to, not the address inside the container)
So if I launch a server inside the container at 0.0.0.0:
python -m http.server 8000 --bind 0.0.0.0
then I can access the container's server from the host machine without any problem by going to http://127.0.0.1:8000 on the host
However if my docker server binds to 127.0.0.1 instead of 0.0.0.0:
python -m http.server 8000 --bind 127.0.0.1
then accessing http://127.0.0.1:8000 from the host does not work.
What's the proper way of binding the container's loopback 127.0.0.1 to the host loopback?
What's the proper way of binding the container's loopback 127.0.0.1 to the host loopback?
On Linux, this can be done by configuring your Docker container to use the host's network namespace, i.e.:
docker run --network=host
This only works on Linux because on Linux, your machine is the host, and the containers run as containers in your machine's OS. On Windows/OSX, the Docker host runs as a virtual machine, with the containers running in the virtual machine, and so they can't share your machine's network namespace.
What's the proper way of binding the container's loopback 127.0.0.1 to the host loopback?
You can't do that. The loopback interface inside a container means "only this container", just as on the host it means "only this host". If a service is binding to 127.0.0.1, then there is no way -- from your host or from another container -- to reach that service.
The only way to do what you want is either:
Modify the application configuration to listen on all interfaces (or eth0 specifically), or
Run a proxy inside your container that binds to all interfaces and forwards connections to the localhost address (see the sketch below).
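A minimal sketch of the proxy option using socat (assuming socat is installed in the image and the service only listens on 127.0.0.1:8000 inside the container):
# inside the container: accept connections on all interfaces on port 8080
# and relay them to the loopback-only service on 127.0.0.1:8000
socat TCP-LISTEN:8080,fork,reuseaddr TCP:127.0.0.1:8000 &
You would then publish the relay port instead, e.g. docker run -p 8000:8080 ..., so host traffic reaches socat, which hands it to the loopback-bound server.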

What does localhost mean inside a Docker container?

Say, if I use this command inside a docker container.
/opt/lampp/bin/mysql -h localhost -u root -pThePassword
What would the localhost here refer to? The host machine's IP or the docker container's own IP?
From inside a container, localhost always refers to the current container. It never refers to another container, and it never refers to anything else running on your physical system that's not in the same container. It's not usually useful to make outbound connections to localhost or configure localhost as your database host.
From a shell on your host system, localhost could refer to daemons running on your system outside Docker, or to ports you've published with docker run -p options.
From a different system, localhost refers to the system it's called from.
In terms of IP addresses, localhost is always 127.0.0.1; that IP address is special, always means localhost, and behaves the same way as described above.
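A quick way to see this (the port and image are just examples): even if something on the host is listening on port 8008, localhost inside a fresh container has nothing on that port:
# fails with "connection refused" -- localhost here is the container, not the host
docker run --rm curlimages/curl -sS http://localhost:8008/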
If you want to make a connection to a container...
...from another container, the best way is to make sure they're on the same Docker network (you started them from the same Docker Compose YAML file, or you did a docker network create and then docker run --net ... on the same network) and use Docker's internal DNS service to refer to them by the container's --name or its name in the Docker Compose YAML file, plus the port number inside the container. Even if the target has a published port with a docker run -p option or Docker Compose ports: setting, use the second (container-internal) port number. (A short sketch appears below, after these notes.)
...from outside Docker space, make sure you started the container with a docker run -p or Docker Compose ports: option, and connect to the host's IP address or DNS name using the first port number from that option.
...from a terminal window or browser on the same physical host, not in a container, in this case and in this case only, localhost will work consistently.
Except:
If you started a container with --net host, localhost refers to the physical host, and you're in the "terminal window on the same physical host" scenario.
If you've gone out of your way to have multiple servers in the same container, you can use localhost to communicate between them.
If you're running in Kubernetes, and you have multiple containers in the same pod, you can use localhost to communicate between them. Between pods, you should set up a service in front of each pod/deployment, and use DNS names of the form service-name.namespace-name.svc.cluster.local.
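As a concrete sketch of the "same Docker network, container name, container-internal port" rule (the names appnet and web are made up for the example):
docker network create appnet
docker run -d --name web --net appnet -p 8080:80 nginx
# from another container on the same network, use the name "web" and the
# container-internal port 80, not the published port 8080
docker run --rm --net appnet curlimages/curl -sS http://web:80/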
It will definitely be your container's localhost, if you are running the command in the container.
/opt/lampp/bin/mysql -h localhost -u root -pThePassword
If you run this command inside the container, it will try to connect to the MySQL server running inside the container.

Run dnsmasq as DHCP server from inside a Docker container

I'm trying to get dnsmasq to operate as a DHCP server inside a Docker container, issuing DHCP addresses to machines on the host's physical network. I'm using the Alpine Linux 6MB container from https://hub.docker.com/r/andyshinn/dnsmasq/.
It works fine as a DNS server on port 53 on the host machine; however, there is nothing listening on port 67/udp, which is where I'm expecting DHCP to be. I use dhcping 192.168.2.2, but get "no answer". telnet 192.168.2.2 67 returns "Connection refused".
My dnsmasq.conf file in the container looks like this:
interface=eth0
user=root
domain-needed
bogus-priv
no-resolv
local=/mydomain.io/
no-poll
server=8.8.8.8
server=8.8.4.4
no-hosts
addn-hosts=/etc/dnsmasq_static_hosts.conf
expand-hosts
domain=mydomain.io
dhcp-range=192.168.2.10,192.168.2.250,255.255.255.0,192.168.2.255,5m
# Have windows machine release on shutdown
dhcp-option=vendor:MSFT,2,1i
# No default route
dhcp-option=3
The host machine has a static address of 192.168.2.2.
I start the container like this:
docker run -d --name dns -p 192.168.2.2:67:67/udp -p 192.168.2.2:53:53/udp sitapati/dns
There is no firewall on this machine, which is running Ubuntu 16.04.
Things I've thought of/tried:
is it because eth0 in the container has an address on a completely different subnet? (docker inspect tells me it's 172.17.0.2 on the bridged interface)
does it need to use --net host? I tried that, and it still didn't work.
Yes, the container will have its own interfaces on a virtual subnet (the docker0 bridge network). So it will be trying to offer addresses on that subnet.
Using --net host worked for me; I got the DHCP server working using something like the following command:
docker run --name dnsmasq2 -t -v /vagrant/dnsmasq.conf:/opt/dnsmasq.conf -p 67:67/udp --net host centos
--net host ensures that the container uses the host's networking stack rather than its own.
I also needed to add the --dhcp-broadcast flag to dnsmasq within the container to get it to actually broadcast DHCPOFFER messages on the network. For some reason, dnsmasq was trying to unicast the DHCPOFFER messages, and it was using ARP to try to reach an address that had not yet been assigned. The full dnsmasq invocation inside the container was:
dnsmasq -q -d --conf-file=/opt/dnsmasq.conf --dhcp-broadcast

How can I forward localhost port on my container to localhost on my host?

I have a daemon on my host running on some port (i.e. 8008) and my code normally interacts with the daemon by contacting localhost:8008 for instance.
I've now containerized my code but not yet the daemon.
How can I forward localhost:8008 in my container to localhost:8008 on the host running the container (and therefore reach the daemon as well)?
The following is netstat -tlnp on my host. I'd like the container to forward localhost:2009 to localhost:2009 on the host
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2009 0.0.0.0:* LISTEN 22547/ssh
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:2009 :::* LISTEN 22547/ssh
So the way you need to think about this is that Docker containers have their own network stack (unless you explicitly tell them to share the host's stack with --net=host). This means a container port has to be published to the host before anything outside the container can reach it (documentation). EXPOSE in your Dockerfile (or --expose on the docker run command line) is essentially metadata: it records which ports the container is expected to listen on, and it is what -P uses to decide which ports to publish automatically. The actual binding between a host port and a container port is made either explicitly with -p xxxx:yyyy in your docker run command, or implicitly with EXPOSE plus -P; -p works whether or not the port was EXPOSEd.
So to get tcp/8008 on the host linked with tcp/8008 on the container, you can either put EXPOSE 8008 in your Dockerfile (and then docker build your container) or pass --expose 8008 to docker run and publish with -P, or you can publish it directly with -p 8008:8008. An example docker run command to do this might look like:
docker run -it --expose 8008 -p 8008:8008 myContainer
It's handy to remember that in the -p 8008:8008 command line option, the order for this operation is -p HOST_PORT:CONTAINER_PORT. Also, don't forget that you won't be able to SSH into your container from another machine on the internet unless you also have this port unblocked in iptables on the host. I always end up forgetting about that and waste half an hour before I remember I forgot to iptables -A INPUT ... for that specific tcp port on the host machine. But you should be able to SSH from your host into the container without the iptables rule, since it uses loopback for local connections. Good luck!
TL;DR: You can use the special hostname host.docker.internal instead of localhost anywhere inside the container that you want to access localhost on the host. Note that:
macOS and Windows versions of Docker Desktop have this feature enabled by default.
Linux hosts (using Docker v 20.10 and above - since December 14th 2020) require you to add --add-host=host.docker.internal:host-gateway to your Docker command to enable the feature.
Docker Compose on Linux requires you to add the following lines to the container definition:
extra_hosts:
  - "host.docker.internal:host-gateway"
Full answer: Is the host running MacOS or Windows? Buried in the documentation for Docker Desktop is the fact that there is no docker0 bridge on MacOS and there is no docker0 bridge on Windows. Apparently that's the cause of this. In both cases the workaround (given right after, in a subsection titled "Use cases and workarounds") is to use the special hostname host.docker.internal in place of localhost anywhere inside the container that you want to access localhost on the host.
If the host is Linux, there are some Linux-only techniques for achieving this. However, host.docker.internal is also useable with a Linux host, but it has to be enabled first. See the Linux part of the TL;DR, above, for instructions.
By this method, in OP's case host.docker.internal:8008 should be used instead of localhost:8008. Note that this is a change to the code or configuration of the application running inside the container. There is no need to mention the port in the container configuration. Do not try to use -p or --expose on the docker run command line. Not only is it not necessary, but your container will fail to start if the host application you want the container to connect to is already listening on that port.
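A hedged end-to-end sketch on a Linux host, assuming something on the host is already listening on localhost:8008:
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl -sS http://host.docker.internal:8008/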
After checking the answers and doing some investigation, I believe there are two ways of doing this, and both only work in a Linux environment.
The first is in this post: How to access host port from docker container
The second is to set --network=host when you docker run or docker container create. In this case, your container will use the same network interfaces as the host.
However, neither of these works on a Mac, so I think it is not possible to forward from the container to the host in a Mac environment. Correct me if I am wrong.
I'm not sure if you can do that just with Docker's settings.
If my understanding is correct, exposing a port is not what you are looking for.
Instead, establishing SSH port forwarding from the container to the host might be the answer.
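A rough sketch of that idea, assuming an SSH server is running on the host, an ssh client exists in the container, and the host is reachable at the default bridge gateway 172.17.0.1 (all assumptions):
# inside the container: listen on the container's localhost:8008 and tunnel
# each connection to 127.0.0.1:8008 as seen by the host
ssh -f -N -L 127.0.0.1:8008:127.0.0.1:8008 user@172.17.0.1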
You can easily use -p 127.0.0.1:8008:8008 and forward the container's port 8008 to localhost's port 8008. An example docker command would be:
docker run -it -p 127.0.0.1:8008:8008 <image name>
If you're doing this on your local machine, you can simply specify the network type as host when starting your container (--network host), which will make your host machine share its network with your Docker container.
E.g.:
Start your container:
docker run -it --rm --network host <container>
On your host machine, run:
python3 -m http.server 8822
Now from your container run:
curl 127.0.0.1:8822
If all went well, you'll see the request logged in your host terminal:
Serving HTTP on 0.0.0.0 port 8822 (http://0.0.0.0:8822/) ...
127.0.0.1 - - [24/Jan/2023 22:37:01] "GET / HTTP/1.1" 200 -
docker run -d --name <NAME OF YOUR CONTAINER> -p 8008:8008 <YOUR IMAGE>
That will publish port 8008 in your container to 8008 on your host.
