Is it possible to assign a port mapping to an existing Docker container with iptables on Linux?

Operating system: Ubuntu 16.04 LTS. Here's my problem.
Recently I've been building an application that relies on a Redis data service running in Docker. The customary way of creating a new Redis service looks something like this:
docker pull redis:latest
docker run -d --name redis -p 6379:6379 redis:latest
Assuming my WAN IP is 201.201.201.201, it should then be just fine to access Redis at 201.201.201.201:6379.
However, this approach exposes the Redis server's port to the public network; even if you set a very long password, a potential security hazard remains.
Since Docker does not support changing the port mapping of a running container, I would need to shut down the whole Docker service, which takes a long time and is practically impossible here.
So I'm trying to use iptables rules instead of a Docker port mapping. Thanks to iptables's flexibility, this should in theory give me the benefits of both sides: I can upload data from anywhere else in the world (outside zz), but can also close this access when not needed. Here is what I tried:
iptables -t nat -A PREROUTING -d 172.245.210.199 -p tcp --dport 6379 -j DNAT --to-destination 172.17.0.5:6379
iptables -t nat -A POSTROUTING -d 172.17.0.5 -p tcp --dport 6379 -j SNAT --to 172.17.0.1
But it does not work: I can't even ping the container at 172.17.0.162.
Does anyone know a solution, or could you propose some other way to implement this port mapping (such as nginx or Caddy)?
Thanks!

What I suggest is to use an assistant container: in this container, add a port forward for your service container, which did not publish any ports:
docker run -idt --link redis -p 6379:6379 alpine/socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:redis:6379
The container above uses --link redis so it can resolve the name of your redis container. When traffic arrives on the host's port 6379, it is first forwarded to the assistant container's port 6379, and socat then forwards it on to the redis container's port 6379. So this works even though your service container never published port 6379 itself.
As --link is deprecated, you can also create a user-defined network instead, as you like:
docker network create my_network
docker network connect my_network redis
docker run -idt --network my_network -p 6379:6379 alpine/socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:redis:6379
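A convenient side effect of this approach: public access can be opened and closed just by starting and stopping the relay container, without ever touching the redis container itself. A sketch, using the hypothetical container name redis-relay:
# Start the relay under a fixed name (redis-relay is a made-up name;
# the rest mirrors the command above).
docker run -d --name redis-relay --network my_network -p 6379:6379 \
  alpine/socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:redis:6379
# Close public access again; the redis container keeps running untouched.
docker rm -f redis-relay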

Related

How to block the connection to an IP which is created in docker container on host

There is a docker container running in bridge network mode. Inside the container, it creates a connection to, say, 10.123.123.1:6666. I'd like to block this connection on the host through iptables, with something like sudo iptables -I OUTPUT -p tcp -d 10.123.123.1 -j DROP, but it doesn't work. Could anyone help me with this, please?
I can't even see this connection on the host with netstat -an, but I can see it inside the container.
I don't have to use iptables, but I can't change the configuration of the running Docker container.
These packets are going through INPUT & OUTPUT chains in the container's network namespace, and not in the host's network namespace.
All your host network namespace does is forward these packets so you need to alter the FORWARD chain with a rule similar to iptables -I FORWARD -p tcp -d 10.123.123.1 -j DROP. Bear in mind that Docker alters iptables rules which may punch holes in the firewall.
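Docker's documentation recommends putting custom filtering rules in the DOCKER-USER chain, which is evaluated before Docker's own forwarding rules. A minimal sketch of the same block placed there:
# Drop forwarded traffic from any container to 10.123.123.1.
# Rules in DOCKER-USER are left alone by Docker's own rule management.
iptables -I DOCKER-USER -p tcp -d 10.123.123.1 -j DROP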

What container interface(s) does Docker port mapping (`-p <host-port>:<container-port>`) apply to?

After having read the Docker documentation, I'm not sure what network interface(s) in the Docker container a Docker port mapping like (1) and (2) applies to.
By default a new container is created with two interfaces eth0 and lo, but more can be added.
(1) -p 8080:80 Map TCP port 80 in the container to port 8080 on the Docker host.
(2) -p 192.168.1.100:8080:80 Map TCP port 80 in the container to port 8080 on the Docker host for connections to host IP 192.168.1.100.
(1) is shorthand for -p 0.0.0.0:8080:80.
(2) refers to the network interface on the host with IP address 192.168.1.100 and maps host port 8080 to container port 80, but on which container interface?
Do (1) and (2) map host port 8080 to container port 80 on ALL container interfaces (0.0.0.0)? If so, where can I see that in the documentation, and can this behaviour be restricted to a specific set of container interfaces?
https://docs.docker.com/config/containers/container-networking/
TCP connections have a single starting point (IP + port) and a single end point (IP + port). As such, the port forwarding can only map to one IP address in the container: specifically, the container's interface on the bridge network.
"Can this behavior be restricted to a specific set of container interfaces" doesn't really make sense, then: by the nature of TCP, there's only ever one interface that's being connected to.
If you're worried that things listening on 127.0.0.1 inside the container will become public, then don't; they won't (and can't).
https://pythonspeed.com/articles/docker-connection-refused/ has a bunch of diagrams that might make this clearer.
Do (1) and (2) map host port 8080 to container port 80 on ALL container interfaces (0.0.0.0)?
Not exactly. (1) maps host port 8080 on all host interfaces to container port 80. (2) maps host port 8080 on the host interface with IP address 192.168.1.100 to the container's port 80.
Specifically, Docker does something along these lines (copied from my machine):
iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A DOCKER ! -i <bridge docker interface> -p tcp -m tcp --dport 8080 -j DNAT --to-destination <container-IP-address>
In nat PREROUTING, the traffic for destination port 8080 is thus DNATed to the container's IP address.
I believe you may be assuming that containers have multiple interfaces. A container usually has one interface (plus loopback) and is connected to one Docker network.
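To see the rules Docker actually created for a published port on your own machine, you can list the relevant chains (standard iptables invocations; the chain names match the excerpt above):
# NAT rules (the DNAT to the container IP) for published ports:
sudo iptables -t nat -L DOCKER -n -v
# Matching ACCEPT rules in the filter table:
sudo iptables -L DOCKER -n -v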

How to restrict access from internet to containers ports on remote linux server?

I use docker-compose on Ubuntu 18 on a remote server.
How, with iptables, can I block access to a Docker-published port from the internet and only allow access to it from this server's localhost?
For instance, I want to block port 4150 from the internet. Trying this:
iptables -A DOCKER-USER -p tcp --dport 4150 -j DROP does not block the port; it is still accessible from the internet (not just from the server machine).
How can I block access from the internet to all ports on the server, allowing only 22 and 80, while keeping all the ports available from the server's localhost (i.e. from the server itself)?
Not the iptables-based solution you're looking for, but a much simpler approach is to only publish to a specific interface, instead of all interfaces. And when that interface is the loopback interface, e.g. 127.0.0.1, you'll only be able to access the port locally. To do this, add the interface to the beginning of the publish spec:
docker run -p 127.0.0.1:4150:4150 ...
Or a similar syntax in the compose file:
...
ports:
- 127.0.0.1:4150:4150
...
As for why the command you tried didn't work: the rule needs conntrack to match the original port rather than the Docker-mapped port:
iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 4150 -j DROP
This also changed from -A (append) to -I (insert), because the DOCKER-USER chain ends with a default rule that lets everything through, so an appended rule would never be reached.
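For the broader goal (blocking every published container port from the internet while keeping everything reachable from the server itself), one common pattern is to filter on the external interface in DOCKER-USER. This is only a sketch: it assumes eth0 is the internet-facing interface and that ports 22 and 80 are served by the host itself (those go through the INPUT chain, not FORWARD, so they are unaffected). Connections made from the server's own localhost do not traverse the FORWARD chain either, so they keep working:
# DOCKER-USER ends in a RETURN rule, so insert (-I) instead of append (-A).
# Insert the DROP first; the ACCEPT inserted afterwards ends up above it.
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate NEW -j DROP
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT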

How can I forward localhost port on my container to localhost on my host?

I have a daemon on my host running on some port (e.g. 8008), and my code normally interacts with the daemon by contacting localhost:8008.
I've now containerized my code, but not yet the daemon.
How can I forward localhost:8008 in my container to localhost:8008 on the host running the container (and therefore to the daemon as well)?
The following is netstat -tlnp on my host. I'd like the container to forward localhost:2009 to localhost:2009 on the host:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2009 0.0.0.0:* LISTEN 22547/ssh
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:2009 :::* LISTEN 22547/ssh
So the way to think about this is that Docker containers have their own network stack (unless you explicitly tell them to share the host's stack with --net=host). This means a container port needs to be published on the host before the outside world can reach it (documentation): either explicitly, with -p xxxx:yyyy in your docker run command, or implicitly, by declaring EXPOSE in your Dockerfile and passing -P on the command line, like it says here. Note that EXPOSE on its own is only metadata; it does not publish anything until you use -P, while -p 8008:8008 publishes the port even if the Dockerfile has no EXPOSE 8008 line.
So to get tcp/8008 on the host linked with tcp/8008 in the container, either add EXPOSE 8008 to your Dockerfile (and docker build the image) and run it with -P, or publish the port explicitly with -p 8008:8008 in your docker run command. An example docker run command might look like:
docker run -it --expose 8008 -p 8008:8008 myContainer
It's handy to remember that in the -p 8008:8008 command line option, the order for this operation is -p HOST_PORT:CONTAINER_PORT. Also, don't forget that you won't be able to SSH into your container from another machine on the internet unless you also have this port unblocked in iptables on the host. I always end up forgetting about that and waste half an hour before I remember I forgot to iptables -A INPUT ... for that specific tcp port on the host machine. But you should be able to SSH from your host into the container without the iptables rule, since it uses loopback for local connections. Good luck!
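For reference, a host rule of the shape hinted at above might look like this; a sketch only, where 8008 is the example port and eth0 an assumed name for the external interface:
# Allow inbound TCP connections to port 8008 on the host's external interface.
iptables -A INPUT -i eth0 -p tcp --dport 8008 -j ACCEPT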
TL;DR: You can use the special hostname host.docker.internal instead of localhost anywhere inside the container that you want to access localhost on the host. Note that:
macOS and Windows versions of Docker Desktop have this feature enabled by default.
Linux hosts (using Docker v 20.10 and above - since December 14th 2020) require you to add --add-host=host.docker.internal:host-gateway to your Docker command to enable the feature.
Docker Compose on Linux requires you to add the following lines to the container definition:
extra_hosts:
- "host.docker.internal:host-gateway"
Full answer: Is the host running macOS or Windows? Buried in the documentation for Docker Desktop is the fact that there is no docker0 bridge on macOS and none on Windows. Apparently that's the cause of this. In both cases the workaround (given right after, in a subsection titled "Use cases and workarounds") is to use the special hostname host.docker.internal in place of localhost anywhere inside the container that you want to access localhost on the host.
If the host is Linux, there are some Linux-only techniques for achieving this. However, host.docker.internal is also usable with a Linux host, but it has to be enabled first. See the Linux part of the TL;DR, above, for instructions.
By this method, in the OP's case host.docker.internal:8008 should be used instead of localhost:8008. Note that this is a change to the code or configuration of the application running inside the container. There is no need to mention the port in the container configuration, and do not try to use -p or --expose on the docker run command line. Not only is it unnecessary, but your container will fail to start if the host application you want the container to connect to is already listening on that port.
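A quick way to check this on a Linux host before touching any application code; a sketch that assumes something on the host is listening on port 8008 and uses curlimages/curl only because that image ships curl:
# Should report a successful connection to host.docker.internal if the host port is reachable.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -v --max-time 3 telnet://host.docker.internal:8008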
After checking the answers and doing some investigation, I believe there are two ways of doing this, and both only work in a Linux environment.
The first is in this post: How to access host port from docker container
The second is to set --network=host when you docker run or docker container create. In that case, your container uses the same network interfaces as the host.
However, neither of these can be used on a Mac, so I think it is not possible to forward from the container to the host in a Mac environment. Correct me if I am wrong.
I'm not sure you can do that just with Docker's settings.
If my understanding is correct, exposing a port is not what you are looking for.
Instead, establishing SSH port forwarding from the container to the host might be the answer.
You can easily use -p 127.0.0.1:8008:8008 and forward the container's port 8008 to localhost's port 8008. An example docker command would be:
docker run -it -p 127.0.0.1:8008:8008 <image name>
If you're doing this on your local machine, you can simply specify the network type as host when starting your container (--network host), which will make your host machine share its network stack with your Docker container.
eg:
Start your container:
docker run -it --rm --network host <container>
On your host machine, run:
python3 -m http.server 8822
Now from your container run:
curl 127.0.0.1:8822
If all went well you'll see traffic on your host terminal.
Serving HTTP on 0.0.0.0 port 8822 (http://0.0.0.0:8822/) ...
127.0.0.1 - - [24/Jan/2023 22:37:01] "GET / HTTP/1.1" 200 -
docker run -d --name <NAME OF YOUR CONTAINER> -p 8008:8008 <YOUR IMAGE>
That will publish port 8008 in your container to 8008 on your host.

Forward host port to docker container

Is it possible to have a Docker container access ports opened by the host? Concretely I have MongoDB and RabbitMQ running on the host and I'd like to run a process in a Docker container to listen to the queue and (optionally) write to the database.
I know I can forward a port from the container to the host (via the -p option) and have a connection to the outside world (i.e. internet) from within the Docker container but I'd like to not expose the RabbitMQ and MongoDB ports from the host to the outside world.
EDIT: some clarification:
Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-22 22:39 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00027s latency).
PORT STATE SERVICE
6311/tcp open unknown
joelkuiper@vps20528 ~ % docker run -i -t base /bin/bash
root@f043b4b235a7:/# apt-get install nmap
root@f043b4b235a7:/# nmap 172.16.42.1 -p 6311 # IP found via docker inspect -> gateway
Starting Nmap 6.00 ( http://nmap.org ) at 2013-07-22 20:43 UTC
Nmap scan report for 172.16.42.1
Host is up (0.000060s latency).
PORT STATE SERVICE
6311/tcp filtered unknown
MAC Address: E2:69:9C:11:42:65 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 13.31 seconds
I had to do this trick to get any internet connection within the container: My firewall is blocking network connections from the docker container to outside
EDIT: Eventually I went with creating a custom bridge using pipework and having the services listen on the bridge IPs. I went with this approach instead of having MongoDB and RabbitMQ listen on the docker bridge because it gives more flexibility.
A simple but relatively insecure way would be to use the --net=host option to docker run.
This option makes it so that the container uses the networking stack of the host. Then you can connect to services running on the host simply by using "localhost" as the hostname.
This is easier to configure because you won't have to configure the service to accept connections from the IP address of your docker container, and you won't have to tell the docker container a specific IP address or host name to connect to, just a port.
For example, you can test it out by running the following command, which assumes your image is called my_image, your image includes the telnet utility, and the service you want to connect to is on port 25:
docker run --rm -i -t --net=host my_image telnet localhost 25
If you consider doing it this way, please see the caution about security on this page:
https://docs.docker.com/articles/networking/
It says:
--net=host -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick ip addr command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack — that would require --privileged=true — but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.
Your docker host exposes an adapter to all the containers. Assuming you are on recent ubuntu, you can run
ip addr
This will give you a list of network adapters, one of which will look something like
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 22:23:6b:28:6b:e0 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::a402:65ff:fe86:bba6/64 scope link
valid_lft forever preferred_lft forever
You will need to tell rabbit/mongo to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers.
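For example, to make MongoDB listen on the docker0 address as well as loopback, you could start mongod like this; a sketch where 172.17.42.1 is the address from the ip addr output above and will differ on your machine:
# Keep localhost access and additionally listen on the docker bridge address.
mongod --bind_ip 127.0.0.1,172.17.42.1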
As stated in one of the comments, this works for Mac (probably for Windows/Linux too):
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
You can also reach the gateway using gateway.docker.internal.
Quoted from https://docs.docker.com/docker-for-mac/networking/
This worked for me without using --net=host.
You could also create an ssh tunnel.
docker-compose.yml:
---
version: '2'
services:
  kibana:
    image: "kibana:4.5.1"
    links:
      - elasticsearch
    volumes:
      - ./config/kibana:/opt/kibana/config:ro
  elasticsearch:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.tunnel
    entrypoint: ssh
    command: "-N elasticsearch -L 0.0.0.0:9200:localhost:9200"
docker/Dockerfile.tunnel:
FROM buildpack-deps:jessie

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install ssh && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY ./config/ssh/id_rsa /root/.ssh/id_rsa
COPY ./config/ssh/config /root/.ssh/config
COPY ./config/ssh/known_hosts /root/.ssh/known_hosts

RUN chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/config && \
    chown $USER:$USER -R /root/.ssh
config/ssh/config:
# Elasticsearch Server
Host elasticsearch
    HostName jump.host.czerasz.com
    User czerasz
    ForwardAgent yes
    IdentityFile ~/.ssh/id_rsa
This way the elasticsearch service holds an SSH tunnel to the server running the actual service (Elasticsearch, MongoDB, PostgreSQL) and exposes port 9200 for that service.
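Any other container on the same Compose network can then reach the remote service through the tunnel container by name, for example (assuming curl is installed in that container):
curl http://elasticsearch:9200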
TLDR;
For local development only, do the following:
Start the service or SSH tunnel on your laptop/computer/PC/Mac.
Build/run your Docker image/container to connect to hostname host.docker.internal:<hostPort>
Note: There is also gateway.docker.internal, which I have not tried.
END_TLDR;
For example, if you were using this in your container:
PGPASSWORD=password psql -h localhost -p 5432 -d mydb -U myuser
change it to this:
PGPASSWORD=password psql -h host.docker.internal -p 5432 -d mydb -U myuser
This magically connects to the service running on my host machine. You do not need to use --net=host, -p "hostPort:containerPort", or -P.
Background
For details see: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
I used this with an SSH tunnel to an AWS RDS Postgres Instance on Windows 10. I only had to change from using localhost:containerPort in the container to host.docker.internal:hostPort.
I had a similar problem accessing a LDAP-Server from a docker container.
I set a fixed IP for the container and added a firewall rule.
docker-compose.yml:
version: '2'
services:
  containerName:
    image: dockerImageName:latest
    extra_hosts:
      - "dockerhost:192.168.50.1"
    networks:
      my_net:
        ipv4_address: 192.168.50.2

networks:
  my_net:
    ipam:
      config:
        - subnet: 192.168.50.0/24
iptables rule:
iptables -A INPUT -j ACCEPT -p tcp -s 192.168.50.2 -d 192.168.50.1 --dport portnumberOnHost
Inside the container, access the host via dockerhost:portnumberOnHost.
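A quick connectivity check from inside the container could look like this; 389 is just the standard LDAP port used as a stand-in for portnumberOnHost, and it assumes a netcat build that supports -z:
nc -vz dockerhost 389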
If MongoDB and RabbitMQ are running on the host, then their ports are already reachable, since they are not inside Docker.
You do not need the -p option in order to reach host ports from the container. By default, all host ports are reachable from the container. The -p option is for exposing a port from the container to the outside of the host.
So my guess is that you do not need -p at all and it should be working fine :)
For newer versions of Docker, this worked for me. Create the tunnel like this (notice the 0.0.0.0 at the start):
-L 0.0.0.0:8080:localhost:8081
This will allow anyone with access to your computer to connect to port 8080 and thus access port 8081 on the connected server.
Then, inside the container just use "host.docker.internal", for example:
curl host.docker.internal:8081
Why not use a slightly different solution, like this?
services:
  kubefwd:
    image: txn2/kubefwd
    command: ...
  app:
    image: bash
    command:
      - sleep
      - inf
    init: true
    network_mode: service:kubefwd
REF: txn2/kubefwd: Bulk port forwarding Kubernetes services for local development.
The easier way on all platforms nowadays is to use host.docker.internal. Let's first start with the docker run command:
docker run --add-host=host.docker.internal:host-gateway [....]
Or add the following to your service, when using Docker Compose:
extra_hosts:
- "host.docker.internal:host-gateway"
A full example of such a Docker Compose file would then look like this:
version: "3"
services:
your_service:
image: username/docker_image_name
restart: always
networks:
- your_bridge_network
volumes:
- /home/user/test.json:/app/test.json
ports:
- "8080:80"
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
your_bridge_network:
Again, it's just an example, but if this Docker image starts a service on port 80, it will be available on the host on port 8080.
More importantly for your use case: if the Docker container wants to use a service from your host system, that is now possible using the special host.docker.internal name. That name is automatically resolved to the internal Docker IP address (of the docker0 interface).
Anyway, let's say you are also running a web service on your host machine on port 80. You should now be able to reach that service from within your Docker container. Try it out: nc -vz host.docker.internal 80.
All WITHOUT using network_mode: "host".
