Is there a way to configure Docker's embedded DNS server's upstream nameserver's port?

General context
The Docker daemon comes with an embedded DNS server. It resolves local Docker swarm and network records and forwards queries for external records to an upstream nameserver configured with --dns.
The docs say you can set an IP address for this upstream nameserver with --dns=[IP_ADDRESS...]. The default port used is 53.
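For example (per the docs, and as far as I know the per-container flag and the daemon-wide daemon.json key below are equivalent ways to set this; neither takes a port):
$ docker run --rm --dns 10.99.0.1 busybox nslookup accounts.google.com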
My question
Can I configure the port used as well?
My host's /etc/docker/daemon.json contains "dns": ["10.99.0.1"]. Is there a way for me to specify something like "dns": ["10.99.0.1:53"] so that dockerd always knows to forward DNS queries to port 53?
My use case
In my case, 10.99.0.1 is the IP of a localhost bridge interface. I run a local DNS caching server on this host, so DNS queries sent to 10.99.0.1:53 work. But dockerd appears to forward queries originating from containers connected to user-defined bridge networks (created with docker network create) to non-standard ports that it picks. See the terminal output below.
Detailed terminal output and debugging info
"toogle" is a Docker container connected to a Docker network I created with docker network create. 127.0.0.11 is another loopback address. DNS queries originating from within Docker containers connected to user-defined Docker networks are destined for this IP.
Is Docker's embedded DNS server actually running?
DNS queries are routed by toogle's firewall rules this way.
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 1 packets, 60 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 1 packets, 60 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- * * 0.0.0.0/0 127.0.0.11
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- * * 0.0.0.0/0 127.0.0.11
Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 127.0.0.11 tcp dpt:53 to:127.0.0.11:37619 <-- look at this rule
0 0 DNAT udp -- * * 0.0.0.0/0 127.0.0.11 udp dpt:53 to:127.0.0.11:58552 <-- look at this rule
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- * * 127.0.0.11 0.0.0.0/0 tcp spt:37619 to::53
0 0 SNAT udp -- * * 127.0.0.11 0.0.0.0/0 udp spt:58552 to::53
Whatever's listening on those ports is accepting TCP and UDP connections
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) nc 127.0.0.11 37619 -vz
127.0.0.11: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [127.0.0.11] 37619 (?) open
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) nc 127.0.0.11 58552 -vzu
127.0.0.11: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [127.0.0.11] 58552 (?) open
But there's no DNS reply from either
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) dig @127.0.0.11 -p 58552 accounts.google.com
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @127.0.0.11 -p 58552 accounts.google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) dig @127.0.0.11 -p 37619 accounts.google.com +tcp
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @127.0.0.11 -p 37619 accounts.google.com +tcp
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
dockerd is listening for DNS queries at that IP and port from within toogle.
$ sudo nsenter -n -p -t $(docker inspect --format {{.State.Pid}} toogle) ss -utnlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 127.0.0.11:58552 0.0.0.0:* users:(("dockerd",pid=10984,fd=38))
tcp LISTEN 0 128 127.0.0.11:37619 0.0.0.0:* users:(("dockerd",pid=10984,fd=40))
tcp LISTEN 0 128 *:80 *:* users:(("toogle",pid=12150,fd=3))
But dockerd is trying to forward the DNS query to 10.99.0.1, which is my docker0 bridge network interface.
$ sudo journalctl --follow -u docker
-- Logs begin at Tue 2019-11-05 18:17:27 UTC. --
Apr 22 15:43:12 my-host dockerd[10984]: time="2021-04-22T15:43:12.496979903Z" level=debug msg="[resolver] read from DNS server failed, read udp 172.20.0.127:37928->10.99.0.1:53: i/o timeout"
Apr 22 15:43:13 my-host dockerd[10984]: time="2021-04-22T15:43:13.496539033Z" level=debug msg="Name To resolve: accounts.google.com."
Apr 22 15:43:13 my-host dockerd[10984]: time="2021-04-22T15:43:13.496958664Z" level=debug msg="[resolver] query accounts.google.com. (A) from 172.20.0.127:51642, forwarding to udp:10.99.0.1"
So it looks like dockerd takes the DNS query addressed to 127.0.0.11:58552, changes only the IP and not the port, and forwards it to 10.99.0.1:58552, where nothing is listening.
$ dig @10.99.0.1 -p 58552 accounts.google.com
[NO RESPONSE]
$ nc 10.99.0.1 58552 -vz
10.99.0.1: inverse host lookup failed: Unknown host
(UNKNOWN) [10.99.0.1] 58552 (?) : Connection refused
A DNS query to 10.99.0.1:53 works as expected.
$ dig @10.99.0.1 -p 53 accounts.google.com
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @10.99.0.1 -p 53 accounts.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53674
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;accounts.google.com. IN A
;; ANSWER SECTION:
accounts.google.com. 235 IN A 142.250.1.84
;; Query time: 0 msec
;; SERVER: 10.99.0.1#53(10.99.0.1)
;; WHEN: Thu Apr 22 17:20:09 UTC 2021
;; MSG SIZE rcvd: 64

I don't think there's a way to do this. I also misread my own debugging output: the Docker daemon was forwarding to port 53 all along.
read udp 172.20.0.127:37928->10.99.0.1:53: i/o timeout
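If someone really does need dockerd's embedded resolver to reach an upstream on a non-standard port, the only workaround I can think of is to keep "dns" pointing at the resolver's IP and rewrite the port on the host with NAT, similar in spirit to the REDIRECT rules in the Consul question linked below. A rough sketch, untested, with 5353 standing in for the real port:
# on the Docker host: queries from containers arrive on the bridge destined to 10.99.0.1:53,
# so rewrite only the destination port to wherever the resolver actually listens
sudo iptables -t nat -A PREROUTING -d 10.99.0.1 -p udp --dport 53 -j DNAT --to-destination 10.99.0.1:5353
sudo iptables -t nat -A PREROUTING -d 10.99.0.1 -p tcp --dport 53 -j DNAT --to-destination 10.99.0.1:5353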

Related

DNS forwarding using systemd-resolved in a consul cluster setup

I am exploring Consul for service discovery. The following is my Consul cluster setup: 3 Consul server machines and 2 Consul client machines, all running CentOS 7.
consulserver-01 - 192.168.30.112
consulserver-02 - 192.168.30.113
consulserver-03 - 192.168.30.114
consulclient-01 - 192.168.30.115
consulclient-02 - 192.168.30.116
I have successfully registered 3 .NET Core services named service1, service2 and service3 in this cluster; they are running on the 192.168.30.116 node. Now I want to resolve these services using DNS. When I run dig @192.168.30.112 -p 8600 service1.service.consul SRV it successfully resolves. Now I want to use the systemd-resolved service to resolve it automatically. For this I have installed the systemd-resolved package on my consul client, i.e. the 192.168.30.116 machine, and added the following entries to /etc/systemd/resolved.conf:
DNS=192.168.30.112
Domains=~consul
as per https://learn.hashicorp.com/tutorials/consul/dns-forwarding. I have also added the following iptables rules on my consul client, i.e. 192.168.30.116:
iptables -t nat -A OUTPUT -d 192.168.30.112 -p udp -m udp --dport 53 -j REDIRECT --to-ports 8600
iptables -t nat -A OUTPUT -d 192.168.30.112 -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 8600
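One check I would run at this point (a sketch; it assumes dig is available on the client) is to query port 53 directly, since the OUTPUT rules above should now catch that traffic, and to watch the rule counters to confirm they actually match:
dig @192.168.30.112 service1.service.consul SRV   # no -p 8600; should be caught by the REDIRECT rules
sudo iptables -t nat -L OUTPUT -nv                # the pkts counters on the two rules should increase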
Now I expect that when I run ping service1.service.consul on 192.168.30.116 it should resolve to the proper IP address, i.e. 192.168.30.116.
My response for dig @192.168.30.112 -p 8600 service1.service.consul SRV is as follows
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 <<>> @192.168.30.112 -p 8600 service1.service.consul SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9837
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;service1.service.consul. IN SRV
;; ANSWER SECTION:
service1.service.consul. 0 IN SRV 1 1 5555 vmdev0090.node.dc1.consul.
;; ADDITIONAL SECTION:
vmdev0090.node.dc1.consul. 0 IN A 192.168.30.116
vmdev0090.node.dc1.consul. 0 IN TXT "consul-network-segment="
;; Query time: 0 msec
;; SERVER: 192.168.30.112#8600(192.168.30.112)
;; WHEN: Thu Jul 15 11:22:25 IST 2021
;; MSG SIZE rcvd: 149
I don't know what I am doing wrong or whether my understanding is wrong. Please help in this regard.

Netcat server and intermittent UDP datagram loss

The client on enp4s0 (192.168.0.77) is continuously sending short text messages to 192.168.0.1:6060. The server on 192.168.0.1 listens on port 6060 via nc -ul4 --recv-only 6060.
A ping (ping -s 1400 192.168.0.77) from server to client works fine. Wireshark is running on 192.168.0.1 (enp4s0) and shows that all datagrams arrive correctly. There are no packets missing.
But netcat (as well as a simple UDP server) receives the datagrams only sporadically.
Any idea what's going wrong?
System configuration:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# ip route
default via 192.168.77.1 dev enp0s25 proto dhcp metric 100
default via 192.168.0.1 dev enp4s0 proto static metric 101
192.168.0.0/24 dev enp4s0 proto kernel scope link src 192.168.0.1 metric 101
192.168.77.0/24 dev enp0s25 proto kernel scope link src 192.168.77.25 metric 100
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1
# uname -a
Linux nadhh 5.1.20-300.fc30.x86_64 #1 SMP Fri Jul 26 15:03:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
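Given the two default routes in the ip route output above, two things worth checking on the server (a sketch; the rp_filter angle is only a guess): whether strict reverse-path filtering is silently discarding the datagrams, and whether the kernel's UDP counters show receive errors while the client is sending:
# 1 = strict reverse-path filtering, which can drop packets when routing is asymmetric
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.enp4s0.rp_filter
# look for growing "packet receive errors" / "receive buffer errors" under the Udp: section
netstat -su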

Nginx accessible on private IP but not on public IP

I'm creating resources using Terraform and I have a node in the subnet 172.1.0.0 with no security group or rules assigned to it; the node has two endpoints, 22 and 80.
I used nmap to confirm that the ports are open and they are:
nmap -Pn -sT -p T:22 some-ingress.cloudapp.net
Starting Nmap 6.47 ( http://nmap.org ) at 2016-07-08 19:02 EAT
Nmap scan report for some-ingress.cloudapp.net (1.2.3.4)
Host is up (0.31s latency).
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 1.34 seconds
nmap -Pn -sT -p T:80 some-ingress.cloudapp.net
Starting Nmap 6.47 ( http://nmap.org ) at 2016-07-08 19:02 EAT
Nmap scan report for some-ingress.cloudapp.net (1.2.3.4)
Host is up (0.036s latency).
PORT STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds
I've installed nginx on the node, and running curl 172.1.0.4, curl 127.0.0.1, or curl 0.0.0.0 gets me the default nginx page.
However, running curl <public ip> or curl <dns name> hangs and I get:
curl: (56) Recv failure: Connection reset by peer
My iptables rules are:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
ACCEPT udp -- anywhere anywhere udp dpt:bootps
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Why can't I access nginx at the node's public IP?
The node is in a virtual network 172.1.0.0/16 and a subnet 172.1.0.0/24. I can post the Terraform configs if needed.
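Two checks that might narrow this down (a sketch; the interface name eth0 is an assumption): confirm what nginx is actually bound to, and capture traffic on the node while retrying curl <public ip> from outside to see whether those packets reach the node at all:
sudo ss -tlnp | grep ':80'            # nginx should be listening on 0.0.0.0:80 or the private IP
sudo tcpdump -ni eth0 'tcp port 80'   # run while the external curl is retried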

How do I configure Docker to work with my ens34 network interface (instead of eth0)?

Does anyone know how Docker decides which NIC will work with the docker0 network? I have a node with two interfaces (eth0 and ens34); however, only requests that go through eth0 are forwarded to the container.
When my VM was provisioned and Docker was installed, I started a very silly test: I created a CentOS VM, installed netcat on it and committed the image. Then I started a daemon container listening on port 8080. I used:
docker run -it -p 8080:8080 --name nc-server nc-server nc -vv -l 8080
Then I tried to connect to the container listening on port 8080 from another node on the same network as the ens34 interface's IP address. It did not work.
But when I sent a request from another machine to the IP address of eth0, I saw a reaction in the container (the communication worked). I was "tailing" its output with:
docker logs -ft nc-server
My conclusion from this experiment: there's some mysterious relationship between eth0 (the primary NIC) and docker0, and requests sent to the ens34 (10.*) interface are never forwarded to the veth / docker0 interfaces; only requests that go through eth0 (9.*) are. Why is that?
Also, I know I can make everything work if I use --net=host, but I don't want to use that... it doesn't feel right somehow. Is it standard practice to use host mode in Docker containers? Any caveats on that?
--
UPDATE:
I managed to make it work after disabling iptables:
service iptables stop
However, I still don't understand what is going on. The info below should be relevant:
ifconfig
[root@mydockervm2 myuser]# ifconfig | grep -A 1 flags
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
--
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.21.18 netmask 255.255.255.0 broadcast 10.1.21.255
--
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 9.32.145.99 netmask 255.255.255.0 broadcast 9.32.148.255
--
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
--
veth8dbab2f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::3815:67ff:fe9b:88e9 prefixlen 64 scopeid 0x20<link>
--
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
netstat
[root@mydockervm2 myuser]# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 9.32.145.1 0.0.0.0 UG 0 0 0 eth0
9.32.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
10.1.21.0 0.0.0.0 255.255.255.0 U 0 0 0 ens34
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ens34
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
filters
[root@mydockervm2 myuser]# iptables -t filter -vS
-P INPUT ACCEPT -c 169 106311
-P FORWARD ACCEPT -c 0 0
-P OUTPUT ACCEPT -c 110 13426
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -c 0 0 -j DOCKER-ISOLATION
-A FORWARD -o docker0 -c 0 0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -c 0 0 -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -c 0 0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -c 0 0 -j ACCEPT
-A FORWARD -m physdev --physdev-is-bridged -c 0 0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -c 0 0 -j ACCEPT
-A DOCKER-ISOLATION -c 0 0 -j RETURN
nat
[root@mydockervm2 myuser]# iptables -t nat -vS
-P PREROUTING ACCEPT -c 28 4818
-P INPUT ACCEPT -c 28 4818
-P OUTPUT ACCEPT -c 8 572
-P POSTROUTING ACCEPT -c 8 572
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -c 2 98 -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -c 0 0 -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -c 0 0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 8080 -c 0 0 -j MASQUERADE
-A DOCKER -i docker0 -c 0 0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -c 0 0 -j DNAT --to-destination 172.17.0.2:8080
Thoughts?
First, rule out the obvious and make sure that hosts on the other networks know how to route to your machine to reach the container network. For that, check
netstat -nr
on the source host and make sure that your docker subnet is listed with your docker host as the gateway, or that the default router handling the traffic upstream knows about your host.
If traffic is getting routed but blocked, then you're getting into forwarding and iptables. For forwarding, the following should show a 1:
cat /proc/sys/net/ipv4/ip_forward
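If it shows 0, forwarding can be turned on (assuming that is acceptable on this host) with:
sudo sysctl -w net.ipv4.ip_forward=1
# persist it with a net.ipv4.ip_forward = 1 line in /etc/sysctl.conf or a drop-in under /etc/sysctl.d/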
Make sure your local host shows a route for the bridges to your container networks with the same netstat command; there should be a line for the docker0 interface with your docker subnet as the destination:
netstat -nr
For iptables, check to see if there are any interface specific nat or filter rules that need to be adjusted:
iptables -t filter -vS
iptables -t nat -vS
If your forward rule defaults to DROP instead of ACCEPT, you may want to add some logging, or just change the default to accept traffic if you believe it can be trusted (e.g. the host is behind another firewall).
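For example (a sketch; only switch the policy to ACCEPT if that really is safe in your environment):
# log whatever falls through to the end of the FORWARD chain before the policy applies
sudo iptables -A FORWARD -j LOG --log-prefix 'FORWARD drop: '
# or simply accept forwarded traffic by default
sudo iptables -P FORWARD ACCEPT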
This all being said, advertising ports directly on the host is a fairly common practice with containers. For the private stuff, you can set up multiple containers isolated on their internal network that can talk to each other, but no other containers, and you only expose the ports that are truly open to the rest of the world on the host with the -p flag to docker run (or the ports option in docker-compose).
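A minimal sketch of that pattern (the names and images here are just placeholders):
# containers on an --internal network can talk to each other but have no external connectivity
docker network create --internal backend
docker run -d --name db --network backend redis
# only the web container publishes a port on the host
docker network create frontend
docker run -d --name web --network frontend -p 80:8080 my-web-image
docker network connect backend web   # web can now reach db by name as well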

Unable to connect to imap via telnet remotely

If I try:
telnet localhost 143
I can get access to imap
If I try
telnet server.name 143
I get
telnet: Unable to connect to remote host: Connection timed out
See output of my netstat.
netstat --numeric-ports -l | grep 143
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN
tcp6 0 0 :::143 :::* LISTEN
What does the above output mean?
I am at my wits' end; I cannot get IMAP to work remotely, although it works perfectly with webmail on the server.
I'm accessing the server remotely from my laptop's terminal, and locally for the localhost connection.
The output you quote:
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN
tcp6 0 0 :::143 :::* LISTEN
means that you have a program (presumably your IMAP server) listening on port 143 via TCP (on both IPv4 and IPv6). The "0.0.0.0" local address means it is listening on all interfaces, so it will accept connections arriving on any address. (If it said "127.0.0.1:143" for the local address, only local connections would be accepted.)
So, since it looks like you have the server listening correctly, I'd first check that server.name actually resolves to the correct IP address. Can you contact any other services on that server to make sure that part works?
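For instance (substituting the real hostname):
dig +short server.name     # the address the DNS server returns
getent hosts server.name   # what the local resolver library actually uses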
Assuming that works, the next thing I'd check is your firewall. You might look at http://www.cyberciti.biz/faq/howto-display-linux-iptables-loaded-rules/ but you could probably start by just running:
sudo iptables -L -v
On my machine which has no firewall rules I get this:
$ sudo iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
If you get something much different, then I'd take a closer look to see whether that's blocking your traffic.
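If something there is dropping it, explicitly allowing IMAP would look roughly like this (how you persist it depends on your distribution's firewall tooling):
sudo iptables -I INPUT -p tcp --dport 143 -j ACCEPT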
