How to troubleshoot UDP port unreachable between busybox and kube-dns - dns

How do I troubleshoot this problem?
I have a manual setup of Kubernetes which uses CoreDNS as its cluster-internal DNS. A busybox pod has been deployed to do an nslookup on kubernetes.default.
The lookup fails with the message nslookup: can't resolve 'kubernetes.default'. To get more insight into what is happening during the lookup, I checked the network traffic going out from my busybox pod with tcpdump. This shows that my pod can connect successfully to the CoreDNS pod, but the CoreDNS pod fails to connect back:
10:25:53.328153 IP 10.200.0.29.49598 > 10.32.0.10.domain: 2+ PTR? 10.0.32.10.in-addr.arpa. (41)
10:25:53.328393 IP 10.200.0.30.domain > 10.200.0.29.49598: 2* 1/0/0 PTR kube-dns.kube-system.svc.cluster.local. (93)
10:25:53.328410 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 49598 unreachable, length 129
10:25:58.328516 IP 10.200.0.29.50899 > 10.32.0.10.domain: 3+ PTR? 10.0.32.10.in-addr.arpa. (41)
10:25:58.328738 IP 10.200.0.30.domain > 10.200.0.29.50899: 3* 1/0/0 PTR kube-dns.kube-system.svc.cluster.local. (93)
10:25:58.328752 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 50899 unreachable, length 129
10:25:58.343205 ARP, Request who-has 10.200.0.1 tell 10.200.0.29, length 28
10:25:58.343217 ARP, Reply 10.200.0.1 is-at 0a:58:0a:c8:00:01 (oui Unknown), length 28
10:25:58.351250 ARP, Request who-has 10.200.0.29 tell 10.200.0.30, length 28
10:25:58.351250 ARP, Request who-has 10.200.0.30 tell 10.200.0.29, length 28
10:25:58.351261 ARP, Reply 10.200.0.29 is-at 0a:58:0a:c8:00:1d (oui Unknown), length 28
10:25:58.351262 ARP, Reply 10.200.0.30 is-at 0a:58:0a:c8:00:1e (oui Unknown), length 28
10:26:03.331409 IP 10.200.0.29.45823 > 10.32.0.10.domain: 4+ PTR? 10.0.32.10.in-addr.arpa. (41)
10:26:03.331618 IP 10.200.0.30.domain > 10.200.0.29.45823: 4* 1/0/0 PTR kube-dns.kube-system.svc.cluster.local. (93)
10:26:03.331631 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 45823 unreachable, length 129
10:26:08.348259 IP 10.200.0.29.43332 > 10.32.0.10.domain: 5+ PTR? 10.0.32.10.in-addr.arpa. (41)
10:26:08.348492 IP 10.200.0.30.domain > 10.200.0.29.43332: 5* 1/0/0 PTR kube-dns.kube-system.svc.cluster.local. (93)
10:26:08.348506 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 43332 unreachable, length 129
10:26:13.353491 IP 10.200.0.29.55715 > 10.32.0.10.domain: 6+ AAAA? kubernetes.default. (36)
10:26:13.354955 IP 10.200.0.30.domain > 10.200.0.29.55715: 6 NXDomain* 0/0/0 (36)
10:26:13.354971 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 55715 unreachable, length 72
10:26:18.354285 IP 10.200.0.29.57421 > 10.32.0.10.domain: 7+ AAAA? kubernetes.default. (36)
10:26:18.355533 IP 10.200.0.30.domain > 10.200.0.29.57421: 7 NXDomain* 0/0/0 (36)
10:26:18.355550 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 57421 unreachable, length 72
10:26:23.359405 IP 10.200.0.29.44332 > 10.32.0.10.domain: 8+ AAAA? kubernetes.default. (36)
10:26:23.361155 IP 10.200.0.30.domain > 10.200.0.29.44332: 8 NXDomain* 0/0/0 (36)
10:26:23.361171 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 44332 unreachable, length 72
10:26:23.367220 ARP, Request who-has 10.200.0.30 tell 10.200.0.29, length 28
10:26:23.367232 ARP, Reply 10.200.0.30 is-at 0a:58:0a:c8:00:1e (oui Unknown), length 28
10:26:23.370352 ARP, Request who-has 10.200.0.1 tell 10.200.0.29, length 28
10:26:23.370363 ARP, Reply 10.200.0.1 is-at 0a:58:0a:c8:00:01 (oui Unknown), length 28
10:26:28.367698 IP 10.200.0.29.48446 > 10.32.0.10.domain: 9+ AAAA? kubernetes.default. (36)
10:26:28.369133 IP 10.200.0.30.domain > 10.200.0.29.48446: 9 NXDomain* 0/0/0 (36)
10:26:28.369149 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 48446 unreachable, length 72
10:26:33.381266 IP 10.200.0.29.50714 > 10.32.0.10.domain: 10+ A? kubernetes.default. (36)
10:26:33.382745 IP 10.200.0.30.domain > 10.200.0.29.50714: 10 NXDomain* 0/0/0 (36)
10:26:33.382762 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 50714 unreachable, length 72
10:26:38.386288 IP 10.200.0.29.39198 > 10.32.0.10.domain: 11+ A? kubernetes.default. (36)
10:26:38.388635 IP 10.200.0.30.domain > 10.200.0.29.39198: 11 NXDomain* 0/0/0 (36)
10:26:38.388658 IP 10.200.0.29 > 10.200.0.30: ICMP 10.200.0.29 udp port 39198 unreachable, length 72
10:26:38.395241 ARP, Request who-has 10.200.0.29 tell 10.200.0.30, length 28
10:26:38.395248 ARP, Reply 10.200.0.29 is-at 0a:58:0a:c8:00:1d (oui Unknown), length 28
10:26:43.389355 IP 10.200.0.29.46495 > 10.32.0.10.domain: 12+ A? kubernetes.default. (36)
10:26:43.391522 IP 10.200.0.30.domain > 10.200.0.29.46495: 12 NXDomain* 0/0/0 (36)
10:26:43.391539 IP 10.200.0.2
Cluster Infrastructure
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default deploy/busybox 1 1 1 1 1h
kube-system deploy/coredns 1 1 1 1 17h
NAMESPACE NAME DESIRED CURRENT READY AGE
default rs/busybox-56db8bd9d7 1 1 1 1h
kube-system rs/coredns-b8d4b46c8 1 1 1 17h
NAMESPACE NAME READY STATUS RESTARTS AGE
default po/busybox-56db8bd9d7-fv7np 1/1 Running 2 1h
kube-system po/coredns-b8d4b46c8-6tg5d 1/1 Running 2 17h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 22h
kube-system svc/kube-dns ClusterIP 10.32.0.10 <none> 53/UDP,53/TCP,9153/TCP 17h
Busybox IP
kubectl describe pod busybox-56db8bd9d7-fv7np | grep IP
IP: 10.200.0.29
Endpoints, to see the DNS pod IP and ports
kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 192.168.0.218:6443 22h
kube-system kube-controller-manager <none> 22h
kube-system kube-dns 10.200.0.30:9153,10.200.0.30:53,10.200.0.30:53 2h
kube-system kube-scheduler <none> 22h

Debugging this requires a couple of steps to make sure you have all the ground covered.
Start by launching a pod (busybox or anything else) that has a tool like host, dig or nslookup.
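For example (the pod name dns-test and the image busybox:1.28 are arbitrary choices; any image that ships host, dig or nslookup will do):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- sh
# then, inside the pod:
nslookup kubernetes.default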
Next, identify the pod IP of the CoreDNS pod. With that, proceed to run host kubernetes.default.svc.cluster.local <pod IP>. If that does not work, there is something wrong with pod-to-pod connectivity in your cluster.
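With the CoreDNS pod IP from the endpoints listing above (10.200.0.30), that check would look roughly like this:
host kubernetes.default.svc.cluster.local 10.200.0.30
# or, with busybox's nslookup:
nslookup kubernetes.default.svc.cluster.local 10.200.0.30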
If it does work, try host kubernetes.default.svc.cluster.local <service IP> with the service IP of your DNS service. If that does not work, then it looks like kube-proxy is not doing its work properly, or something is messed up at the iptables level.
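Here that would be the kube-dns service IP, 10.32.0.10:
host kubernetes.default.svc.cluster.local 10.32.0.10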
If that worked, take a look at /etc/resolv.conf in the pod and at the kubelet --cluster-dns flag value.
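For instance (the nameserver in resolv.conf should match the --cluster-dns value, 10.32.0.10 in this cluster):
kubectl exec busybox-56db8bd9d7-fv7np -- cat /etc/resolv.conf
# on the worker node, check how kubelet was started:
ps -ef | grep kubelet     # look for --cluster-dns=10.32.0.10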
Side note: all of the above assumes your CoreDNS itself works fine in the first place.
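A quick sanity check for that is to look at the CoreDNS pod's status and logs (pod name taken from the listing above):
kubectl -n kube-system get pod coredns-b8d4b46c8-6tg5d
kubectl -n kube-system logs coredns-b8d4b46c8-6tg5d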

Related

How do I get a ping request to ubuntu?

I have created an Azure VPN gateway, and my Mac and my Ubuntu 18.04 machine each have a P2S connection.
However, neither machine receives packets when pinging the other.
How can I get the pings to arrive?
Do I need to configure route forwarding?
■ azure vnet subnet
- FrontEnd
- 10.1.0.0/24
- GatewaySubnet
- 10.1.255.0/27
- address space
- 10.0.0.0/16
- 10.1.0.0/16
■ azure vnet gateway
- address pool
- 172.16.201.0/24
- tunnel type
- openVPN
■ mac
- wifi ssid
- A
- private ip address network interface
- eth0
- 10.2.59.83
- utun3(vpn)
- inet 172.16.201.4 --> 172.16.201.4
- netstat -rn
172.16/24 utun3 USc utun3
172.16.201/24 link#17 UCS utun3
172.16.201.4 172.16.201.4 UH utun3
192.168.0/23 utun3 USc utun3
■ ubuntu 18.04
- wifi ssid
- B
- private ip address network interface
- wlan0
- 192.168.100.208
- tun0(vpn)
- 172.16.201.3/24
- route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 20600 0 0 wlan0
default _gateway 0.0.0.0 UG 32766 0 0 l4tbr0
10.0.0.0 _gateway 255.255.0.0 UG 50 0 0 tun0
10.1.0.0 _gateway 255.255.0.0 UG 50 0 0 tun0
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 l4tbr0
172.16.201.0 0.0.0.0 255.255.0.0 U 50 0 0 tun0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.55.0 0.0.0.0 255.255.255.0 U 0 0 0 l4tbr0
192.168.100.0 0.0.0.0 255.255.255.0 U 600 0 0 wlan0

Is routing table directing traffic through vpn?

I have successfully connected to the Fortinet VPN using openfortivpn, but my traffic is still being routed the same way.
I am using Ubuntu 18.04.1 LTS, and when I connect via the terminal I get the following log messages:
INFO: Connected to gateway.
INFO: Authenticated.
INFO: Remote gateway has allocated a VPN.
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Interface ppp0 is UP.
INFO: Setting new routes...
INFO: Adding VPN nameservers...
INFO: Tunnel is up and running.
For some reason there seem to be multiple "Got addresses" log lines; that might be why my routing table looks different from the one I found online:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    600    0        0 wlp3s0
1dot1dot1dot1.c 0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 virbr0
172.16.0.0      telix-ThinkPad- 255.255.0.0     UG    0      0        0 ppp0
172.31.0.0      telix-ThinkPad- 255.255.255.248 UG    0      0        0 ppp0
172.31.1.0      telix-ThinkPad- 255.255.255.240 UG    0      0        0 ppp0
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp3s0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.168.229.0   telix-ThinkPad- 255.255.255.0   UG    0      0        0 ppp0
192.168.230.0   telix-ThinkPad- 255.255.255.0   UG    0      0        0 ppp0
192.168.231.0   telix-ThinkPad- 255.255.255.0   UG    0      0        0 ppp0
206.165.205.130 _gateway        255.255.255.255 UGH   0      0        0 wlp3s0
When I check the traffic using sudo tcpdump -i ppp0 I get nothing, so this has led me to believe that there must be a routing table problem.
Any help would be greatly appreciated!
Your pppd probably did not add a route to direct traffic to the VPN interface (say, ppp0). You can check the name of the VPN interface with ifconfig. After successfully running the command/GUI to connect to the VPN, you will see an extra interface (normally ppp0). Now you can try running this command to force all traffic on your machine to go through the VPN interface:
sudo route add default ppp0
Note that this command adds a temporary route to the routing table. As soon as you turn off your VPN connection, the route is deleted, so every time you connect to the VPN server you will need to run the above command again.
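On systems with iproute2 the equivalent would be (a sketch, assuming the VPN interface is still ppp0):
sudo ip route replace default dev ppp0   # send all traffic through the VPN
ip route show                            # verify the default route now points at ppp0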
Hope this helps.

Why can I not connect to another pod from within one pod of the Kubernetes cluster?

I have done all the Kubernetes DNS service configuration and tested that it runs OK, but how can I access a pod via its service name (DNS domain name)?
pod list:
[root@localhost ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
bj001-y1o2i 3/3 Running 12 20h
dns-itc8d 3/3 Running 18 1d
nginx-rc5bh 1/1 Running 1 15h
service list:
[root@localhost ~]# kb get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
bj001 10.254.54.162 172.16.2.51 30101/TCP,30102/TCP app=bj001 1d
dns 10.254.0.2 <none> 53/UDP,53/TCP app=dns 1d
kubernetes 10.254.0.1 <none> 443/TCP <none> 8d
nginx 10.254.72.30 172.16.2.51 80/TCP app=nginx 20h
endpoints:
[root@localhost ~]# kb get endpoints
NAME ENDPOINTS AGE
bj001 172.17.12.3:18010,172.17.12.3:3306 1d
dns 172.17.87.3:53,172.17.87.3:53 1d
kubernetes 172.16.2.50:6443 8d
nginx 172.17.12.2:80 20h
In the nginx pod, I can ping the bj001 pod's IP and resolve the DNS name, but I cannot ping the DNS domain name.
like this:
[root@localhost ~]# kb exec -it nginx-rc5bh sh
sh-4.2# nslookup bj001
Server: 10.254.0.2
Address: 10.254.0.2#53
Name: bj001.default.svc.cluster.local
Address: 10.254.54.162
sh-4.2# ping 172.17.12.3
PING 172.17.12.3 (172.17.12.3) 56(84) bytes of data.
64 bytes from 172.17.12.3: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 172.17.12.3: icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from 172.17.12.3: icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from 172.17.12.3: icmp_seq=4 ttl=64 time=0.105 ms
^C
--- 172.17.12.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.073/0.087/0.105/0.011 ms
sh-4.2# ping bj001
PING bj001.default.svc.cluster.local (10.254.54.162) 56(84) bytes of data.
^C
--- bj001.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
I have found my mistake.
Kubernetes uses iptables to forward traffic between pods, so every port that is used must be declared in the service's spec.ports. In my case, port 18010 had to be opened as well.
[root@localhost ~]# kb get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
bj001 10.254.91.218 <none> 3306/TCP,18010/TCP app=bj001 41m
dns 10.254.0.2 <none> 53/UDP,53/TCP app=dns 1d
kubernetes 10.254.0.1 <none> 443/TCP <none> 8d
nginx 10.254.72.30 172.16.2.51 80/TCP app=nginx 1d
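For reference, a minimal Service manifest with both ports declared might look like the sketch below (name, ports and selector are taken from the listing above; the port names are placeholders of my own):
apiVersion: v1
kind: Service
metadata:
  name: bj001
spec:
  selector:
    app: bj001
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
  - name: app
    port: 18010
    protocol: TCP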

Ping a virtualbox machine from the host machine shows "Destination Host Unreachable"

I don't know why but I can't ping a virtual machine node from the host. I have created a network:
vboxnet1:
IPv4 Address: 192.168.57.0
IPv4 Network Mask: 255.255.255.0
IPv6 Address: fe80:0000:0000:0000:0800:27ff:fe00:0000
IPv6 Network Mask Length: 64
Then I have created a virtual machine with 2 interfaces:
adapter 1: NAT
adapter 2: Host-only Adapter. Name: vboxnet1
Check "Cable Connected"
Then I installed CentOS 7 on the VM.
Edit /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
ONBOOT=yes
Edit /etc/sysconfig/network-scripts/ifcfg-eth1:
TYPE=Ethernet
IPADDR=192.168.57.111
NETMASK=255.255.255.0
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth1
DEVICE=eth1
ONBOOT=yes
"ip addr" on VM shows that eth0 is 10.0.2.15/24 and eth1 is 192.168.57.111/24
"route -n" on host machine shows:
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 wlan0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
192.168.56.0 0.0.0.0 255.255.255.0 U 0 0 0 vboxnet0
192.168.57.0 0.0.0.0 255.255.255.0 U 0 0 0 vboxnet1
Virtual machines can ping each other. Also, Virtual machines can ping the host machine but the host machine can't ping virtual machines.
Can somebody explain why it isn't working?
I used a bridged network because security isn't a concern in my setup.
Here is a summary of the tutorial in the link from @ser99.sh:
Select the virtual machine that you want to connect to your network:
Right-click your virtual machine and select Settings --> Network --> Bridged Adapter:
Start up your virtual machine and select a suitable static IP address:
Verify that you have access to other computers:
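The same change can also be made from the command line; a sketch, assuming the VM is named "centos7", is powered off, and should bridge its second adapter to the host's wlan0 interface:
VBoxManage modifyvm "centos7" --nic2 bridged --bridgeadapter2 wlan0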
If you want to connect your host machine with guest machines, you can use a "bridged network":
http://www.thegeeky.space/2015/06/how-to-set-and-run-bridge-virtual-network-on-CentOS-Kali-Linux-Windows-in-Virtualbox-with-practical-example.html

Why are UDP packets sent from default interface address instead of the address where the client packet is received?

For a long time I had trouble with several programs (early versions of TeamSpeak 3, netcat, OpenVPN) communicating over the UDP protocol. Today I identified the problem.
My main goal was to run OpenVPN over UDP, which did not seem to work on my server, which has multiple IP addresses (it runs Ubuntu Server, kernel 3.2.0-35-generic).
Using the following config:
# ifconfig -a
eth0 Link encap:Ethernet HWaddr 11:11:11:11:11:11
inet addr:1.1.1.240 Bcast:1.1.1.255 Mask:255.255.255.224
...
# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 1.1.1.240
broadcast 1.1.1.255
netmask 255.255.255.224
gateway 1.1.1.225
up ip addr add 1.1.1.249/27 dev eth0
down ip addr del 1.1.1.249/27 dev eth0
up ip addr add 2.2.2.59/29 dev eth0
down ip addr del 2.2.2.59/29 dev eth0
up route add -net 2.2.2.56 netmask 255.255.255.248 gw 2.2.2.57 eth0
# default route to access subnet
up route add -net 1.1.1.224 netmask 255.255.255.224 gw 1.1.1.225 eth0
Problem:
A simple tcpdump on the server reveals that UDP packets (tested with netcat and OpenVPN) received at 2.2.2.59 are answered from 1.1.1.240 (client: 123.11.22.33):
13:55:30.253472 IP 123.11.22.33.54489 > 2.2.2.59.1223: UDP, length 5
13:55:36.826658 IP 1.1.1.240.1223 > 123.11.22.33.54489: UDP, length 5
Question:
Is this problem due to a wrong configuration of the network interface, or of the application itself (OpenVPN, netcat)?
Is it possible for an application to listen on multiple IP addresses and reply from the interface address where it received the packet over UDP, like it does when using TCP?
I know that you can bind an application to a specific IP, but that would not be the way to go.
I cannot see that this behaviour is due to the UDP protocol itself, since it is possible for the application to determine at which interface address the packet was received.
Specifically, OpenVPN has the --multihome option for handling this scenario correctly.
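A sketch of the relevant server-side OpenVPN configuration, with only the lines that matter for this behaviour shown (the rest of server.conf is omitted):
# server.conf excerpt
proto udp
multihome     # reply from the same local address the request arrived on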
