K8s pods unable to reach external VM via internal IP - azure

I am migrating from GCP to the Azure platform. I have a k8s cluster that needs to talk to an external Cassandra cluster using internal IPs, in the same Azure region but a different VNet. I have the VNets peered. I can reach the Cassandra cluster from the k8s nodes and vice versa, but I cannot reach it from the pods.
This seems to be an Azure networking issue. I have opened up firewall rules for the pods to reach Cassandra, but with no luck. How best should I solve this?

This is because Azure has no route to the private IP addresses of your pods. You can use an Azure route table to connect them.
Here is my test: two resource groups, one for the k8s cluster and another for a single VM.
Here is the information about the pods:
root@k8s-master-CA9C4E39-0:~# kubectl get pods --output=wide
NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE
influxdb                   1/1     Running   0          59m   10.244.1.166   k8s-agent-ca9c4e39-0
my-nginx-858393261-jrz15   1/1     Running   0          1h    10.244.1.63    k8s-agent-ca9c4e39-0
my-nginx-858393261-wbpl6   1/1     Running   0          1h    10.244.1.62    k8s-agent-ca9c4e39-0
nginx                      1/1     Running   0          52m   10.244.1.179   k8s-agent-ca9c4e39-0
nginx3                     1/1     Running   0          43m   10.244.1.198   k8s-agent-ca9c4e39-0
The information about the k8s agent and master (screenshot omitted):
The information about the single VM (screenshot omitted):
By default, the VM at 172.16.0.4 can't ping 10.244.1.0/24. We should add an Azure route table; after that, we can ping the pod IP addresses:
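A rough Azure CLI sketch of that route table setup; the resource group, VNet, and subnet names here are placeholders you must replace with your own, and the next-hop address is the private IP of the agent node hosting the pods:

```shell
# Create a route table in the VM's resource group (names are hypothetical).
az network route-table create \
  --resource-group myVmGroup --name pod-routes

# Route the pod CIDR via the k8s agent node's private IP,
# treating the node as a virtual appliance.
az network route-table route create \
  --resource-group myVmGroup --route-table-name pod-routes \
  --name to-pods --address-prefix 10.244.1.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address <agent-node-private-ip>

# Associate the route table with the VM's subnet.
az network vnet subnet update \
  --resource-group myVmGroup --vnet-name myVmVnet --name default \
  --route-table pod-routes
```

With a multi-node cluster you would add one route per node, each covering that node's pod CIDR.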
Here is my result:
root@jasonvm2:~# ping 10.244.1.166
PING 10.244.1.166 (10.244.1.166) 56(84) bytes of data.
64 bytes from 10.244.1.166: icmp_seq=1 ttl=63 time=2.61 ms
64 bytes from 10.244.1.166: icmp_seq=2 ttl=63 time=1.42 ms
--- 10.244.1.166 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.424/2.019/2.614/0.595 ms
root@jasonvm2:~# ping 10.244.1.63
PING 10.244.1.63 (10.244.1.63) 56(84) bytes of data.
64 bytes from 10.244.1.63: icmp_seq=1 ttl=63 time=2.89 ms
64 bytes from 10.244.1.63: icmp_seq=2 ttl=63 time=2.27 ms
--- 10.244.1.63 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.271/2.581/2.892/0.314 ms
For more about Azure route tables, please refer to this link.

Related

docker pull fails with error "Error while pulling image: connection reset by peer"

docker pull rhel7:7.3
Pulling repository docker.io/library/rhel7
Error while pulling image: Get
https://index.docker.io/v1/repositories/library/rhel7/images: read tcp
X.X.X.X:33074->52.44.135.200:443: read: connection reset by peer
[Both docker run -it and docker pull give the same error]
Hi all,
I am trying to pull a Docker image and am seeing the above error.
My Linux server is behind a proxy.
I tried to ping google.com to ensure I have internet connectivity, and that worked as well.
ping google.com
PING google.com (172.217.3.174) 56(84) bytes of data.
64 bytes from sea15s11-in-f14.1e100.net (172.217.3.174): icmp_seq=1 ttl=53 time=16.1 ms
64 bytes from sea15s11-in-f14.1e100.net (172.217.3.174): icmp_seq=2 ttl=53 time=15.9 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
Any suggestions to fix this issue please?
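One detail worth checking in this setup: ping works from the shell, but the Docker daemon does not inherit the shell's proxy environment, so pulls can fail even when the host has connectivity. On a systemd-based host the daemon needs its own proxy configuration; a sketch, where the proxy address is a placeholder for your own:

```shell
# Give the Docker daemon its own proxy settings (proxy URL is hypothetical).
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF
# Reload units and restart the daemon so the settings take effect.
sudo systemctl daemon-reload
sudo systemctl restart docker
```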

How to enable outbound connections for a Docker container?

I have an ASP.NET Core application that is hosted in the Docker cloud (cloud provider is Azure). The application uses Hangfire to run recurring jobs in the background, and one of the jobs needs to request data from an external REST API. I noticed that any attempt at outbound communication fails, and I would like to know how I can enable it.
The deployment consists of some other containers, whereby linked containers (services) can communicate with no problem. There is no special network configuration; the default "bridge" mode is used. Do I need to configure something in the container's image, or do I need to make changes to the network settings? I have no clue.
There is no special network configuration; the default "bridge" mode
is used.
According to your description, it seems you are using a VM and running Docker on it.
If you want to access this Docker container from the Internet, you should map the container port to a local port, for example:
docker run -d -p 80:80 my_image service nginx start
After we map port 80 to this VM, we should add inbound rules to the Azure network security group (NSG); we can follow this article to add them.
We should also add port 80 to the OS firewall inbound rules.
Update:
Sorry for the misunderstanding.
Here is my test: I installed Docker on an Azure VM (Ubuntu 16), then started a CentOS container, like this:
root@jasonvms:~# docker run -i -t centos bash
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
d9aaf4d82f24: Pull complete
Digest: sha256:4565fe2dd7f4770e825d4bd9c761a81b26e49cc9e3c9631c58cfc3188be9505a
Status: Downloaded newer image for centos:latest
[root@75f92bf5b499 /]# ping www.google.com
PING www.google.com (172.217.3.100) 56(84) bytes of data.
64 bytes from lga34s18-in-f4.1e100.net (172.217.3.100): icmp_seq=1 ttl=47 time=7.93 ms
64 bytes from lga34s18-in-f4.1e100.net (172.217.3.100): icmp_seq=2 ttl=47 time=8.13 ms
64 bytes from lga34s18-in-f4.1e100.net (172.217.3.100): icmp_seq=3 ttl=47 time=8.15 ms
^C
--- www.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 7.939/8.076/8.153/0.121 ms
[root@75f92bf5b499 /]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=1.88 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=1.89 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=51 time=1.86 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=51 time=1.87 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=51 time=1.78 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=51 time=1.87 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5009ms
rtt min/avg/max/mdev = 1.783/1.861/1.894/0.061 ms
[root@75f92bf5b499 /]#
I find it can communicate with the internet. Could you please share more information about your issue?
If you are using a standalone instance, then change the instance's network security group to allow outbound rules.
If you are using ACS, follow the link below:
https://learn.microsoft.com/en-us/azure/container-service/dcos-swarm/container-service-enable-public-access
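For the standalone-instance case, an outbound NSG rule can be sketched with the Azure CLI; the resource group and NSG names are placeholders for your own:

```shell
# Sketch: allow outbound HTTPS from the VM's NSG so containers can reach
# external REST APIs (names are hypothetical -- substitute yours).
az network nsg rule create \
  --resource-group myGroup --nsg-name myVmNsg \
  --name AllowOutboundHttps --direction Outbound --priority 100 \
  --access Allow --protocol Tcp \
  --destination-address-prefixes Internet \
  --destination-port-ranges 443
```

Note that Azure NSGs allow outbound Internet traffic by default, so an explicit rule is only needed if a higher-priority deny rule exists.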

Why I can not connect other pod in one pod of the kubernetes cluster?

I have done all the Kubernetes DNS service configuration and tested that it runs OK. But how can I access a pod via its service name (DNS domain name)?
pod list:
[root@localhost ~]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
bj001-y1o2i   3/3     Running   12         20h
dns-itc8d     3/3     Running   18         1d
nginx-rc5bh   1/1     Running   1          15h
service list:
[root@localhost ~]# kb get svc
NAME         CLUSTER_IP      EXTERNAL_IP   PORT(S)               SELECTOR    AGE
bj001        10.254.54.162   172.16.2.51   30101/TCP,30102/TCP   app=bj001   1d
dns          10.254.0.2      <none>        53/UDP,53/TCP         app=dns     1d
kubernetes   10.254.0.1      <none>        443/TCP               <none>      8d
nginx        10.254.72.30    172.16.2.51   80/TCP                app=nginx   20h
endpoints:
[root@localhost ~]# kb get endpoints
NAME         ENDPOINTS                            AGE
bj001        172.17.12.3:18010,172.17.12.3:3306   1d
dns          172.17.87.3:53,172.17.87.3:53        1d
kubernetes   172.16.2.50:6443                     8d
nginx        172.17.12.2:80                       20h
From the nginx pod, I can ping the bj001 pod's IP and resolve the DNS name, but I cannot ping the service's DNS domain name.
like this:
[root@localhost ~]# kb exec -it nginx-rc5bh sh
sh-4.2# nslookup bj001
Server: 10.254.0.2
Address: 10.254.0.2#53
Name: bj001.default.svc.cluster.local
Address: 10.254.54.162
sh-4.2# ping 172.17.12.3
PING 172.17.12.3 (172.17.12.3) 56(84) bytes of data.
64 bytes from 172.17.12.3: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 172.17.12.3: icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from 172.17.12.3: icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from 172.17.12.3: icmp_seq=4 ttl=64 time=0.105 ms
^C
--- 172.17.12.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.073/0.087/0.105/0.011 ms
sh-4.2# ping bj001
PING bj001.default.svc.cluster.local (10.254.54.162) 56(84) bytes of data.
^C
--- bj001.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
I have found my mistake.
Kubernetes uses iptables to route traffic between pods, so every port we use must be set in {spec.ports}. In my case, port 18010 had to be opened.
[root@localhost ~]# kb get svc
NAME         CLUSTER_IP      EXTERNAL_IP   PORT(S)              SELECTOR    AGE
bj001        10.254.91.218   <none>        3306/TCP,18010/TCP   app=bj001   41m
dns          10.254.0.2      <none>        53/UDP,53/TCP        app=dns     1d
kubernetes   10.254.0.1      <none>        443/TCP              <none>      8d
nginx        10.254.72.30    172.16.2.51   80/TCP               app=nginx   1d
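The fix above can be sketched as a Service manifest that declares every port clients use; the service name and selector match the listing, but the port names and other fields are assumptions:

```shell
# Sketch of the corrected Service: both 3306 and 18010 appear in spec.ports,
# so kube-proxy sets up iptables rules for both.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: bj001
spec:
  selector:
    app: bj001
  ports:
  - name: mysql      # port name is a hypothetical label
    port: 3306
  - name: app        # the previously missing port
    port: 18010
EOF
```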

Google Cloud DNS type A not working

I've added my nameservers to my registrar:
NS-CLOUD-E1.GOOGLEDOMAINS.COM
NS-CLOUD-E2.GOOGLEDOMAINS.COM
NS-CLOUD-E3.GOOGLEDOMAINS.COM
NS-CLOUD-E4.GOOGLEDOMAINS.COM
Updated 1/12/2016
Updated Google Cloud DNS:
DNS Name Type TTL Data
dvotedfan.com. A 300 130.211.8.93
www.dvotedfan.com. A 300 130.211.8.93
dvotedfan.com. NS 60 ns-cloud-e1.googledomains.com.
ns-cloud-e2.googledomains.com.
ns-cloud-e3.googledomains.com.
ns-cloud-e4.googledomains.com.
Waited one day, then:
I've checked DNS and it has resolved:
http://network-tools.com/default.asp?prog=dnsrec&host=dvotedfan.com
But if I ping the domain I get an error (Unknown error: 1214):
http://network-tools.com/default.asp?prog=ping&host=dvotedfan.com
I can access via my IP (load balancer) 130.211.8.93 but not via the domain name (dvotedfan.com).
It could be that network-tools.com is having some problems. It looks like it works to me.
stephen#stackoverflow:~$ ping -c 5 dvotedfan.com
PING dvotedfan.com (130.211.8.93) 56(84) bytes of data.
64 bytes from 93.8.211.130.bc.googleusercontent.com (130.211.8.93): icmp_seq=1 ttl=57 time=1.61 ms
64 bytes from 93.8.211.130.bc.googleusercontent.com (130.211.8.93): icmp_seq=2 ttl=57 time=1.61 ms
64 bytes from 93.8.211.130.bc.googleusercontent.com (130.211.8.93): icmp_seq=3 ttl=57 time=1.64 ms
64 bytes from 93.8.211.130.bc.googleusercontent.com (130.211.8.93): icmp_seq=4 ttl=57 time=1.65 ms
64 bytes from 93.8.211.130.bc.googleusercontent.com (130.211.8.93): icmp_seq=5 ttl=57 time=1.81 ms
--- dvotedfan.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 1.612/1.668/1.814/0.083 ms
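To rule out the testing site entirely, you can also query the authoritative nameservers directly from any machine with dig installed; a quick sketch:

```shell
# Ask Google Cloud DNS's authoritative server for the A record directly,
# bypassing any local resolver cache.
dig @ns-cloud-e1.googledomains.com dvotedfan.com A +short

# Compare with what a public recursive resolver returns.
dig @8.8.8.8 dvotedfan.com A +short
```

If both return 130.211.8.93, the zone is serving correctly and the ping failure is on the testing tool's side.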

Can not access kubernetes master from the container of pods according DNS

I use DNS in Kubernetes, and the test result looks like:
core@core-1-86 ~ $ kubectl exec busybox -- nslookup kubernetes
Server: 10.100.0.10
Address 1: 10.100.0.10
Name: kubernetes
Address 1: 10.100.0.1
Then I entered the busybox container and pinged kubernetes, like:
core@core-1-86 ~ $ kubectl exec -it busybox sh
/ # ping kubernetes
PING kubernetes (10.100.0.1): 56 data bytes
^C
--- kubernetes ping statistics ---
55 packets transmitted, 0 packets received, 100% packet loss
/ #
If I ping another IP, it's OK:
/ # ping 10.12.1.85
PING 10.12.1.85 (10.12.1.85): 56 data bytes
64 bytes from 10.12.1.85: seq=0 ttl=63 time=0.262 ms
64 bytes from 10.12.1.85: seq=1 ttl=63 time=0.218 ms
^C
--- 10.12.1.85 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.218/0.240/0.262 ms
/ #
Can anyone help me and tell me why?
The kubernetes service is a virtual IP and doesn't currently handle ICMP requests (see #2259). You should be able to verify connectivity to the kubernetes service using a TCP connection, e.g. curl https://kubernetes/.
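A sketch of that TCP verification from the same busybox pod, assuming the busybox image includes nc (otherwise use wget or curl from another image):

```shell
# ICMP to the service VIP fails by design, but a TCP connection succeeds
# because kube-proxy only forwards TCP/UDP to the backing endpoints.
kubectl exec busybox -- nc -z -w 2 kubernetes 443 && echo "TCP to kubernetes:443 OK"
```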
