I maintain a Kubernetes cluster. The nodes are in an intranet with 10.0.0.0/8 IPs, and the pod network range is 192.168.0.0/16.
The problem is, some of the worker nodes have unreachable routes to pod networks on other nodes, like:
Destination      Gateway          Genmask          Flags Metric Ref Use Iface
0.0.0.0          10.a.b.65        0.0.0.0          UG    0      0   0   eth0
10.a.b.64        0.0.0.0          255.255.255.192  U     0      0   0   eth0
172.17.0.0       0.0.0.0          255.255.0.0      U     0      0   0   docker0
192.168.20.0     -                255.255.255.192  !     0      -   0   -
192.168.21.128   -                255.255.255.192  !     0      -   0   -
192.168.22.64    0.0.0.0          255.255.255.192  U     0      0   0   *
192.168.22.66    0.0.0.0          255.255.255.255  UH    0      0   0   cali3859982c59e
192.168.24.128   -                255.255.255.192  !     0      -   0   -
192.168.39.192   -                255.255.255.192  !     0      -   0   -
192.168.49.192   -                255.255.255.192  !     0      -   0   -
...
192.168.208.128  -                255.255.255.192  !     0      -   0   -
192.168.228.128  10.14.170.104    255.255.255.192  UG    0      0   0   tunl0
When I docker exec into the Calico container, bird reports the routes to other nodes as unreachable:
192.168.108.64/26 unreachable [Mesh_10_15_39_59 08:04:59 from 10.a.a.a] * (100/-) [i]
192.168.112.128/26 unreachable [Mesh_10_204_89_220 08:04:58 from 10.b.b.b] * (100/-) [i]
192.168.95.192/26 unreachable [Mesh_10_204_30_35 08:04:59 from 10.c.c.c] * (100/-) [i]
192.168.39.192/26 unreachable [Mesh_10_204_89_152 08:04:59 from 10.d.d.d] * (100/-) [i]
...
As a result, pods on the broken nodes can barely reach anything in the cluster.
I've tried restarting a broken node, removing it from the cluster, running kubeadm reset, and re-joining it, but nothing changed.
What's the possible cause, and how should I fix this? Many thanks in advance.
The default IPs for cluster services such as CoreDNS (e.g. 10.96.0.1) fall inside the range 10.0.0.0/8, which overlaps your node network.
You should either change the node IPs so they no longer collide with the cluster's service range and rejoin the nodes,
or change the default route from eth0 to tunl0.
It depends on your CNI network: if you use Calico, hand the network rules and routing over to Calico.
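A quick way to see the overlap this answer is pointing at is Python's ipaddress module. The sketch below assumes kubeadm's default service CIDR of 10.96.0.0/12; substitute your cluster's actual value if it differs:

```python
import ipaddress

node_net = ipaddress.ip_network("10.0.0.0/8")       # intranet node addresses
service_net = ipaddress.ip_network("10.96.0.0/12")  # kubeadm's default service CIDR (assumed)
pod_net = ipaddress.ip_network("192.168.0.0/16")    # pod network from the question

print(node_net.overlaps(service_net))  # True: service IPs collide with node IPs
print(node_net.overlaps(pod_net))      # False: the pod CIDR is clear of the node range
```

When the node range swallows the service range like this, traffic to a service IP can be routed out eth0 toward the intranet instead of being handled by kube-proxy, which matches the symptoms described.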
Well, I upgraded Docker (19.03.14), Kubernetes (1.19.4), and Calico (3.17.0), then re-created the cluster.
Now it works well.
Related
I have created an Azure VPN gateway, and my Mac and an Ubuntu 18.04 machine each have a P2S connection.
However, neither machine receives packets when pinging the other.
How can I get them to arrive?
Do I need to configure IP forwarding?
■ azure vnet subnet
- FrontEnd
- 10.1.0.0/24
- GatewaySubnet
- 10.1.255.0/27
- address space
- 10.0.0.0/16
- 10.1.0.0/16
■ azure vnet gateway
- address pool
- 172.16.201.0/24
- tunnel type
- openVPN
■ mac
- wifi ssid
- A
- private ip address network interface
- eth0
- 10.2.59.83
- utun3(vpn)
- inet 172.16.201.4 --> 172.16.201.4
- netstat -rn
172.16/24 utun3 USc utun3
172.16.201/24 link#17 UCS utun3
172.16.201.4 172.16.201.4 UH utun3
192.168.0/23 utun3 USc utun3
■ ubuntu 18.04
- wifi ssid
- B
- private ip address network interface
- wlan0
- 192.168.100.208
- tun0(vpn)
- 172.16.201.3/24
- route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 20600 0 0 wlan0
default _gateway 0.0.0.0 UG 32766 0 0 l4tbr0
10.0.0.0 _gateway 255.255.0.0 UG 50 0 0 tun0
10.1.0.0 _gateway 255.255.0.0 UG 50 0 0 tun0
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 l4tbr0
172.16.201.0 0.0.0.0 255.255.0.0 U 50 0 0 tun0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.55.0 0.0.0.0 255.255.255.0 U 0 0 0 l4tbr0
192.168.100.0 0.0.0.0 255.255.255.0 U 600 0 0 wlan0
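One way to sanity-check the tables above is to ask which destination addresses the Mac's utun3 routes actually cover. The sketch below hard-codes the three VPN routes from the netstat -rn output ("172.16/24" expands to 172.16.0.0/24) and is only an illustration of route containment, not a diagnosis:

```python
import ipaddress

# Routes the Mac sends through utun3, per the netstat -rn output above
vpn_routes = [ipaddress.ip_network(n) for n in
              ("172.16.0.0/24", "172.16.201.0/24", "192.168.0.0/23")]

def via_vpn(addr):
    """Return True if addr falls inside one of the utun3 routes."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in vpn_routes)

print(via_vpn("172.16.201.3"))     # True: the Ubuntu box's VPN pool address is routed
print(via_vpn("192.168.100.208"))  # False: its wifi LAN address is not
```

So pings addressed to the other machine's 172.16.201.x pool address would at least leave via the tunnel, while pings to its LAN address never will.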
I have successfully connected to the Fortinet VPN using openfortivpn, but my traffic is still being routed the same way as before.
I am using Ubuntu 18.04.1 LTS, and when I connect via the terminal I get the following log messages:
INFO: Connected to gateway.
INFO: Authenticated.
INFO: Remote gateway has allocated a VPN.
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Interface ppp0 is UP.
INFO: Setting new routes...
INFO: Adding VPN nameservers...
INFO: Tunnel is up and running.
For some reason there seem to be multiple "Got addresses" log lines; that might be why my routing table looks different from the one I found online:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
default         _gateway        0.0.0.0         UG    600    0   0   wlp3s0
1dot1dot1dot1.c 0.0.0.0         255.255.255.255 UH    0      0   0   ppp0
link-local      0.0.0.0         255.255.0.0     U     1000   0   0   virbr0
172.16.0.0      telix-ThinkPad- 255.255.0.0     UG    0      0   0   ppp0
172.31.0.0      telix-ThinkPad- 255.255.255.248 UG    0      0   0   ppp0
172.31.1.0      telix-ThinkPad- 255.255.255.240 UG    0      0   0   ppp0
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0   0   wlp3s0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0   0   virbr0
192.168.229.0   telix-ThinkPad- 255.255.255.0   UG    0      0   0   ppp0
192.168.230.0   telix-ThinkPad- 255.255.255.0   UG    0      0   0   ppp0
192.168.231.0   telix-ThinkPad- 255.255.255.0   UG    0      0   0   ppp0
206.165.205.130 _gateway        255.255.255.255 UGH   0      0   0   wlp3s0
When I check the traffic using sudo tcpdump -i ppp0 I get nothing, which has led me to believe there must be a routing table problem.
Any help would be greatly appreciated!
Your pppd probably did not add a route to direct traffic to the VPN interface (say, ppp0). You can check the name of the VPN interface with ifconfig: after successfully running the command/GUI that connects to the VPN, you will see an extra interface (normally ppp0). Now you can try running this command to force all of your machine's traffic through the VPN interface:
sudo route add default ppp0
Note that this command adds a temporary route to the routing table. As soon as you turn off your VPN connection the route is deleted, so every time you connect to the VPN server you will need to run the command again.
Hope this helps.
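Why does adding a default route on ppp0 change anything when there is already a default on wlp3s0? The kernel picks the most specific matching route (longest prefix first). A simplified sketch of that selection, using a few routes from the table in the question:

```python
import ipaddress

# Simplified (destination, interface) pairs from the routing table above
routes = [
    ("0.0.0.0/0", "wlp3s0"),     # existing default route
    ("172.16.0.0/16", "ppp0"),   # a route pushed by the VPN
    ("192.168.1.0/24", "wlp3s0"),
]

def pick_route(addr, table):
    """Kernel-style longest-prefix match: the most specific route wins."""
    ip = ipaddress.ip_address(addr)
    matches = [(ipaddress.ip_network(dst), dev) for dst, dev in table
               if ip in ipaddress.ip_network(dst)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_route("172.16.5.9", routes))  # ppp0: covered by a VPN route
print(pick_route("8.8.8.8", routes))     # wlp3s0: only the default matches
```

This sketch ignores metrics: with two equal-length defaults, the kernel breaks the tie on metric, which is how a newly added default on ppp0 can take over general traffic.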
I've setup a linux VM in Azure. I've added incoming port access to the current listening port on Apache. I've also done a curl localhost on the VM and see the apache html text. I hit the public IP of the VM and get nothing. Any ideas?
According to your description, please check these settings:
1. Check the Azure VM's NSG settings; make sure the port has been added to the inbound rules.
2. Check the VNet --> subnet's security group settings.
3. Check which port Apache is listening on:
netstat -ant
root@ubuntu:~# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 10.1.0.4:55870 191.237.32.134:443 TIME_WAIT
tcp 0 0 10.1.0.4:55874 191.237.32.134:443 TIME_WAIT
tcp 0 0 10.1.0.4:55876 191.237.32.134:443 TIME_WAIT
tcp 0 0 10.1.0.4:55868 191.237.32.134:443 TIME_WAIT
tcp 0 0 10.1.0.4:57772 168.63.129.16:80 TIME_WAIT
tcp 0 0 10.1.0.4:57766 168.63.129.16:80 TIME_WAIT
tcp 0 36 10.1.0.4:22 167.220.255.8:53651 ESTABLISHED
tcp6 0 0 :::80 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
By the way, for testing, please disable ufw with the command ufw disable, then try to access the public IP address.
Update:
I followed these steps to modify the Apache default port:
1. Modify ports.conf, changing the port from 80 to 90:
root@ubuntu:/etc/apache2# vi ports.conf
Listen 90
<IfModule ssl_module>
Listen 443
2. Add ServerName localhost to /etc/apache2/apache2.conf:
root@ubuntu:/etc/apache2# vi /etc/apache2/apache2.conf
# Global configuration
#
ServerName localhost
3. Modify the default port in /etc/apache2/sites-enabled/000-default.conf:
root@ubuntu:/etc/apache2# vi /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:90>
4. Add an inbound rule to the Network Security Group.
By the way, to troubleshoot this issue, we can follow these steps:
1. Log in to the VM and use curl to test apache2:
curl localhost:90
2. From your PC, telnet to the VM's public IP on port 90:
telnet xx.xx.xx.xx 90
If you can't telnet to this port, please check your NSG settings and the subnet's security group settings.
Here is my result; it works for me:
root@ubuntu:/etc/apache2# netstat -ant | grep 90
tcp6 0 0 :::90 :::* LISTEN
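If telnet is not available, the same reachability check can be done with a short Python snippet (the 127.0.0.1:90 target is just a placeholder; swap in the VM's public IP and the port Apache listens on):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Attempt a TCP connect, like `telnet host port`; True if something accepts."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your VM's public IP and Apache's listening port
print(port_open("127.0.0.1", 90))
```

True from inside the VM but False from outside points at the NSG or a host firewall rather than Apache itself.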
We have a virtualized solution in place, and I am trying to port the environment into Docker. Just as Docker uses network namespaces to create virtual Ethernet interfaces and moves one end of the network pipe into the container, the virtualized environment running inside the Docker container creates a bridge and attaches the virtual Ethernet interfaces created for each virtualized host.
I am able to ping the bridge IP created inside the Docker container, but pings do not go through to the IPs created inside the network namespaces for each virtual host.
This diagram shows the whole network stack:
Host:
  bridge: docker0 (172.17.0.1) ---> veth (name assigned by docker)
Container:
  eth3: 172.17.0.2 (renamed on purpose)
  bridge: eth2 (172.16.69.62) ---> eth0_host1 ----- eth1 (host1: 172.16.69.33) (namespace: net_host1)
                               ---> eth0_host2 ----- eth1 (host2: 172.16.69.34) (namespace: net_host2)
Now the problem: ping from the host machine to the bridge (172.16.69.62) works,
but it doesn't if I try to ping 172.16.69.33.
Below are the routing tables for docker container:
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth3
172.16.69.32 0.0.0.0 255.255.255.224 U 0 0 0 eth2
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3
For host:
0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 enp0s8
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
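One thing the two tables above make visible is which pasted routes actually cover the namespace address being pinged. The sketch below hard-codes the routes as shown (host default via enp0s8 included as 0.0.0.0/0) and simply reports containment; it is an illustration, not a full diagnosis:

```python
import ipaddress

# Routes copied from the tables above
container_routes = ["0.0.0.0/0", "172.16.69.32/27", "172.17.0.0/16"]
host_routes = ["0.0.0.0/0", "172.17.0.0/16", "192.168.0.0/24"]

def covering(addr, table):
    """Return the non-default routes whose destination contains addr."""
    ip = ipaddress.ip_address(addr)
    return [dst for dst in table
            if dst != "0.0.0.0/0" and ip in ipaddress.ip_network(dst)]

print(covering("172.16.69.33", container_routes))  # ['172.16.69.32/27']
print(covering("172.16.69.33", host_routes))       # []
```

The container has a connected route for 172.16.69.32/27 on eth2, while the host table has no specific entry for that subnet, so traffic from the host to 172.16.69.33 falls through to the default route rather than reaching the bridge inside the container.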
Proto Recv-Q Send-Q Local Address Foreign Address State PID
tcp 0 0 ip:11080 0.0:* LISTEN -
tcp 0 0 ip:5070 0.0:* LISTEN -
tcp 0 0 ip:5071 0.0:* LISTEN -
tcp 0 0 **127.0.0.1:5072** 0.0:* LISTEN -
tcp 0 0 ip:11443 0.0:* LISTEN -
tcp 0 0 **127.0.0.1:11444** 0.0:* LISTEN -
I am not able to access ports 11444 and 5072 externally; they only work on localhost, not remotely.
We are using Ubuntu on Google Compute Engine.
Firewall rules have been added.
Just checking - have you also configured the firewall? By default, ports may be blocked by the firewall. You can configure it to open ports via either the Developer Console or the gcloud command line tool.
Some extra information about firewalls on Google Compute Engine can be found at:
https://cloud.google.com/compute/docs/networking?hl=en#firewalls
As the netstat output shows, the services listening on ports 11444 and 5072 are bound to localhost (127.0.0.1), which means they only accept connections on the loopback interface. Change the binding IP address in your service configuration to 0.0.0.0.
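The difference between the two bind addresses can be demonstrated with a small sketch: bind a listener, then try to connect to it. Both calls below succeed because the client also connects via loopback; a 127.0.0.1-bound listener would refuse the same connection made to the machine's external IP, which is exactly the remote-access failure described:

```python
import socket
import threading

def reachable(bind_addr, connect_addr):
    """Bind a listener to bind_addr on a free port, then try connecting via connect_addr."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, 0))          # port 0: let the OS pick a free port
    port = srv.getsockname()[1]
    srv.listen(1)
    threading.Thread(target=srv.accept, daemon=True).start()
    try:
        with socket.create_connection((connect_addr, port), timeout=2):
            return True
    except OSError:
        return False
    finally:
        srv.close()

print(reachable("0.0.0.0", "127.0.0.1"))    # True: wildcard bind accepts on every interface
print(reachable("127.0.0.1", "127.0.0.1"))  # True via loopback, but only via loopback
```

Binding to 0.0.0.0 attaches the socket to all interfaces, so the same service becomes reachable on the VM's external address as well (firewall rules permitting).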