How Linux forwards packets to a k8s service [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I have installed k8s on a VM.
Now I have access to some k8s services from this VM, for example:
[root@vm ~]# netcat -vz 10.96.0.10 9153
kube-dns.kube-system.svc.cluster.local [10.96.0.10] 9153 open
10.96.0.10 is the ClusterIP of the kube-dns service.
My question is: how does Linux forward requests to 10.96.0.10 to the right destination?
I don't see any interface with the 10.96.0.10 address, nor any routing rule for 10.96.0.10, on the VM.
[root@vm ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:00:00:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.104/24 brd 192.168.4.255 scope global dynamic noprefixroute ens3
valid_lft 33899sec preferred_lft 28499sec
inet6 fe80::ca7d:cdfe:42a3:75f/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:cd:1d:8a:77 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.0.219.64/32 brd 10.0.219.64 scope global tunl0
valid_lft forever preferred_lft forever
7: calib9d0c90540c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
8: cali81206f5bf92@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
[root@vm ~]# ip route
default via 192.168.4.1 dev ens3 proto dhcp src 192.168.4.104 metric 202
10.0.189.64/26 via 192.168.4.107 dev tunl0 proto bird onlink
blackhole 10.0.219.64/26 proto bird
10.0.219.107 dev calib9d0c90540c scope link
10.0.219.108 dev cali81206f5bf92 scope link
10.0.235.128/26 via 192.168.4.105 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.4.0/24 dev ens3 proto dhcp scope link src 192.168.4.104 metric 202

kube-proxy (not kubelet) maintains iptables NAT rules that route traffic addressed to a Service to that Service's actual endpoints. So the Service IP is purely virtual: it never appears on any interface or in the routing table, and the destination address gets rewritten (DNAT) roughly round-robin across all endpoints of the Service.
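To see this mechanism on the node, you can dump the NAT table and filter for the ClusterIP. The snippet below works against a hand-written sketch of the rules kube-proxy's iptables mode installs; the KUBE-SVC/KUBE-SEP chain suffixes and the endpoint address are illustrative, not captured from this cluster:

```shell
# On the VM you would inspect the live rules with:
#   sudo iptables -t nat -S | grep 10.96.0.10
# Sketch of the kind of rules kube-proxy installs (chain-name suffixes
# and the endpoint IP are made up for illustration):
rules='-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m tcp --dport 9153 -j KUBE-SVC-EXAMPLE
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-EXAMPLE
-A KUBE-SEP-EXAMPLE -p tcp -m tcp -j DNAT --to-destination 10.0.219.107:9153'
# The ClusterIP exists only as a DNAT match here -- which is why no
# interface or route for 10.96.0.10 shows up on the host:
printf '%s\n' "$rules" | grep DNAT
```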

Related

Host can't ping VM Guest

I'm unable to ping my guest VM from my host.
Host: Win7
VirtualBox 5.2.6
Guest: CentOS 7
On VB I have my guest CentOS 7 network configured with 2 adapters.
Adapter 1 Attached to NAT
Adapter 2 Attached to Host-only Adapter (VirtualBox Host-Only Ethernet Adapter)
On CentOS 7
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:e3:8c:8f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 84885sec preferred_lft 84885sec
inet6 fe80::db4c:7b23:6a34:d832/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:0b:4f:5d brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:99:20:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:99:20:1b brd ff:ff:ff:ff:ff:ff
I think the virbr0 192.168.122.1 is weird. I would have expected 192.168.56.XXX.
ifcfg-enp0s3
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=d36ec069-6fa5-429f-a89b-debf35c2346e
DEVICE=enp0s3
ONBOOT=yes
From my Win 7 host I ping 192.168.122.1 and get a connection timeout. Does anyone know what I'm missing?
virbr0 is a virtual network adapter created by CentOS; it exists because of the installation type you chose (the workstation option). You should bring this adapter down. Also, judging by your adapter order, the host-only adapter is enp0s8 (adapter 2), which currently has no IP assigned; bring up enp0s8 and give it an address (e.g. a static 192.168.56.x) to get connected.

Network namespace and bridging

Hello everyone, I am an occasional Linux user, but I have a project to do and I need some help with bridging :)
I have tried Google, but couldn't solve the problem.
My task is to create a network namespace, so that it can be used to perform some other tasks from it.
Debian 8.2 is used in a VMWare virtual machine on Windows 7. I have also tried the same things on a Raspberry Pi 2, but the same problems appear.
First, I followed the tutorial https://lwn.net/Articles/580893/ to create a pair of virtual Ethernet interfaces.
So now I have veth0 in the global namespace with IP address 10.1.1.2/24, and veth1 in the netns1 namespace with IP address 10.1.1.1/24.
Next, I followed the tutorial http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge to bridge eth0 and veth0, so that I can access the internet from the netns1 namespace:
First, I deleted the IP addresses of both the eth0 and veth0 interfaces, and set them to the DOWN state.
A new bridge was created (br0) and both interfaces (eth0 and veth0) were added to it.
Then both interfaces were set to the UP state, and I ran "dhclient br0" to assign an IP address to br0.
From the global namespace it is now possible to run "ping google.com", but from the netns1 namespace I get the error "Network is unreachable".
(I suppose there is a problem with routes; I have tried adding some default routes to the netns1 namespace, but no luck. My networking knowledge is modest, so I'm asking for help.)
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 00:0c:29:45:b6:1d brd ff:ff:ff:ff:ff:ff
4: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 86:e4:6c:02:b6:79 brd ff:ff:ff:ff:ff:ff
inet6 fe80::84e4:6cff:fe02:b679/64 scope link
valid_lft forever preferred_lft forever
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 00:0c:29:45:b6:1d brd ff:ff:ff:ff:ff:ff
inet 192.168.178.135/24 brd 192.168.178.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe45:b61d/64 scope link
valid_lft forever preferred_lft forever
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.178.2 0.0.0.0 UG 0 0 0 br0
default 192.168.178.2 0.0.0.0 UG 1024 0 0 br0
192.168.178.0 * 255.255.255.0 U 0 0 0 br0
$ ip netns exec netns1 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether ee:b8:f3:47:f7:0c brd ff:ff:ff:ff:ff:ff
inet 10.1.1.1/24 brd 10.1.1.255 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::ecb8:f3ff:fe47:f70c/64 scope link
valid_lft forever preferred_lft forever
$ ip netns exec netns1 route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.1.1.0 * 255.255.255.0 U 0 0 0 veth1
$ brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000c2945b61d no eth0
veth0
Thanks in advance for help :)
I have found a solution.
Basically, IP forwarding was missing, along with 2 more steps (I had tried them before, but because IP forwarding wasn't enabled, it wasn't working).
Here are the steps for future readers (after getting the bridge to work in the global namespace):
1. Assign an IP address to veth0 in the global namespace (10.1.1.2), because that address was deleted before creating the bridge (the bridge tutorial says: "The IP address needs to be set after the bridge has been configured.")
2. Set the default gateway in the netns1 namespace to veth0's address in the global namespace: "ip netns exec netns1 route add default gw 10.1.1.2"
3. Enable IP forwarding: "echo 1 > /proc/sys/net/ipv4/ip_forward"
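Put together, the three steps above amount to something like the following recipe (to be run as root in the global namespace; the veth0/netns1 names and 10.1.1.x addresses are the ones from the question):

```shell
# 1. Re-assign the address to veth0; it was cleared when the bridge was built
ip addr add 10.1.1.2/24 dev veth0
# 2. Point netns1's default route at veth0's address in the global namespace
ip netns exec netns1 route add default gw 10.1.1.2
# 3. Enable IPv4 forwarding so the host routes between veth1 and br0
echo 1 > /proc/sys/net/ipv4/ip_forward
```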

telnet 127.0.0.1 peer is not 127.0.0.1

I installed CentOS 7 on my machine, but there's a problem, maybe with routing.
Whenever I telnet to the loopback address 127.0.0.1, its peer IP is always 192.168.9.91, while I expected 127.0.0.1 (using telnet, for example):
telnet 127.0.0.1 80
ss -an | grep ':80\s'
tcp ESTAB 0 0 127.0.0.1:65485 127.0.0.1:80
tcp SYN-RECV 0 0 ::ffff:127.0.0.1:80 ::ffff:192.168.9.91:65485
Could anybody suggest a reason why my machine encounters this problem? I'm an absolute networking noob; pointers to documentation are welcome :)
My machine has 4 ports: 2 public ports, and 2 local ports going to 2 separate local switches.
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether blabla brd ff:ff:ff:ff:ff:ff
inet 192.168.9.91/24 brd 192.168.9.255 scope global enp9s0
valid_lft forever preferred_lft forever
3: enp10s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether blabla brd ff:ff:ff:ff:ff:ff
inet x.x.210.99/26 brd x.x.210.127 scope global enp10s0
valid_lft forever preferred_lft forever
4: enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether blabla brd ff:ff:ff:ff:ff:ff
inet x.x.211.144/26 brd x.x.211.191 scope global enp4s0f0
valid_lft forever preferred_lft forever
5: enp4s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether blabla brd ff:ff:ff:ff:ff:ff
inet 192.168.10.110/24 brd 192.168.10.255 scope global enp4s0f1
valid_lft forever preferred_lft forever
I've configured routing with only 2 public IPs, not 2 local IPs.
Thanks for reading
Problem detected.
It was a masquerade problem. After I turned off masquerading on my public zone, it works normally.
If someone knows the reason why, please tell me.
Thanks
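For reference, if the zone is managed by firewalld (the CentOS 7 default), masquerading can be checked and turned off like this; the zone name "public" is assumed from the answer above:

```shell
# Check whether masquerading is currently enabled for the zone
firewall-cmd --zone=public --query-masquerade
# Turn it off at runtime, and also in the permanent configuration
firewall-cmd --zone=public --remove-masquerade
firewall-cmd --zone=public --permanent --remove-masquerade
```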

Problems with vagrant, centos, dhclient

I have a problem, where the centos dhclient script keeps overwriting my /etc/resolv.conf file.
I'm using a centos7 image with vagrant 1.7.2
I have a pretty simple setup, here is my VagrantFile
config.vm.define "puppetmaster" do |pm|
## Map the local puppet configuration to the puppetmaster
pm.vm.synced_folder "./puppetmaster", "/etc/puppet"
pm.vm.provision "puppet" do |puppet|
puppet.manifests_path = ["vm", "/etc/puppet/manifests"]
puppet.manifest_file = "site.pp"
end
pm.vm.box = "puppetlabs/centos-7.0-64-puppet"
pm.vm.network "private_network", ip: "192.168.2.2"
## Enable the GUI
pm.vm.provider :virtualbox do |v|
v.gui = true
v.name = "mattlab-puppetmaster"
v.customize ["modifyvm", :id, "--memory",2048]
v.customize ["modifyvm", :id, "--cpus",4]
end
end
As you can see, I only have a single network interface configured, and it has some static settings.
I know Vagrant also has an internal interface, which it uses for communication. This seems to pick up an IP using DHCP (although I'm not certain where this comes from).
I know that having a single DHCP interface will trigger the dhclient script to overwrite resolv.conf.
As this box will be a puppetmaster and DNS server, I need to find a way to disable the Vagrant DHCP interface in a way that will:
My interfaces look like:
[root@puppetmaster dhcp]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:19:cd:16 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 81995sec preferred_lft 81995sec
inet6 fe80::a00:27ff:fe19:cd16/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:85:18:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe85:1883/64 scope link
valid_lft forever preferred_lft forever
- still allow me to connect via the command line with vagrant ssh
- allow Vagrant to use my static 192.168.2.2 address instead of the DHCP address
Does this sound possible?
Many thanks in advance.
OK, answered my own question.
When dhclient-script generates a new /etc/resolv.conf, it looks for a function called make_resolv_conf().
By putting the following in /etc/dhcp/dhclient-enter-hooks and making it executable, dhclient stops overwriting the file:
make_resolv_conf() {
    # Do not overwrite /etc/resolv.conf.
    return 0
}
Thanks
Matt
I resolved this on CentOS 7. I hope it helps you.
In ifcfg-eth0 (in my case) PEERDNS="yes" was set. Changing it to PEERDNS="no" in the file alone did not resolve the problem. But
PEERDNS="no"
export PEERDNS
ifup ifcfg-eth0
did work! I suspect it is a bug in dhclient-script or its caller.

How to check which route (interface) a destination IP address belongs to, without sending anything?

How can I look up an IP address in the routing table (and retrieve the network interface index and MAC) without sending any packets to it?
You could use the ip command:
# ip route get to 74.125.228.197
74.125.228.197 via 192.168.1.1 dev eth0 src 192.168.1.100
And once you know the interface, you can get its index and link/ether (MAC) address:
# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
link/ether 88:00:00:00:f2:4c brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
inet6 fe80::3039:34ff:fe2c:f24c/64 scope link
valid_lft forever preferred_lft forever
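If you only need the interface index and MAC address, they are also exposed under sysfs, so they can be read without sending anything at all (shown here for lo, which always exists; substitute the device name printed by `ip route get`):

```shell
# Read the interface index and link-layer address straight from sysfs;
# nothing is transmitted on the wire.
dev=lo
cat "/sys/class/net/$dev/ifindex"    # interface index (1 for loopback)
cat "/sys/class/net/$dev/address"    # MAC (all zeros for loopback)
```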
