How do I make my Docker container communicate with another node through a 2nd interface?

I am struggling to perform a pathetically simple test that involves communication between a server on the sandbox01 network and a Docker container running on my "Docker Host" server. The Docker host sits on the same subnet as the other nodes in the sandbox01 network: it has an interface called ens34 with an address in the 10.* range. It also has an eth0 interface on the 9.* network, which gives it access to the outside world (downloading packages, Docker images, and so on).
Anyway, here is a little diagram to illustrate what I have (a rough sketch of the topology described above):
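someserver.sandbox01 (10.1.21.59)
        |
  sandbox01 network (10.1.21.0/24)
        |
  ens34 (10.1.21.18)
  mydocker2 (Docker host) ---- eth0 (9.*, outside world)
  docker0 (172.17.0.1)
        |
     vethXXX
        |
  container (172.17.0.0/16)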
The problem:
Cannot communicate between a node in sandbox01 subnet (10.* network) and the container.
e.g., someserver.sandbox01 → mydocker2 : ens34 :: docker0 :: vethXXX → container
The communication only works when I stop iptables, which makes things really mysterious! Just wondering if you have faced any similar issues; any ideas would be extremely appreciated.
The mystery:
After many tests, it was confirmed that the container can't communicate with any node in the 10.* network. It doesn't behave as expected: replies were supposed to leave through its gateway, docker0 (172.17.0.1), and follow the routing table on the Docker host to reach "someserver.sandbox01" (10.1.21.59).
It only works when we let it process the MASQUERADE rule in iptables. However, Docker automatically adds this rule: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -c 0 0 -j MASQUERADE
Note the "! -o docker0" there: so Docker only masquerades container traffic that leaves through other interfaces, never traffic heading back into docker0? This is messing up the communication somehow...
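(If you want to double-check what Docker installed, the rule can be listed directly; the "-c 0 0" above is just the packet/byte counters as printed by iptables-save -c:)

iptables -t nat -L POSTROUTING -n -v
iptables-save -t nat -c | grep MASQUERADE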
The container responds fine to any communication coming through the 9.* network (eth0) -- i.e., I can send requests from my laptop -- but never through the 10.* network (ens34). If I run a terminal inside the container, the container can ping ALL the IP addresses reachable through the mapped routes EXCEPT the addresses in the 10.* range. Why?
[root@mydocker2 my-nc-server]# docker run -it -p 8080:8080 --name nc-server nc-server /bin/sh
sh-4.2# ping 9.83.90.55
PING 9.83.90.55 (9.83.90.55) 56(84) bytes of data.
64 bytes from 9.83.90.55: icmp_seq=1 ttl=117 time=124 ms
64 bytes from 9.83.90.55: icmp_seq=2 ttl=117 time=170 ms
^C
--- 9.83.90.55 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 124.422/147.465/170.509/23.046 ms
sh-4.2# ping 9.32.145.98
PING 9.32.145.98 (9.32.145.98) 56(84) bytes of data.
64 bytes from 9.32.145.98: icmp_seq=1 ttl=63 time=1.37 ms
64 bytes from 9.32.145.98: icmp_seq=2 ttl=63 time=0.837 ms
^C
--- 9.32.145.98 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.837/1.104/1.372/0.269 ms
sh-4.2# ping 10.1.21.5
PING 10.1.21.5 (10.1.21.5) 56(84) bytes of data.
^C
--- 10.1.21.5 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
sh-4.2# ping 10.1.21.60
PING 10.1.21.60 (10.1.21.60) 56(84) bytes of data.
^C
--- 10.1.21.60 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms
For some reason, this interface here doesn't play well with Docker:
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.21.18 netmask 255.255.255.0 broadcast 10.1.21.255
Could this be related to the fact that eth0 is the primary NIC for this Docker host?
The workaround:
On mydocker2, we need to stop iptables and add a new sub-interface under ens34:
service iptables stop
ifconfig ens34:0 10.171.171.171 netmask 255.255.255.0
And on someserver.sandbox01, we need to add a new route:
route add -net 10.171.171.0 netmask 255.255.255.0 gw 10.1.21.18
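For reference, the same workaround with the newer ip tool would look like this (a sketch reusing the addresses above):

ip addr add 10.171.171.171/24 dev ens34 label ens34:0   # on mydocker2
ip route add 10.171.171.0/24 via 10.1.21.18             # on someserver.sandbox01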
Then the communication between them works. I know... bizarre, right?
In case any of you wants to ask: no, I don't want to use the "--net=host" option to replicate the interfaces from the Docker host into my container.
So, thoughts? Suggestions? Ideas?

SOLVED!!!
Inside /etc/sysconfig/network-scripts, there were 2 files:
route-ens34 and rule-ens34.
If you remove those and restart the network, it should start working.
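Something like this, assuming RHEL/CentOS-style network scripts as on this host (moving the files aside rather than deleting them, to be safe):

mkdir -p /root/netscripts-backup
mv /etc/sysconfig/network-scripts/route-ens34 /etc/sysconfig/network-scripts/rule-ens34 /root/netscripts-backup/
systemctl restart network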
Cheers!

Related

How can I determine which is my IP address on a different network?

I'm trying to launch an SNMP query from a pod deployed in an Azure cloud to an internal host on my company's network. The snmpget queries work well from the pod to, say, a public SNMP server, but the query to my target host results in:
root@status-tanner-api-86557c6786-wpvdx:/home/status-tanner-api/poller# snmpget -c public -v 2c 192.168.118.23 1.3.6.1.2.1.1.1.0
Timeout: No Response from 192.168.118.23.
An nmap scan shows that the SNMP port is open|filtered:
Nmap scan report for 192.168.118.23
Host is up (0.16s latency).
PORT STATE SERVICE
161/udp open|filtered snmp
I requested a new rule to allow 161/UDP from my pod, but I suspect that I requested the rule for the wrong IP address.
My theory is that I could determine which IP address my pod uses to reach this target host if I could get inside the target host, open a connection from the pod, and use netstat to see which address the connection comes from. The problem is that I currently have no access to this host.
So, my question is: how can I see from which address my pod is reaching the target host? Some sort of public address is obviously being used, but I can't tell which one it is without entering the target host.
I'm pretty sure I'm missing an important network tool that should help me in this situation. Any suggestion would be profoundly appreciated.
By default, Kubernetes will use your node IP to reach other servers, so you need to make a firewall rule using your node IP.
I've tested this using a busybox pod to reach another server on my network.
Here is my lab-1 node, with IP 10.128.0.62:
rabello@lab-1:~$ ip ad | grep ens4 | grep inet
inet 10.128.0.62/32 scope global dynamic ens4
On this node I have a busybox pod with the IP 192.168.251.219:
$ kubectl exec -it busybox sh
/ # ip ad | grep eth0 | grep inet
inet 192.168.251.219/32 scope global eth0
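(As an aside, a quick way to see both a pod's IP and the node it is scheduled on is the wide output of kubectl; using the same busybox pod:)

kubectl get pod busybox -o wide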
When we perform a ping test to another server on the network (server-1), we have:
/ # ping 10.128.0.61
PING 10.128.0.61 (10.128.0.61): 56 data bytes
64 bytes from 10.128.0.61: seq=0 ttl=63 time=1.478 ms
64 bytes from 10.128.0.61: seq=1 ttl=63 time=0.337 ms
^C
--- 10.128.0.61 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.337/0.907/1.478 ms
Using tcpdump on server-1, we can see the ping requests from my pod arriving with the node IP from lab-1:
rabello@server-1:~$ sudo tcpdump -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:16:09.291714 IP 10.128.0.62 > 10.128.0.61: ICMP echo request, id 6230, seq 0, length 64
10:16:09.291775 IP 10.128.0.61 > 10.128.0.62: ICMP echo reply, id 6230, seq 0, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
Make sure you have an appropriate firewall rule to allow your node (or your VPC range) to reach your destination, and check that your VPN is up (if you have one).
I hope it helps! =)

GRE tunnel problem between two hosts (VPS and dedicated server)

Hello guys, I need to resolve this problem (all servers have CentOS 7 installed): I'm trying to create a GRE tunnel between a VPS (in Italy, OpenVZ) and a dedicated server (in Germany), but they do not communicate internally (ping and ssh command tests). Next, I created a GRE tunnel between the VPS in Italy (OpenVZ) and a VPS in France (KVM OpenStack), and they communicate; then I created a tunnel between the VPS in France (KVM OpenStack) and the dedicated server in Germany, and they communicate as well. I cannot understand why the VPS in Italy (OpenVZ) and the dedicated server in Germany do not communicate. Any ideas on how I can fix this? (I also tried with iptables disabled; firewalld is not enabled.) Thanks
In other words:
Working attempts (by this I mean that I managed to successfully create the GRE tunnel between these machines):
The VPS (in France) and VPS (in Italy) communicate internally (ping and ssh command tests)
The VPS (in France) and Dedicated Server (in Germany) communicate internally (ping and ssh command tests)
Problem (by this I mean that I could not successfully create the GRE tunnel between these machines):
The VPS (in Italy) and Dedicated Server (in Germany) do not communicate internally (ping and ssh command tests). I also asked the hosting providers whether they impose any restrictions, but found nothing.
My configuration:
VPS command for tunnel:
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
iptunnel add gre1 mode gre local VPS_IP remote DEDICATED_SERVER_IP ttl 255
ip addr add 192.168.168.1/30 dev gre1
ip link set gre1 up
Dedicated server command for tunnel:
iptunnel add gre1 mode gre local DEDICATED_SERVER_IP remote VPS_IP ttl 255
ip addr add 192.168.168.2/30 dev gre1
ip link set gre1 up
[root@VPS ~]# ping 192.168.168.2
PING 192.168.168.2 (192.168.168.2) 56(84) bytes of data.
^C
--- 192.168.168.2 ping statistics ---
89 packets transmitted, 0 received, 100% packet loss, time 87999ms
[root@DE ~]# ping 192.168.168.1
PING 192.168.168.1 (192.168.168.1) 56(84) bytes of data.
^C
--- 192.168.168.1 ping statistics ---
92 packets transmitted, 0 received, 100% packet loss, time 91001ms
[root@VPS ~]# tcpdump -i venet0 "proto gre"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on venet0, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
^C
0 packets captured
1 packet received by filter
0 packets dropped by kernel
[root@DE ~]# tcpdump -i enp2s0 "proto gre"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp2s0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@VPS ~]# lsmod | grep gre
ip_gre 4242 -2
ip_tunnel 4242 -2 sit,ip_gre
gre 4242 -2 ip_gre
[root@DE ~]# lsmod | grep gre
ip_gre 22707 0
ip_tunnel 25163 1 ip_gre
gre 13144 1 ip_gre
Console image with full command output
If IP forwarding is required for the tunnel to work, note that echoing the setting into /etc/sysctl.conf does not apply it by itself; you need to run /sbin/sysctl -p afterwards.
Also, what is the output of ip tunnel show and ip route show on both ends?
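A minimal check on each end could look like this (a sketch, reusing the gre1 device and the 192.168.168.0/30 addresses from the question):

/sbin/sysctl -p                     # apply net.ipv4.ip_forward=1 from /etc/sysctl.conf
sysctl net.ipv4.ip_forward          # should print 1
ip tunnel show                      # gre1 should list the correct local/remote public IPs
ip route show | grep 192.168.168    # the /30 should be routed over gre1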

mosquitto subscribe to test broker: Network is unreachable

I'm trying to subscribe to http://test.mosquitto.org/ with the following command:
mosquitto_sub -h test.mosquitto.org -p 1883 -t "#" -v
When doing so, it first says nothing, and after a few minutes it prints Error: Network is unreachable. To make sure, I also tried to subscribe to https://iot.eclipse.org/ and tried using the IP instead of the DNS name of the broker.
Does anybody know how I can subscribe to the broker?
EDIT: I can ping test.mosquitto.org
mosquitto_sub -h test.mosquitto.org -p 1883 -t "#" -v
When doing so, it first says nothing and after a few minutes it prints
Error: Network is unreachable
Seems like DNS isn't being resolved. To check that DNS resolves, a simple first test would be to do a ping:
ping test.mosquitto.org
If it's reachable, it should print out the following:
ping test.mosquitto.org
PING test.mosquitto.org (37.187.106.16) 56(84) bytes of data.
64 bytes from ks.ral.me (37.187.106.16): icmp_seq=1 ttl=54 time=15.4 ms
64 bytes from ks.ral.me (37.187.106.16): icmp_seq=2 ttl=54 time=15.3 ms
^C
--- test.mosquitto.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 15.311/15.393/15.476/0.148 ms
Once DNS is resolved, the following should work without any errors:
mosquitto_sub -h test.mosquitto.org -p 1883 -t "#" -v
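Given the EDIT above (ping works, so DNS resolution is evidently fine), it may also be worth checking whether outbound TCP to port 1883 is blocked; a quick sketch with netcat, assuming it is installed:

nc -vz test.mosquitto.org 1883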

Squid proxy client setup on a Linux machine

What is the right way to set up the Squid proxy client on a Linux machine? I followed the documentation and set the export variable as follows:
bash $ export http_proxy="http://10.20.5.48:3128"
bash $ ping google.com
PING google.com (74.125.228.197) 56(84) bytes of data.
^C
--- google.com ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 922ms
bash $ export http_proxy="http://10.20.5.48:3128/"
bash $ ping google.com
PING google.com (173.194.123.110) 56(84) bytes of data.
bash $ export HTTP_PROXY="http://10.20.5.48:3128"
bash $ ping google.com
PING google.com (74.125.228.196) 56(84) bytes of data.
^C
--- google.com ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1086ms
bash $ export HTTP_PROXY="http://10.20.5.48:3128/"
bash $ ping google.com
PING google.com (74.125.228.195) 56(84) bytes of data.
^C
--- google.com ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1160ms
The Squid server is running on port 3128 and is reachable, and there is no firewall or ACL issue with squid.conf either:
bash $ telnet 10.20.5.48 3128
Trying 10.20.5.48...
Connected to 10.20.5.48.
Escape character is '^]'.
When I change yum.conf to use the proxy with the same server and port, yum works.
Ping does not use the HTTP proxy: ping speaks ICMP, and the http_proxy variable only affects HTTP-aware programs.
Try
GET / HTTP/1.1
Host: google.com
in your telnet session.
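To test the proxy with a client that actually speaks HTTP, curl is handy; it honors the lowercase http_proxy variable, or you can pass the proxy explicitly with -x (assuming curl is installed):

curl -v http://google.com
curl -v -x http://10.20.5.48:3128 http://google.com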

How can I check whether a subnet IPv4 address is unused while IPv4 is switched off?

I've found sources that suggest how I can find whether an IP address is available using ping, arping and nmap, but all of these solutions fail when IPv4 is switched off. I'd like to find a way of automatically detecting whether an IPv4 address is available before assigning it to a new machine that does not have an IPv4 address. For example,
$ sudo arping 192.168.2.205
ARPING 192.168.2.205
60 bytes from 00:50:56:91:a5:0d (192.168.2.205): index=0 time=730.931 msec
60 bytes from 00:50:56:91:a5:0d (192.168.2.205): index=1 time=362.976 msec
60 bytes from 00:50:56:91:a5:0d (192.168.2.205): index=2 time=730.205 msec
^C
--- 192.168.2.205 statistics ---
4 packets transmitted, 3 packets received, 25% unanswered (0 extra)
$ sudo ifconfig eth0 0.0.0.0
$ sudo arping 192.168.2.205
arping: Unable to automatically find interface to use. Is it on the local LAN?
arping: Use -i to manually specify interface. Guessing interface eth0.
arping: Unable to get the IPv4 address of interface eth0:
arping: libnet_get_ipaddr4(): ioctl(): Cannot assign requested address
arping: Use -S to specify address manually.
Is there a way to achieve this?
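One possible avenue, following the hints in arping's own error output: specify the interface with -i and a source address with -S. An all-zeros sender is what ARP probes use precisely so that a host with no IPv4 address can still ask about one (a sketch, untested):

sudo arping -i eth0 -S 0.0.0.0 192.168.2.205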
