Why do I see duplicate API calls when using the Python requests library deployed in a container, one from the container IP and one from the host IP? - python-3.x

I have a Python application deployed in a Docker container. For each request I send to the external API server (a REST API), I can see two HTTP requests in tcpdump: one from the container IP and another from the host IP.
I tried to check whether the program is triggering two requests, but only one is found in the logs, and exactly one request is logged for every trigger.
The tcpdump capture was taken with:
tcpdump -i any -nn host 10.222.xx.yy

Rather than seeing two separate requests, I suspect you're seeing the same request as it gets routed through your host.
If on my host I run:
tcpdump -i any -nn host stackoverflow.com
And then I spin up a container and run curl stackoverflow.com, I will see three results for every packet in the exchange (note that I've performed some manual alignment here to make things a bit easier to read):
07:25:32.969736 veth7b48792 P IP 172.17.0.2.33710 > 151.101.1.69.80: ...
07:25:32.969741 docker0 In IP 172.17.0.2.33710 > 151.101.1.69.80: ...
07:25:32.969747 eth0 Out IP 192.168.1.200.33710 > 151.101.1.69.80: ...
^
|
+-- this column is the interface name
The first result shows the packet exiting the container (recall that a veth virtual wire connects eth0 inside the container to a device -- in this case veth7b48792 -- on the host, which is what gets added to the docker bridge).
The second result shows the packet traversing the docker bridge.
The third result shows the packet leaving the host's primary interface (after address translation in the nat table).
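If you want to confirm which veth device belongs to a given container, one way (illustrative only; the container name app and the index value 7 are placeholders) is to read the peer index of the container's eth0 and match it against the host's interface list:
docker exec app cat /sys/class/net/eth0/iflink   # prints the host-side peer's ifindex, e.g. 7
ip -o link | grep '^7:'                          # shows the matching vethXXXXXXX attached to docker0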
If you limit tcpdump to a single interface, you should only see a single match for each packet in the request. E.g., if I were to run tcpdump -i eth0 -nn host stackoverflow.com, I would see only:
07:46:33.505078 IP 192.168.1.200.50194 > 151.101.65.69.80: ...

Related

Packet capturing in l3fwd

I am performing a DPDK experiment. In my setup, I have two physical machines, Host1 and Host2, with two 10 Gbps NICs on each. One interface of Host1 is bound to DPDK and generates traffic using pktgen. Both interfaces of Host2 are bound to DPDK, and l3fwd is running as the packet forwarding application. The second NIC of Host2 is used to capture the packets. I want to break down the delay experienced by a packet by seeing the time spent at each interface of Host2.
Is there any way to capture packets on DPDK interfaces while using l3fwd as the packet forwarding application?
For DPDK interfaces you can make use of DPDK pdump capture to get packets from a DPDK-bound NIC. Refer to https://doc.dpdk.org/guides-16.07/sample_app_ug/pdump.html.
The l3fwd application has to be modified with an rte_pdump_init API call right after rte_eal_init. This enables the multi-process communication channel, so that when the dpdk-pdump (secondary) application is run, an rte_ring and packet copying are enabled to copy the contents over.
Note: please check the DPDK pdump app documentation for usage. For example, to copy packets from port 0 and queue 1, use sudo ./[path to application]/dpdk-pdump -- --pdump 'port=0,queue=1,rx-dev=/tmp/port0_queue1.pcap'
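For reference, the typical run order is to start the modified l3fwd as the DPDK primary process and only then attach dpdk-pdump as the secondary process. The l3fwd core list, port mask and queue config below are illustrative placeholders, not values from the question:
sudo ./examples/l3fwd/build/l3fwd -l 1,2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"
sudo ./[path to application]/dpdk-pdump -- --pdump 'port=0,queue=1,rx-dev=/tmp/port0_queue1.pcap'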
pdump is a good tool to capture packets on any port bound to DPDK. Launch the pdump tool as follows:
sudo ./build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
and after the packets are captured, run the following command in the /tmp directory to view them:
tcpdump -nr ./capture.pcap

Tunnel dynamic UDP port range

Usually I prefer finding a solution on my own, but unfortunately that didn't work out too well this time so I'm asking here.
I'm trying to deal with a server (rather, a computer with no screen and Debian minimal on it) which is on the usual home network. The problem is that the ISP is running out of IPv4 addresses and therefore decided to use IPv6 instead, with Dual-Stack Lite to access the IPv4 side of the internet. This means the computer is not accessible over an IPv4 address from the outside, but it can still connect out to IPv4 hosts.
I do have a vserver (Debian as well) which still uses only IPv4, so my plan was to use it as some kind of relay or proxy. The problem there is that I am not able to use iptables to configure NAT, since the server provider has removed that module from the kernel.
My first attempt was to use an SSH tunnel like this:
ssh -f user@vserver -R 2222:localhost:22 -N
This allows me to access the CLI over SSH which now listens on port 2222.
Next step was to open a second SSH tunnel and tunnel UDP traffic through that using socat:
homeserver:~# socat tcp4-listen:${tcpport of second tunnel},reuseaddr,fork udp:localhost:${udpport to forward traffic from}
vserver:~# socat -T15 udp4-recvfrom:${udpport to forward traffic to},reuseaddr,fork tcp:localhost:${tcpport of second tunnel}
This does work; however, once the client application connects to the UDP port, the server application tries to continue the communication on a different, new port from the dynamic port range (the ephemeral port range, I think). That random port of course is not being forwarded, since socat is not listening on it.
The second attempt also involved an SSH tunnel, only a dynamic one this time (basically a socks proxy).
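A dynamic tunnel of that kind would be opened with something along these lines (the local port 1080 matches the --socks-server-addr used further down; user and host names are placeholders):
ssh -f -N -D 1080 user@vserver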
I was trying to setup a virtual network device to route all the traffic through the socks proxy:
(As described in man pages from badvpn-tun2socks)
homeserver:~# openvpn --mktun --dev tun0 --user <someuser> #create tun0 device
homeserver:~# ifconfig tun0 10.0.0.1 netmask 255.255.255.0 #configure it
homeserver:~# route add <IP_vserver> gw <IP_of_original_gateway> metric #except the traffic to the vserver itself
homeserver:~# route add default gw 10.0.0.2 metric 6 #route all other traffic through tun0
homeserver:~# badvpn-tun2socks --tundev tun0 --netif-ipaddr 10.0.0.2 --netif-netmask 255.255.255.0 --socks-server-addr 127.0.0.1:1080 \
--udpgw-remote-server-addr 127.0.0.1:7300
This needs the SSH SOCKS proxies, since UDP needs to be handled separately.
On the vserver side of things this needs to be handled as well:
vserver:~# badvpn-udpgw --listen-addr 127.0.0.1:7300
The connection between both is successful, but this time the homeserver is not accessible at all (it seems to me like the vserver has no clue what to do with the packets).
I hope there is a simple fix to either of my attempts, but as it stands now I think my whole approach is fundamentally flawed and I'm starting to run out of ideas.
Any help would be appreciated. Thanks in advance!

Assigning a machine IP under linux

I have a machine with 3 interfaces, two public and one local. They are all from the same IP range. All externally accessible services on the machine use the internal IP.
However, when a process opens a connection to the outside world, the source address in the packets is set to that of the public interface which will be used to send the packets out.
Is it possible under Linux to force all source addresses to be that of a particular interface, even if the packet will then be routed through another interface?
Say we have a machine with 3 interfaces, A.B.C.1, A.B.C.2, A.B.C.3. Of these, A.B.C.1 and A.B.C.2 are connected to the Internet (and A.B.C.0/24 is routed to them). All services on the machine listen on A.B.C.3. Is it possible to guarantee that all packets originating on the machine will have the source address of A.B.C.3, even if they leave the machine via A.B.C.1 or A.B.C.2?
Specifying the source address when the socket for the outgoing connection is opened is not a solution; we're talking about existing programs which can not be changed. Also, it should work for ICMP as well.
Thanks.
This can be achieved with iptables:
sudo iptables -t nat -A POSTROUTING -j SNAT --to-source A.B.C.3
If the iptables service is not running, it can be activated via the following command:
sudo service iptables restart
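If rewriting every locally generated packet is too broad, the rule can presumably be narrowed to the two public interfaces (ethA and ethB are placeholders for their real names):
sudo iptables -t nat -A POSTROUTING -o ethA -j SNAT --to-source A.B.C.3
sudo iptables -t nat -A POSTROUTING -o ethB -j SNAT --to-source A.B.C.3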

tcpdump returns 0 packets captured, received and dropped

I am currently trying to debug a networking problem that has been plaguing me for almost three weeks. I'm working with OpenStack and can create virtual machines and networks fine, but cannot connect to them at all. When I run this command from the server, I have to Ctrl+C to stop the time-out, and it returns:
[root@xxxxxx ~(keystone_admin)]# tcpdump -i any -n -v 'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
I'm not sure if this is exclusively an OpenStack problem or just a networking problem in general, but I know that tcpdump is supposed to return something other than 0 packets captured, received, or dropped. I am new to networking and therefore do not have much experience, so please be gentle. Any help is appreciated. Thanks.
tcpdump is the right tool to dump IP packets. But if your OpenStack security group rules block ICMP, 0 ICMP packets are expected.
I just want to understand what you mean by "cannot connect to the virtual machines at all". Does the ping command not work, or other protocols like SSH or HTTP?
Generally the first common problem when connecting to an OpenStack VM is the security group rules; the default group disallows the ICMP protocol. You can run the following commands to see the rules:
nova secgroup-list: it usually returns a default group
nova secgroup-rules-list default: it shows the defined rules, where there must be at least one rule allowing the ICMP protocol.
The official documentation explains how to add rules allowing ICMP and SSH.
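On older releases that still use the legacy nova client, the rules can typically be added like this (shown purely as an illustration of the syntax):
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0   # allow ping from anywhere
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0    # allow SSH from anywhere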

using netcat for external loop-back test between two ports

I am writing a test script to exercise processor boards for a burn-in cycle during manufacturing. I would like to use netcat to transfer files from one process, out one Ethernet port and back into another Ethernet port to a receiving process. It looks like netcat would be an easy tool to use for this.
The problem is that if I set up the Ethernet ports with IP addresses on separate IP subnets and attempt to transfer data from one to the other, the kernel's protocol stack detects an internal route, and although the data transfer completes as expected, it does NOT go out over the wire. The packets are routed internally.
That's great for network optimization but it foils the test I want to do.
Is there an easy way to make this work? Is there a trick with iptables that would work? Or maybe something you can do to the routing table?
I use network name spaces to do this sort of thing. With each of the adapters in a different namespace the data traffic definitely goes through the wire instead of reflecting in the network stack. The separate namespaces also prevent reverse packet filters and such from getting in the way.
So presume eth0 and eth1, with iperf3 as the reflecting agent (ping server or whatever). [DISCLAIMER: text from memory, all typos are typos, YMMV]
ip netns add target
ip link set dev eth1 netns target
ip netns exec target ip address add dev eth1 xxx.xxx.xxx.xxx/y
ip netns exec target ip link set dev eth1 up
ip netns exec target iperf3 --server
So now you've created the namespace "target", moved one of your adapters into that namespace, set its IP address, and finally run your application in that target namespace.
You can now run any (compatible) program in the native namespace, and if it references the xxx.xxx.xxx.xxx IP address (which clearly must be reachable via some route), it will result in on-wire traffic that, with a proper loop-back path, will find the adapter within the other namespace as if it were a different computer altogether.
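For example, assuming eth0 stays in the native namespace and is given an address in the same subnet, the loop-back test itself is just the iperf3 client pointed at the namespaced address (placeholder addresses, matching the ones above):
ip address add dev eth0 yyy.yyy.yyy.yyy/y   # native-side address, placeholder
iperf3 --client xxx.xxx.xxx.xxx             # traffic leaves eth0 on the wire and arrives on eth1 inside "target"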
Once finished, you kill the daemon server and delete the namespace by name; the namespace members then revert and you are back to vanilla.
killall iperf3
ip netns delete target
This also works with "virtual functions" of a single interface, but that example requires teasing out one or more virtual functions -- e.g. on SR-IOV type adapters -- and handing out local MAC addresses. So I haven't done that enough to have a sample code tidbit ready.
Internal routing is preferred because in the default routing behaviour you have all the internal routes marked as scope link in the local table. Check this out with:
ip rule show
ip route show table local
If your kernel supports multiple routing tables you can simply alter the local table to achieve your goal. You don't need iptables.
Let's say 192.168.1.1 is your target ip address and eth0 is the interface where you want to send your packets out to the wire.
ip route add 192.168.1.1/32 dev eth0 table local
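To verify that packets for the target address will now actually leave via eth0 instead of looping back internally, the selected route can be checked with:
ip route get 192.168.1.1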
