No traffic when listening with netcat - linux

I have a server with multiple network interfaces.
I'm trying to run a network monitoring tool in order to verify network traffic statistics using the sFlow standard on a router.
I receive the sFlow datagrams on port 5600 of the eth1 interface. I can see the generated traffic with tcpdump:
user@lnssrv:~$ sudo tcpdump -i eth1
14:09:01.856499 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1456
14:09:02.047778 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1432
14:09:02.230895 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1300
14:09:02.340114 IP 198.51.100.253.5678 > 255.255.255.255.5678: UDP, length 111
14:09:02.385036 STP 802.1d, Config, Flags [none], bridge-id c01e.b4:a4:e3:0b:a6:00.8018, length 43
14:09:02.434658 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1392
14:09:02.634447 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1440
14:09:02.836015 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1364
14:09:03.059851 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1372
14:09:03.279067 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1356
14:09:03.518385 IP 10.10.10.10.60147 > 198.51.100.232.5600: UDP, length 1440
It all seems fine, but when I try to read the packets with netcat it seems that there are no packets there:
nc -lu 5600
Indeed, neither sflowtool nor nprobe reads anything from port 5600.
What am I doing wrong?

nc -lu 5600 opens a socket on port 5600, which means it will only dump packets that are delivered to that socket, i.e. packets addressed to that specific address and port.
tcpdump, on the other hand, captures all traffic flowing on the interface, even traffic that is not addressed to this host.
Two possible causes of your problem:
a) Your host's IP is not 198.51.100.232.
With the host filter you can see exactly the traffic to and from your server,
for example: tcpdump -i eth1 host 198.51.100.232 and port 80
b) There is another server (process) already listening on UDP port 5600 that is grabbing all the data, so nothing is left over for the nc socket.
Note: tcpdump only captures packets; it does not bind to or listen on UDP ports, so it cannot tell you whether anything is actually receiving the data. A couple of quick checks are sketched below.
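As a minimal sketch of those checks (option syntax varies between netcat variants, so treat the last command as an assumption about the OpenBSD-style nc):
ip addr show eth1                 # is 198.51.100.232 actually configured on this host?
sudo ss -lunp | grep 5600         # is another process already bound to UDP port 5600?
nc -lu 198.51.100.232 5600        # listen on the specific destination address the datagrams are sent to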

Not sure it helps here, but in my case (I had a similar problem) it did: I simply stopped iptables with
service iptables stop.
It seems that tcpdump works at a lower level than iptables, and iptables can stop datagrams from being passed up to the higher layers. There is a good article on this topic with a nice picture.
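Rather than stopping the firewall entirely, a minimal sketch of how one might check for and open the port in iptables (assuming UDP port 5600 as in the question above):
sudo iptables -nvL INPUT                               # look for a DROP/REJECT policy or a rule matching udp dpt:5600
sudo iptables -I INPUT -p udp --dport 5600 -j ACCEPT   # insert an ACCEPT rule for the sFlow datagrams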

Related

How can I determine which is my IP address on a different network?

I'm trying to launch an SNMP query from a pod deployed in an Azure cloud to an internal host on my company's network. The snmpget queries work well from the pod to, say, a public SNMP server, but the query to my target host results in:
root@status-tanner-api-86557c6786-wpvdx:/home/status-tanner-api/poller# snmpget -c public -v 2c 192.168.118.23 1.3.6.1.2.1.1.1.0
Timeout: No Response from 192.168.118.23.
An nmap scan shows that the SNMP port is open|filtered:
Nmap scan report for 192.168.118.23
Host is up (0.16s latency).
PORT STATE SERVICE
161/udp open|filtered snmp
I requested a new rule to allow 161/UDP from my pod, but I suspect that I requested the rule for the wrong IP address.
My theory is that I could determine the IP address my pod uses to reach this target host if I could get inside the target host, open a connection from the pod, and use netstat to see which address my pod is coming from. The problem is that I currently have no access to this host.
So, my question is: how can I see from which address my pod is reaching the target host? Some sort of public address is obviously being used, but I can't tell which one it is without entering the target host.
I'm pretty sure I'm missing an important network tool that would help in this situation. Any suggestion would be profoundly appreciated.
By default Kubernetes will use your node IP to reach other servers, so you need to make a firewall rule using your node IP.
I've tested this using a busybox pod to reach another server in my network.
Here is my lab-1 node, with IP 10.128.0.62:
rabello@lab-1:~$ ip ad | grep ens4 | grep inet
inet 10.128.0.62/32 scope global dynamic ens4
In this node I have a busybox pod with the IP 192.168.251.219:
$ kubectl exec -it busybox sh
/ # ip ad | grep eth0 | grep inet
inet 192.168.251.219/32 scope global eth0
When we perform a ping test to another server in the network (server-1) we have:
/ # ping 10.128.0.61
PING 10.128.0.61 (10.128.0.61): 56 data bytes
64 bytes from 10.128.0.61: seq=0 ttl=63 time=1.478 ms
64 bytes from 10.128.0.61: seq=1 ttl=63 time=0.337 ms
^C
--- 10.128.0.61 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.337/0.907/1.478 ms
Using tcpdump on server-1, we can see the ping requests from my pod arriving with the node IP of lab-1:
rabello@server-1:~$ sudo tcpdump -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:16:09.291714 IP 10.128.0.62 > 10.128.0.61: ICMP echo request, id 6230, seq 0, length 64
10:16:09.291775 IP 10.128.0.61 > 10.128.0.62: ICMP echo reply, id 6230, seq 0, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
Make sure you have an appropriate firewall rule to allow your node (or your VPC range) to reach your destination, and check that your VPN is up (if you have one).
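If you just need the node addresses (the source IPs your pod traffic is typically SNATed to when it leaves the cluster), a quick sketch of how to list them:
kubectl get nodes -o wide    # the INTERNAL-IP / EXTERNAL-IP columns show the node addresses to use in the firewall rule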
I hope it helps! =)

Tunnel Gre problem between two hosts (vps and dedicated server)

Hello guys, I need to solve this problem (all servers run CentOS 7): I'm trying to create a GRE tunnel between a VPS (in Italy - OpenVZ) and a dedicated server (in Germany), but they do not communicate internally (ping and ssh command tests). Next I created a GRE tunnel between the VPS (in Italy - OpenVZ) and a VPS (in France - KVM OpenStack), and they communicate; then I created a tunnel between the VPS (in France - KVM OpenStack) and the dedicated server (in Germany), and they communicate too. I cannot understand why the VPS (in Italy - OpenVZ) and the dedicated server (in Germany) do not communicate. Any ideas on how I can fix this
(I also tried with iptables disabled; firewalld is not enabled)? Thanks
In other words:
In other attempts (by this I mean that I managed to successfully create the GRE tunnel between these machines):
The VPS (in France) and the VPS (in Italy) communicate internally (ping and ssh command tests)
The VPS (in France) and the dedicated server (in Germany) communicate internally (ping and ssh command tests)
Problem (by this I mean that I could not successfully create the GRE tunnel between these machines):
The VPS (in Italy) and the dedicated server (in Germany) do not communicate internally (ping and ssh command tests). I also asked the hosting providers whether they had any restriction in place, but they said no.
My configuration:
VPS command for tunnel:
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
iptunnel add gre1 mode gre local VPS_IP remote DEDICATED_SERVER_IP ttl 255
ip addr add 192.168.168.1/30 dev gre1
ip link set gre1 up
Dedicated server command for tunnel:
iptunnel add gre1 mode gre local DEDICATED_SERVER_IP remote VPS_IP ttl 255
ip addr add 192.168.168.2/30 dev gre1
ip link set gre1 up
[root@VPS ~]# ping 192.168.168.2
PING 192.168.168.2 (192.168.168.2) 56(84) bytes of data.
^C
--- 192.168.168.2 ping statistics ---
89 packets transmitted, 0 received, 100% packet loss, time 87999ms
[root@DE ~]# ping 192.168.168.1
PING 192.168.168.1 (192.168.168.1) 56(84) bytes of data.
^C
--- 192.168.168.1 ping statistics ---
92 packets transmitted, 0 received, 100% packet loss, time 91001ms
[root@VPS ~]# tcpdump -i venet0 "proto gre"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on venet0, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
^C
0 packets captured
1 packet received by filter
0 packets dropped by kernel
[root@DE ~]# tcpdump -i enp2s0 "proto gre"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp2s0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@VPS ~]# lsmod | grep gre
ip_gre 4242 -2
ip_tunnel 4242 -2 sit,ip_gre
gre 4242 -2 ip_gre
[root@DE ~]# lsmod | grep gre
ip_gre 22707 0
ip_tunnel 25163 1 ip_gre
gre 13144 1 ip_gre
Console image with full command output
If IP forwarding is required for the tunnel to work, you need to run /sbin/sysctl -p after editing /etc/sysctl.conf.
Also, what is the output of ip tunnel show and ip route show on both ends?
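As a concrete sketch of those checks, on each end you might run:
/sbin/sysctl -p                    # reload /etc/sysctl.conf so net.ipv4.ip_forward=1 actually takes effect
ip tunnel show                     # gre1 should list the expected local/remote public IPs
ip route show | grep 192.168.168   # a route for 192.168.168.0/30 via gre1 should be present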

tcpdump does not display packets seen by Wireshark

The host (seen below) receives DNS requests from another host on the same network. It has port UDP/53 closed, yet the packets are displayed by Wireshark.
Indeed, there are requests sent to 192.168.16.2 on port UDP/53, so the expression should be right:
tcpdump -v -s0 udp and dst port 53 and dst 192.168.16.2
If I do:
tcpdump -v -s0 udp
the DNS requests aren't displayed either.
Why doesn't tcpdump display the DNS requests, and how can I make it display them?
If your machine has several network interfaces, then you also need to set the interface to listen on using the -i option.
Your expression would then read:
tcpdump -v -s0 -i eth1 udp and dst port 53 and dst 192.168.16.2
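If you are not sure which interface the requests arrive on, tcpdump can list the available capture interfaces, or capture on all of them at once; as a sketch:
tcpdump -D                                                       # list the interfaces tcpdump can capture on
tcpdump -v -s0 -i any udp and dst port 53 and dst 192.168.16.2   # capture on all interfaces with the same filter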

Node.js server listening for UDP. Tcpdump says packets are coming through, but node doesn't get them

Integrating with a partner. Our server has a RESTful interface, but their product emits a stream of UDP packets. Since we're still prototyping, I didn't want to make any commits to our API server repo to accommodate the change. Instead, I wrote a little Node.js server to listen for their UDP packets, do a bit of conversion, and then PUT to our RESTful server.
I'm stuck, because the node.js process is listening on port 17270, but isn't getting any of my sample UDP packets.
Node server
const dgram = require('dgram');
const server = dgram.createSocket('udp4');

server.on('error', function(err) {
  console.log('server error:\n' + err.stack);
});

server.on('message', function(msg, rinfo) {
  console.log('Server got UDP packet: ' + msg + ' from ' + rinfo.address + ':' + rinfo.port + '');
  doBusinessLogic(msg);
});

server.on('listening', function() {
  var address = server.address();
  console.log('Server listening for UDP ' + address.address + ':' + address.port + '');
});

function main() {
  server.bind(17270);
}

main();
When I send a UDP packet from my local machine using netcat,
echo -n "udp content" | nc -vv4u -w1 ec2.instance 17270
I see nothing happening with the server.
I can run my node.js server locally, and it responds to UDP packets sent to 127.0.0.1.
I can also ssh into the ec2 instance, and netcat UDP packets to 127.0.0.1. That creates the expected response, as well.
So the problem must be networking, I thought.
When I run netstat on the ec2 instance, I can see that node is listening on port 17270.
# netstat -plun
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:17270 0.0.0.0:* 21308/node
I thought it might be AWS security settings, but when I run tcpdump on the ec2 instance and then trigger netcat from my local machine, I can see that there's traffic being received on the ec2 instance.
# tcpdump -vv -i any udp
15:09:52.276786 IP (tos 0x8, ttl 38, id 1756, offset 0, flags [none], proto UDP (17), length 29)
my.local.machine.60924 > ec2.instance.17270: [udp sum ok] UDP, length 1
15:09:52.276852 IP (tos 0x8, ttl 38, id 48463, offset 0, flags [none], proto UDP (17), length 29)
my.local.machine.60924 > ec2.instance.17270: [udp sum ok] UDP, length 1
15:09:52.276863 IP (tos 0x8, ttl 38, id 31296, offset 0, flags [none], proto UDP (17), length 29)
my.local.machine.60924 > ec2.instance.17270: [udp sum ok] UDP, length 1
15:09:52.278461 IP (tos 0x8, ttl 38, id 50202, offset 0, flags [none], proto UDP (17), length 29)
my.local.machine.60924 > ec2.instance.17270: [udp sum ok] UDP, length 1
15:09:52.289575 IP (tos 0x8, ttl 38, id 49316, offset 0, flags [none], proto UDP (17), length 149)
my.local.machine.60924 > ec2.instance.17270: [udp sum ok] UDP, length 121
Just to be sure, I tried temporarily closing port 17270 in the AWS console. When I do that, those packets are discarded and I see nothing from tcpdump. So I reopened the port.
I have a process that is listening on a port. I'm clearly sending UDP packets to that port. But the process isn't getting the messages.
I just can't figure out where the disconnect is. What am I missing?
Thanks in advance for any insight!
Probably iptables at a guess. Packets hit the BPF (which tcpdump uses to look at incoming traffic) separately from iptables, so it's possible to see them via tcpdump only to have iptables drop them before they get out of the kernel and up to your application. Look in the output of 'iptables -nvL' for either a default drop/reject policy on the relevant input chain or a rule specifically dropping UDP traffic.
As for fixing it if this is the case, it depends on which distro you're using. In the old days you'd just use the iptables command, something like this (but with the relevant chain name instead of INPUT):
iptables -A INPUT -p udp --dport 17270 -j ACCEPT
...but if you're using Ubuntu they want you to use their ufw tool for this, and if it's CentOS/RHEL 7 you're probably going to be dealing with firewalld and its firewall-cmd frontend.
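For example, hedged sketches of the equivalent rules on those distributions (adjust the port to match your listener):
sudo ufw allow 17270/udp                              # Ubuntu with ufw
sudo firewall-cmd --permanent --add-port=17270/udp    # CentOS/RHEL 7 with firewalld
sudo firewall-cmd --reload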

How do I route a packet to use localhost as a gateway?

I'm trying to test a gateway I wrote (see What's the easiest way to test a gateway? for context). Due to an issue I'd rather not get into, the gateway and "sender" have to be on the same machine. I have a receiver (let's say 9.9.9.9) which the gateway is able to reach.
So I'll run an application ./sendStuff 9.9.9.9 which will send some packets to that IP address.
The problem is: how do I get the packets destined for 9.9.9.9 to go to the gateway on localhost? I've tried:
sudo route add -host 9.9.9.9 gw 127.0.0.1 lo
sudo route add -host 9.9.9.9 gw <machine's external IP address> eth0
but neither of those pass packets through the gateway. I've verified that the correct IPs are present in sudo route. What can I do?
Per request, here is the route table after running the second command (IP addresses changed to match the question; x.y.z.t is the IP of the machine I'm running this on):
Destination Gateway Genmask Flags Metric Ref Use Iface
9.9.9.9 x.y.z.t 255.255.255.255 UGH 0 0 0 eth0
x.y.z.0 0.0.0.0 255.255.254.0 U 0 0 0 eth0
0.0.0.0 <gateway addr> 0.0.0.0 UG 100 0 0 eth0
127.0.0.1 is probably picking up the packets, then forwarding them on their merry way if ipv4 packet forwarding is enabled. If it's not enabled, it will drop them.
If you are trying to forward packets destined to 9.9.9.9 to 127.0.0.1, look into iptables.
Edit: try this:
iptables -t nat -A OUTPUT -d 9.9.9.9 -j DNAT --to-destination 127.0.0.1
That should redirect all traffic to 9.9.9.9 to localhost instead.
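To confirm the rule is in place and matching, a quick sketch (./sendStuff is the sender binary from the question):
sudo iptables -t nat -L OUTPUT -n -v   # the packet counter on the DNAT rule should increase as traffic matches
./sendStuff 9.9.9.9                    # traffic should now be delivered to the gateway listening on localhost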
