Netcat server and intermittent UDP datagram loss on Linux

The client on enp4s0 (192.168.0.77) is continuously sending short text messages to 192.168.0.1:6060. The server on 192.168.0.1 listens on port 6060 via nc -ul4 --recv-only 6060.
A ping (ping -s 1400 192.168.0.77) from the server to the client works fine. Wireshark is running on 192.168.0.1 (enp4s0) and shows that all datagrams arrive correctly; no packets are missing.
But netcat (and likewise a simple UDP server) receives datagrams only sporadically.
Any idea what's going wrong?
System configuration:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# ip route
default via 192.168.77.1 dev enp0s25 proto dhcp metric 100
default via 192.168.0.1 dev enp4s0 proto static metric 101
192.168.0.0/24 dev enp4s0 proto kernel scope link src 192.168.0.1 metric 101
192.168.77.0/24 dev enp0s25 proto kernel scope link src 192.168.77.25 metric 100
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1
# uname -a
Linux nadhh 5.1.20-300.fc30.x86_64 #1 SMP Fri Jul 26 15:03:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
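One way to check whether the kernel itself is discarding the datagrams before they reach netcat (reverse-path filtering is a common culprit when two default routes are in play; compare the rp_filter discussion in the first related post below) is to watch the drop counters and the martian log:

# show UDP-level drop counters (look for "packet receive errors" under Udp:)
netstat -su
# check reverse-path filtering on the receiving interface
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.enp4s0.rp_filter
# log packets that fail source validation as "martian source" in the kernel log
sysctl -w net.ipv4.conf.all.log_martians=1
dmesg | grep -i martian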

Related

IP forwarding not working inside a netns on CentOS 7

I was trying to build a virtual network with a virtual machine and 2 virtual routers.
VM -> Router1 -> Router2 -> External network
Router1 does SNAT and works well. Router2 is expected to do IP forwarding, but it isn't working.
Here are details of Router2 I've checked. (Router2 is inside netns d3dcb2df-f3ca-4079-a434-491b23f84b5a.)
NICs and addresses
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: qr-70aabff6-c8@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:29:3b:ea brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.1.1/24 brd 192.168.1.255 scope global qr-70aabff6-c8
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe29:3bea/64 scope link
valid_lft forever preferred_lft forever
3: qg-30c10598-27@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:fc:1b:5b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.52.82/24 brd 10.10.52.255 scope global qg-30c10598-27
valid_lft forever preferred_lft forever
inet 10.10.52.158/32 brd 10.10.52.158 scope global qg-30c10598-27
valid_lft forever preferred_lft forever
inet 10.10.52.73/32 brd 10.10.52.73 scope global qg-30c10598-27
valid_lft forever preferred_lft forever
inet 10.10.52.68/32 brd 10.10.52.68 scope global qg-30c10598-27
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fefc:1b5b/64 scope link
valid_lft forever preferred_lft forever
route rules
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a ip route
default via 10.10.52.1 dev qg-30c10598-27
10.10.52.0/24 dev qg-30c10598-27 proto kernel scope link src 10.10.52.82
192.168.1.0/24 dev qr-70aabff6-c8 proto kernel scope link src 192.168.1.1
forwarding is turned on
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a sysctl net.ipv4.conf.qr-70aabff6-c8.forwarding
net.ipv4.conf.qr-70aabff6-c8.forwarding = 1
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a sysctl net.ipv4.conf.qg-30c10598-27.forwarding
net.ipv4.conf.qg-30c10598-27.forwarding = 1
iptables rules are cleared
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a iptables -t mangle -F
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a iptables -t nat -F
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a iptables -t filter -F
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a iptables -t mangle -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain neutron-l3-agent-FORWARD (0 references)
target prot opt source destination
Chain neutron-l3-agent-INPUT (0 references)
target prot opt source destination
Chain neutron-l3-agent-OUTPUT (0 references)
target prot opt source destination
Chain neutron-l3-agent-POSTROUTING (0 references)
target prot opt source destination
Chain neutron-l3-agent-PREROUTING (0 references)
target prot opt source destination
Chain neutron-l3-agent-float-snat (0 references)
target prot opt source destination
Chain neutron-l3-agent-floatingip (0 references)
target prot opt source destination
Chain neutron-l3-agent-mark (0 references)
target prot opt source destination
Chain neutron-l3-agent-scope (0 references)
target prot opt source destination
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain neutron-l3-agent-OUTPUT (0 references)
target prot opt source destination
Chain neutron-l3-agent-POSTROUTING (0 references)
target prot opt source destination
Chain neutron-l3-agent-PREROUTING (0 references)
target prot opt source destination
Chain neutron-l3-agent-float-snat (0 references)
target prot opt source destination
Chain neutron-l3-agent-snat (0 references)
target prot opt source destination
Chain neutron-postrouting-bottom (0 references)
target prot opt source destination
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a iptables -t filter -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain neutron-filter-top (0 references)
target prot opt source destination
Chain neutron-l3-agent-FORWARD (0 references)
target prot opt source destination
Chain neutron-l3-agent-INPUT (0 references)
target prot opt source destination
Chain neutron-l3-agent-OUTPUT (0 references)
target prot opt source destination
Chain neutron-l3-agent-local (0 references)
target prot opt source destination
Chain neutron-l3-agent-scope (0 references)
target prot opt source destination
Finally, when I ping 8.8.8.8 from the VM, the router only sees the packets arriving; nothing is forwarded.
[root@controller ~]# ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a tcpdump -i any -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
14:00:37.138271 IP 10.10.52.140 > 8.8.8.8: ICMP echo request, id 9616, seq 10258, length 64
14:00:38.139298 IP 10.10.52.140 > 8.8.8.8: ICMP echo request, id 9616, seq 10259, length 64
14:00:39.140488 IP 10.10.52.140 > 8.8.8.8: ICMP echo request, id 9616, seq 10260, length 64
Thanks for any help.
Thank God. I finally found the answer after digging into the kernel source. There is a little switch, 'rp_filter', that tells the kernel to drop 'bad' packets. Here is the full description from the kernel docs:
rp_filter - INTEGER
0 - No source validation.
1 - Strict mode as defined in RFC3704 Strict Reverse Path
Each incoming packet is tested against the FIB and if the interface
is not the best reverse path the packet check will fail.
By default failed packets are discarded.
2 - Loose mode as defined in RFC3704 Loose Reverse Path
Each incoming packet's source address is also tested against the FIB
and if the source address is not reachable via any interface
the packet check will fail.
Current recommended practice in RFC3704 is to enable strict mode
to prevent IP spoofing from DDoS attacks. If using asymmetric routing
or other complicated routing, then loose mode is recommended.
The max value from conf/{all,interface}/rp_filter is used
when doing source validation on the {interface}.
Default value is 0. Note that some distributions enable it
in startup scripts.
In my case, turning it off like this fixed the problem:
ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a sysctl net.ipv4.conf.all.rp_filter=0
ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a sysctl net.ipv4.conf.qr-70aabff6-c8.rp_filter=0
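Note that the kernel doc quoted above recommends loose mode rather than disabling validation entirely when routing is asymmetric. Since the maximum of conf/all and conf/{interface} is used, setting loose mode once on 'all' is enough to keep some source validation:

ip netns exec qrouter-d3dcb2df-f3ca-4079-a434-491b23f84b5a sysctl net.ipv4.conf.all.rp_filter=2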

nginx accessible on private IP but not on public IP

I'm creating resources using Terraform and have a node in the subnet 172.1.0.0 with no security group or rules assigned to it; the node has two endpoints, 22 and 80.
I used nmap to confirm that the ports are open and they are:
nmap -Pn -sT -p T:22 some-ingress.cloudapp.net
Starting Nmap 6.47 ( http://nmap.org ) at 2016-07-08 19:02 EAT
Nmap scan report for some-ingress.cloudapp.net (1.2.3.4)
Host is up (0.31s latency).
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 1.34 seconds
nmap -Pn -sT -p T:80 some-ingress.cloudapp.net
Starting Nmap 6.47 ( http://nmap.org ) at 2016-07-08 19:02 EAT
Nmap scan report for some-ingress.cloudapp.net (1.2.3.4)
Host is up (0.036s latency).
PORT STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds
I've installed nginx on the node, and running curl 172.1.0.4, curl 127.0.0.1, or curl 0.0.0.0 gets me the default nginx page.
However, running curl <public ip> or curl <dns name> hangs, and then I get:
curl: (56) Recv failure: Connection reset by peer
My iptables rules are:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
ACCEPT udp -- anywhere anywhere udp dpt:bootps
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Why can't I access nginx at the node's public IP?
The node is in a virtual network 172.1.0.0/16 and a subnet 172.1.0.0/24. I can post the Terraform configs if needed.
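To narrow this down, it may help to watch whether requests to the public IP ever reach the node at all, and to confirm what nginx is actually bound to (the interface name eth0 below is a guess; adjust it to the node's NIC):

# capture inbound HTTP on the node while curling the public IP
sudo tcpdump -nn -i eth0 'tcp port 80'
# confirm nginx is bound to 0.0.0.0:80 rather than a single address
sudo netstat -tlnp | grep ':80'

If tcpdump stays silent, the reset is coming from something in front of the node (load balancer, cloud endpoint, NAT) rather than from the node itself.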

Web page not reachable

I am installing a MusicBox frontend on a Debian server.
Everything works on the server itself by accessing 127.0.0.1:6680.
From other machines in the same subnet I can't reach this web page using 192.168.0.50:6680.
I added the port to iptables; here is the output:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:6600
ACCEPT tcp -- anywhere anywhere tcp dpt:6680
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:6680
ACCEPT tcp -- anywhere anywhere tcp dpt:6600
When I use nmap to inspect the ports, port 6680 doesn't appear to be reachable:
Starting Nmap 6.47 ( http://nmap.org ) at 2015-02-08 03:32 Romance Standard Time
NSE: Loaded 118 scripts for scanning.
NSE: Script Pre-scanning.
Initiating ARP Ping Scan at 03:32
Scanning 192.168.0.50 [1 port]
Completed ARP Ping Scan at 03:32, 0.13s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 03:32
Completed Parallel DNS resolution of 1 host. at 03:32, 0.02s elapsed
Initiating SYN Stealth Scan at 03:32
Scanning 192.168.0.50 [1000 ports]
Discovered open port 22/tcp on 192.168.0.50
Discovered open port 3389/tcp on 192.168.0.50
Completed SYN Stealth Scan at 03:32, 0.21s elapsed (1000 total ports)
Initiating Service scan at 03:32
Scanning 2 services on 192.168.0.50
Completed Service scan at 03:33, 6.01s elapsed (2 services on 1 host)
Initiating OS detection (try #1) against 192.168.0.50
NSE: Script scanning 192.168.0.50.
Initiating NSE at 03:33
Completed NSE at 03:33, 1.12s elapsed
Nmap scan report for 192.168.0.50
Host is up (0.0023s latency).
Not shown: 998 closed ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 6.0p1 Debian 4+deb7u2 (protocol 2.0)
3389/tcp open ms-wbt-server xrdp
MAC Address: XXXXXXXXXXXX
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.11 - 3.14
Uptime guess: 0.031 days (since Sun Feb 08 02:47:48 2015)
Network Distance: 1 hop
TCP Sequence Prediction: Difficulty=262 (Good luck!)
IP ID Sequence Generation: All zeros
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
TRACEROUTE
HOP RTT ADDRESS
1 2.27 ms 192.168.0.50
Is the MusicBox listen address 127.0.0.1:6680? If so, you can't reach this web page using 192.168.0.50:6680. You can inspect the actual listen address with netstat -anop | grep 6680.
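If this is Mopidy's MusicBox (an assumption; the fix is analogous for any daemon that binds to localhost by default), the HTTP listen address lives in the [http] section of mopidy.conf, and binding to all interfaces would look like:

[http]
# listen on all interfaces instead of the default 127.0.0.1
hostname = 0.0.0.0
port = 6680

followed by a service restart (e.g. service mopidy restart) so the new address takes effect.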

Strongswan road warrior VPN config

I want to set up a VPN server for my local web traffic (iPhone/iPad/MacBook).
So far I've managed to set up a basic configuration with a CA and client certificates.
At the moment my client can connect to the server and access server resources, but it has no route to the internet.
The server is accessible directly via public IP (no home installation...).
What do I need to change to route all my client traffic through the VPN-Server and enable internet access for my clients?
Thanks in advance
/etc/ipsec.conf
config setup
conn rw
keyexchange=ikev1
authby=xauthrsasig
xauth=server
auto=add
#
#LEFT (SERVER)
left=%defaultroute
leftsubnet=0.0.0.0/0
leftfirewall=yes
leftcert=serverCert.pem
#
#RIGHT (CLIENT)
right=%any
rightsubnet=10.0.0.0/24
rightsourceip=10.0.0.0/24
rightcert=clientCert.pem
iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 10.0.0.1 anywhere policy match dir in pol ipsec reqid 1 proto esp
ACCEPT all -- anywhere 10.0.0.1 policy match dir out pol ipsec reqid 1 proto esp
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Found the solution!
/etc/ipsec.conf
rightsubnet=10.0.0.0/24
iptables
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
System
sysctl net.ipv4.ip_forward=1
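The first POSTROUTING rule ACCEPTs (i.e. excludes from NAT) traffic that is about to be IPsec-protected, so only the clients' plain internet traffic falls through to the MASQUERADE rule. To keep forwarding enabled across reboots, the sysctl can be persisted as well, e.g.:

# make IP forwarding permanent (Debian-style; alternatively drop a file in /etc/sysctl.d/)
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p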

snmpd is not listening on port 161 on Ubuntu server

I have installed snmpd on my Ubuntu server via apt-get install snmpd snmp. Then I changed the line in /etc/default/snmpd
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid 0.0.0.0'
After that, I restarted the snmpd service (/etc/init.d/snmpd restart). However, when I run netstat -an | grep "LISTEN", I don't see snmpd listening on port 161.
I don't have any firewall blocking that port.
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
User "nos" is correct; UDP bindings do not show up as "LISTEN" under "netstat". Instead, you will see a line or two like the following, showing that "snmpd" is indeed ready to receive data on UDP port 161:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:161 0.0.0.0:* 1785/snmpd
udp6 0 0 ::1:161 :::* 1785/snmpd
The "netstat" manpage has this to say about the "State" column:
The state of the socket. Since there are no states in raw mode and usually no states used in UDP, this column may be left blank.
Thus, you would not expect to see the word "LISTEN" here.
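So instead of grepping for LISTEN, list the UDP sockets directly:

# -l listening sockets, -n numeric output, -u UDP only, -p owning process
sudo netstat -lnup | grep 161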
From a practical perspective, however, there is one more thing that I'd like to note. Often, the default Net-SNMP "snmpd.conf" configuration file limits incoming connections to only local processes.
Default /etc/snmp/snmpd.conf
# Listen for connections from the local system only
agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
#agentAddress udp:161,udp6:[::1]:161,tcp:161,tcp6:[::1]:161
Usually, the point of setting up "snmpd" is so that another machine can monitor it. To accomplish this, make sure that the first line is commented out and that the second line is enabled.
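The edited section then looks like this, followed by a service restart so snmpd rebinds:

# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
agentAddress udp:161,udp6:[::1]:161,tcp:161,tcp6:[::1]:161

sudo service snmpd restart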
Looks like it is listening on 161/UDP. From the man page:
By default, snmpd listens for incoming SNMP requests on UDP port 161 on all IPv4 interfaces. However, it is possible to modify this behaviour by specifying one or more listening addresses as arguments to snmpd. A listening address takes the form: [<transport-specifier>:]<transport-address>
Read the man page for more details.
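For example, the trailing 0.0.0.0 in the SNMPDOPTS line above is exactly such a listening address (all IPv4 interfaces, default UDP port 161). Restricting snmpd to a single address would look like this (10.0.0.5 is a placeholder):

# listen only on one IPv4 address over UDP
snmpd udp:10.0.0.5:161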
