firewalld port forward to k8s node port not working - linux

I want to configure port forwarding 80->32181 and 443->30598. 32181 and 30598 are NodePorts of the k8s ingress controller, to which I can connect successfully:
$ curl http://localhost:32181
<html>
<head><title>404 Not Found</title></head>
<body>
...
$ curl https://localhost:30598 -k
<html>
<head><title>404 Not Found</title></head>
<body>
...
What I have done is:
$ cat /proc/sys/net/ipv4/ip_forward
1
$ firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: cockpit dhcpv6-client frp http https kube-apiserver kube-kubelet ssh
ports:
protocols:
forward: no
masquerade: yes
forward-ports:
port=80:proto=tcp:toport=32181:toaddr=
port=443:proto=tcp:toport=30598:toaddr=
source-ports:
icmp-blocks:
rich rules:
but I can't access my nginx via port 80 or 443:
$ curl https://localhost:443 -k
curl: (7) Failed to connect to localhost port 443: Connection refused
More info:
centos: 8.2 4.18.0-348.2.1.el8_5.x86_64
k8s: 1.22(with calico(v3.21.0) network plugin)
firewalld: 0.9.3
And the iptables output:
$ iptables -nvL -t nat --line-numbers
Chain PREROUTING (policy ACCEPT 51 packets, 2688 bytes)
num pkts bytes target prot opt in out source destination
1 51 2688 cali-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:6gwbT8clXdHdC1b1 */
2 51 2688 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
3 51 2688 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 50 packets, 2648 bytes)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 1872 packets, 112K bytes)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:O3lYWMrLQYEMJtB5 */
2 1862 112K KUBE-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
3 0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 1922 packets, 116K bytes)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
2 1911 115K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
3 758 45480 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-SERVICES (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
2 0 0 KUBE-SVC-Z6GDYMWE5TV2NNJN tcp -- * * 0.0.0.0/0 10.110.193.197 /* kubernetes-dashboard/dashboard-metrics-scraper cluster IP */ tcp dpt:8000
3 0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
4 0 0 KUBE-SVC-EDNDUDH2C75GIR6O tcp -- * * 0.0.0.0/0 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:https cluster IP */ tcp dpt:443
5 0 0 KUBE-SVC-EZYNCFY2F7N6OQA2 tcp -- * * 0.0.0.0/0 10.103.242.141 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP */ tcp dpt:443
6 0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
7 0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
8 0 0 KUBE-SVC-CEZPIJSAUFW5MYPQ tcp -- * * 0.0.0.0/0 10.97.166.112 /* kubernetes-dashboard/kubernetes-dashboard cluster IP */ tcp dpt:443
9 0 0 KUBE-SVC-H5K62VURUHBF7BRH tcp -- * * 0.0.0.0/0 10.104.154.95 /* lens-metrics/kube-state-metrics:metrics cluster IP */ tcp dpt:8080
10 0 0 KUBE-SVC-MOZMMOD3XZX35IET tcp -- * * 0.0.0.0/0 10.96.73.22 /* lens-metrics/prometheus:web cluster IP */ tcp dpt:80
11 0 0 KUBE-SVC-CG5I4G2RS3ZVWGLK tcp -- * * 0.0.0.0/0 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:http cluster IP */ tcp dpt:80
12 1165 69528 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-POSTROUTING (1 references)
num pkts bytes target prot opt in out source destination
1 1859 112K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
2 3 180 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
3 3 180 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ random-fully
Chain KUBE-MARK-DROP (0 references)
num pkts bytes target prot opt in out source destination
1 0 0 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-NODEPORTS (1 references)
num pkts bytes target prot opt in out source destination
1 2 120 KUBE-SVC-EDNDUDH2C75GIR6O tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */ tcp dpt:30598
2 1 60 KUBE-SVC-CG5I4G2RS3ZVWGLK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp dpt:32181
Chain KUBE-MARK-MASQ (27 references)
num pkts bytes target prot opt in out source destination
1 3 180 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-SEP-IPE5TMLTCUYK646X (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.147 0.0.0.0/0 /* kube-system/kube-dns:metrics */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:192.168.103.147:9153
Chain KUBE-SEP-3LZLTHU4JT3FAVZK (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.149 0.0.0.0/0 /* kube-system/kube-dns:metrics */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ tcp to:192.168.103.149:9153
Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
2 0 0 KUBE-SEP-IPE5TMLTCUYK646X all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */ statistic mode random probability 0.50000000000
3 0 0 KUBE-SEP-3LZLTHU4JT3FAVZK all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:metrics */
Chain KUBE-SEP-ZOAMCQDU54EOM4EJ (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.141 0.0.0.0/0 /* kubernetes-dashboard/dashboard-metrics-scraper */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/dashboard-metrics-scraper */ tcp to:192.168.103.141:8000
Chain KUBE-SVC-Z6GDYMWE5TV2NNJN (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.110.193.197 /* kubernetes-dashboard/dashboard-metrics-scraper cluster IP */ tcp dpt:8000
2 0 0 KUBE-SEP-ZOAMCQDU54EOM4EJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/dashboard-metrics-scraper */
Chain KUBE-SEP-HYE2IFAO6PORQFJR (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.0.176 0.0.0.0/0 /* default/kubernetes:https */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ tcp to:192.168.0.176:6443
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-HYE2IFAO6PORQFJR all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SEP-GJ4OJHBKIREWLMRS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.146 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */
2 2 120 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */ tcp to:192.168.103.146:443
Chain KUBE-SVC-EDNDUDH2C75GIR6O (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:https cluster IP */ tcp dpt:443
2 2 120 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */ tcp dpt:30598
3 2 120 KUBE-SEP-GJ4OJHBKIREWLMRS all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:https */
Chain KUBE-SEP-K2CVHZPTBE2YAD6P (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.146 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook */ tcp to:192.168.103.146:8443
Chain KUBE-SVC-EZYNCFY2F7N6OQA2 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.103.242.141 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-K2CVHZPTBE2YAD6P all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller-admission:https-webhook */
Chain KUBE-SEP-S6VTWHFP6KEYRW5L (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.147 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.103.147:53
Chain KUBE-SEP-SFGZMYIS2CE4JD3K (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.149 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.103.149:53
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
2 0 0 KUBE-SEP-S6VTWHFP6KEYRW5L all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ statistic mode random probability 0.50000000000
3 0 0 KUBE-SEP-SFGZMYIS2CE4JD3K all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SEP-IJUMPPTQDLYXOX4B (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.147 0.0.0.0/0 /* kube-system/kube-dns:dns */
2 0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.103.147:53
Chain KUBE-SEP-C4W6TKYY5HHEG4RV (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.149 0.0.0.0/0 /* kube-system/kube-dns:dns */
2 0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:192.168.103.149:53
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ udp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
2 0 0 KUBE-SEP-IJUMPPTQDLYXOX4B all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ statistic mode random probability 0.50000000000
3 0 0 KUBE-SEP-C4W6TKYY5HHEG4RV all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain KUBE-SEP-GX372II3CQAGUHFM (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.145 0.0.0.0/0 /* kubernetes-dashboard/kubernetes-dashboard */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/kubernetes-dashboard */ tcp to:192.168.103.145:8443
Chain KUBE-SVC-CEZPIJSAUFW5MYPQ (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.166.112 /* kubernetes-dashboard/kubernetes-dashboard cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-GX372II3CQAGUHFM all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes-dashboard/kubernetes-dashboard */
Chain KUBE-SEP-I3RZS3REJP7POFLG (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.143 0.0.0.0/0 /* lens-metrics/kube-state-metrics:metrics */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/kube-state-metrics:metrics */ tcp to:192.168.103.143:8080
Chain KUBE-SVC-H5K62VURUHBF7BRH (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.104.154.95 /* lens-metrics/kube-state-metrics:metrics cluster IP */ tcp dpt:8080
2 0 0 KUBE-SEP-I3RZS3REJP7POFLG all -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/kube-state-metrics:metrics */
Chain KUBE-SEP-ROTMHDCXAI3T7IOR (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.144 0.0.0.0/0 /* lens-metrics/prometheus:web */
2 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/prometheus:web */ tcp to:192.168.103.144:9090
Chain KUBE-SVC-MOZMMOD3XZX35IET (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.73.22 /* lens-metrics/prometheus:web cluster IP */ tcp dpt:80
2 0 0 KUBE-SEP-ROTMHDCXAI3T7IOR all -- * * 0.0.0.0/0 0.0.0.0/0 /* lens-metrics/prometheus:web */
Chain KUBE-SEP-OAYGOO6JHJEB65WC (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 192.168.103.146 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */
2 1 60 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp to:192.168.103.146:80
Chain KUBE-SVC-CG5I4G2RS3ZVWGLK (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.201.174 /* ingress-nginx/ingress-nginx-controller:http cluster IP */ tcp dpt:80
2 1 60 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp dpt:32181
3 1 60 KUBE-SEP-OAYGOO6JHJEB65WC all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */
Chain KUBE-PROXY-CANARY (0 references)
num pkts bytes target prot opt in out source destination
Chain cali-nat-outgoing (1 references)
num pkts bytes target prot opt in out source destination
1 49 3274 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:flqWnvo8yq4ULQLa */ match-set cali40masq-ipam-pools src ! match-set cali40all-ipam-pools dst random-fully
Chain cali-POSTROUTING (1 references)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-fip-snat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:Z-c7XtVd2Bq7s_hA */
2 1894 114K cali-nat-outgoing all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:nYKhEzDlr11Jccal */
3 0 0 MASQUERADE all -- * tunl0 0.0.0.0/0 0.0.0.0/0 /* cali:SXWvdsbh4Mw7wOln */ ADDRTYPE match src-type !LOCAL limit-out ADDRTYPE match src-type LOCAL random-fully
Chain cali-PREROUTING (1 references)
num pkts bytes target prot opt in out source destination
1 51 2688 cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:r6XmIziWUJsdOK6Z */
Chain cali-fip-snat (1 references)
num pkts bytes target prot opt in out source destination
Chain cali-OUTPUT (1 references)
num pkts bytes target prot opt in out source destination
1 1894 114K cali-fip-dnat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cali:GBTAv2p5CwevEyJm */
Chain cali-fip-dnat (2 references)
num pkts bytes target prot opt in out source destination
Chain KUBE-KUBELET-CANARY (0 references)
num pkts bytes target prot opt in out source destination

To clarify, I am posting a Community Wiki answer.
The problem existed only when forwarding to a k8s Service NodePort.
To solve the problem, set up an external NGINX as a TCP proxy.
Documentation about an external NGINX can be found here.
Ingress does not directly support TCP services, so some additional configuration is necessary. Your NGINX Ingress Controller may have been deployed directly (i.e. with a Kubernetes spec file) or through the official Helm chart. The configuration of the TCP pass-through will differ depending on the deployment approach.
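As an illustration only, a TCP pass-through of this kind can be sketched with NGINX's stream module. The NodePorts 32181/30598 come from the question; the 127.0.0.1 upstream address is an assumption (it presumes the proxy runs on the k8s node itself):

```nginx
# Top-level nginx.conf fragment: the stream block sits outside http {}.
stream {
    server {
        listen 80;
        proxy_pass 127.0.0.1:32181;   # HTTP NodePort of the ingress controller
    }
    server {
        listen 443;
        proxy_pass 127.0.0.1:30598;   # HTTPS NodePort; TLS is passed through, not terminated here
    }
}
```

Because the proxy forwards raw TCP, the ingress controller still terminates TLS itself.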

Related

Have unbound use only ipv6 transport

So I wanted Unbound to use IPv6 transport only and not use IPv4 when doing lookups. (This is for fun; I want to see all the DNS lookups done over IPv6, for educational purposes.)
My computer has IPv6 connectivity (it can do curl -6), so I created an Unbound server.
I thought adding do-ip4: no was enough, but I'm not getting anything from dig.
My guess was that my computer was trying to use systemd-resolved instead of Unbound, but that has nothing to do with IPv6, so I suppose not?
Here are my config files and tcpdump output.
With IPv4 enabled (working)
$ cat /etc/unbound/unbound.conf.d/myunbound.conf
server:
    port: 53
    verbosity: 0
    num-threads: 2
    outgoing-range: 512
    num-queries-per-thread: 1024
    msg-cache-size: 32m
    interface: 0.0.0.0
    interface: 2001:myipv6addr:20
    rrset-cache-size: 64m
    cache-max-ttl: 86400
    infra-host-ttl: 60
    infra-lame-ttl: 120
    access-control: 127.0.0.0/8 allow
    access-control: 0.0.0.0/0 allow
    username: unbound
    directory: "/etc/unbound"
    use-syslog: no
    hide-version: yes
    do-ip4: yes
    do-ip6: yes
    do-udp: yes
    do-tcp: yes
    # prefer-ip6: no
remote-control:
    control-enable: yes
    control-port: 953
    control-interface: 0.0.0.0
tcpdump -n -vv port 53 -i any
Output for dig o.com:
19:56:49.226291 IP6 (flowlabel 0xa269b, hlim 64, next-header UDP (17) payload length: 54) ::1.39522 > ::1.53: [bad udp cksum 0x0049 -> 0xa5ea!] 35213+ [1au] A? o.com. ar: . OPT UDPsize=4096 (46)
19:56:50.222460 IP (tos 0x0, ttl 64, id 49989, offset 0, flags [none], proto UDP (17), length 74)
127.0.0.1.52155 > 127.0.0.1.53: [bad udp cksum 0xfe49 -> 0x7690!] 35213+ [1au] A? o.com. ar: . OPT UDPsize=4096 (46)
19:56:50.222695 IP (tos 0x0, ttl 64, id 47599, offset 0, flags [none], proto UDP (17), length 62)
[myIPv4].41206 > 192.35.51.30.53: [bad udp cksum 0x4644 -> 0x5c52!] 46488% [1au] A? o.com. ar: . OPT UDPsize=4096 DO (34)
19:56:50.292985 IP (tos 0x0, ttl 54, id 26365, offset 0, flags [none], proto UDP (17), length 1153)
192.35.51.30.53 > [myIPv4].41206: [udp sum ok] 46488 NXDomain*- q: A? o.com. 0/8/1 ns: com. SOA a.gtld-servers.net. nstld.verisign-grs.com. 1653044192 1800 900 604800 86400, com. RRSIG, CK0POJMG874LJREF7EFN8430QVIT8BSM.com. Type50, CK0POJMG874LJREF7EFN8430QVIT8BSM.com. RRSIG, TE4S5DTC23DPH5M574GG84GG0Q86T3GM.com. Type50, TE4S5DTC23DPH5M574GG84GG0Q86T3GM.com. RRSIG, 3RL2Q58205687C8I9KC9MV46DGHCNS45.com. Type50, 3RL2Q58205687C8I9KC9MV46DGHCNS45.com. RRSIG ar: . OPT UDPsize=4096 DO (1125)
19:56:50.293239 IP (tos 0x0, ttl 64, id 49997, offset 0, flags [none], proto UDP (17), length 135)
127.0.0.1.53 > 127.0.0.1.52155: [bad udp cksum 0xfe86 -> 0xc2e6!] 35213 NXDomain q: A? o.com. 0/1/1 ns: com. SOA a.gtld-servers.net. nstld.verisign-grs.com. 1653044192 1800 900 604800 86400 ar: . OPT UDPsize=4096 (107)
With IPv4 disabled (not working)
$ cat /etc/unbound/unbound.conf.d/myunbound.conf
server:
    port: 53
    verbosity: 0
    num-threads: 2
    outgoing-range: 512
    num-queries-per-thread: 1024
    msg-cache-size: 32m
    interface: 0.0.0.0
    interface: 2001:myipv6addr:20
    rrset-cache-size: 64m
    cache-max-ttl: 86400
    infra-host-ttl: 60
    infra-lame-ttl: 120
    access-control: 127.0.0.0/8 allow
    access-control: 0.0.0.0/0 allow
    username: unbound
    directory: "/etc/unbound"
    use-syslog: no
    hide-version: yes
    do-ip4: no
    do-ip6: yes
    do-udp: yes
    do-tcp: yes
    # prefer-ip6: no
remote-control:
    control-enable: yes
    control-port: 953
    control-interface: 0.0.0.0
tcpdump output for dig k.com:
20:02:32.122198 IP6 (flowlabel 0x8897b, hlim 64, next-header UDP (17) payload length: 54) ::1.53805 > ::1.53: [bad udp cksum 0x0049 -> 0x676c!] 15532+ [1au] A? k.com. ar: . OPT UDPsize=4096 (46)
20:02:33.122126 IP (tos 0x0, ttl 64, id 59754, offset 0, flags [none], proto UDP (17), length 74)
127.0.0.1.35568 > 127.0.0.1.53: [bad udp cksum 0xfe49 -> 0xb0a8!] 15532+ [1au] A? k.com. ar: . OPT UDPsize=4096 (46)
20:02:38.126147 IP6 (flowlabel 0x8897b, hlim 64, next-header UDP (17) payload length: 54) ::1.53805 > ::1.53: [bad udp cksum 0x0049 -> 0x676c!] 15532+ [1au] A? k.com. ar: . OPT UDPsize=4096 (46)
20:02:39.122227 IP (tos 0x0, ttl 64, id 59906, offset 0, flags [none], proto UDP (17), length 74)
127.0.0.1.35568 > 127.0.0.1.53: [bad udp cksum 0xfe49 -> 0xb0a8!] 15532+ [1au] A? k.com. ar: . OPT UDPsize=4096 (46)
20:02:44.126099 IP6 (flowlabel 0x8897b, hlim 64, next-header UDP (17) payload length: 54) ::1.53805 > ::1.53: [bad udp cksum 0x0049 -> 0x676c!] 15532+ [1au] A? k.com. ar: . OPT UDPsize=4096 (46)
20:02:45.122363 IP (tos 0x0, ttl 64, id 60247, offset 0, flags [none], proto UDP (17), length 74)
127.0.0.1.35568 > 127.0.0.1.53: [bad udp cksum 0xfe49 -> 0xb0a8!] 15532+ [1au] A? k.com. ar: . OPT UDPsize=4096 (46)
I needed to add
interface: ::1
to the conf file.
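For reference, a minimal sketch of the relevant part of the working IPv6-only config (only the lines that change; everything else stays as above):

```
server:
    do-ip4: no
    do-ip6: yes
    interface: ::1                   # so queries sent to localhost (::1) reach Unbound
    interface: 2001:myipv6addr:20
```

With do-ip4: no, the interface: 0.0.0.0 listener is unavailable, so without an explicit ::1 listener dig's queries to localhost go unanswered even though upstream IPv6 transport works.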

Reading pcap files and fetch dport number with specific ip address

I'm new to Python. I'm reading a pcap file using Scapy, and I want to fetch the dport number for specific IP addresses. I have something like the below:
from scapy.all import *

pkts = rdpcap('example.pcap')
for pkt in pkts:
    if IP in pkt:
        ip_src = pkt[IP].src
        ip_dst = pkt[IP].dst
        if TCP in pkt:
            tcp_sport = pkt[TCP].sport
            tcp_dport = pkt[TCP].dport
            print(" IP src " + str(ip_src) + " TCP sport " + str(tcp_sport))
            print(" IP dst " + str(ip_dst) + " TCP dport " + str(tcp_dport))
        if (pkt[IP].src == "10.116.206.114") or (pkt[IP].dst == "10.236.138.184"):
            print("!")
With the above code I'm getting both results, as shown below:
IP src 10.116.206.114 TCP dport 443
IP dst 10.236.138.184 TCP dport 443
----
IP src 10.236.138.184 TCP dport 12516
IP dst 10.116.206.114 TCP dport 12516
...
and so on. But I want only the results for the specific src and dst IPs that I specify, like below; I don't want both dport numbers.
IP src 10.116.206.114 TCP dport 443
IP dst 10.236.138.184 TCP dport 443
----
IP src 10.116.206.114 TCP dport 22
IP dst 10.236.138.184 TCP dport 22
Please suggest a method and explain how to fetch the dport number for a specific IP address. Thank you!
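Not a definitive answer, but the selection logic can be sketched with plain tuples standing in for parsed packets (the IP addresses are taken from the question). In Scapy terms, the same condition would be pkt[IP].src == SRC and pkt[IP].dst == DST; using and rather than or is what excludes the reply direction that the or version also matched:

```python
# Sketch of one-directional packet selection; tuples of (src, dst, dport)
# stand in for parsed Scapy packets.
SRC = "10.116.206.114"
DST = "10.236.138.184"

packets = [
    ("10.116.206.114", "10.236.138.184", 443),    # request: keep
    ("10.236.138.184", "10.116.206.114", 12516),  # reply: skip
    ("10.116.206.114", "10.236.138.184", 22),     # request: keep
]

def matches(src, dst):
    """True only for packets going from SRC to DST (one direction)."""
    return src == SRC and dst == DST

selected = [dport for src, dst, dport in packets if matches(src, dst)]
print(selected)  # [443, 22]
```

The same predicate drops straight into the loop above: gate the dport prints on matches(pkt[IP].src, pkt[IP].dst).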

Docker container cannot connect to the Internet, ping works, wget fails

I have been trying to find a solution for days now and ended up asking here...
I have Debian 10 with Docker installed. A container connects to the other containers without any problem, but I cannot figure out what needs to be done so the containers can access the Internet.
A container can do a ping and gets the replies:
docker run -i -t busybox ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=53 time=10.156 ms
64 bytes from 8.8.8.8: seq=1 ttl=53 time=10.516 ms
64 bytes from 8.8.8.8: seq=2 ttl=53 time=10.218 ms
64 bytes from 8.8.8.8: seq=3 ttl=53 time=10.487 ms
Unfortunately, when I try to use wget it fails:
docker run -i -t busybox wget -S -T 5 http://google.com
Connecting to google.com (216.58.209.14:80)
wget: download timed out
The container's DNS seems to be properly set up:
docker run -i -t busybox cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
OS details and docker version:
uname -a
Linux host1 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64 GNU/Linux
docker -v
Docker version 19.03.8, build afacb8b7f0
Docker bridge network details:
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "970f8f04c009361b831d8ff8b4fa6d223645aadbbe93a27576d4934c0a8710e0",
        "Created": "2020-04-23T17:15:37.376767708+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
iptables is enabled and configured; however, I have also tried with cleared rules (ACCEPT all), still with no luck:
iptables -nvL
Chain INPUT (policy DROP 484 packets, 40785 bytes)
pkts bytes target prot opt in out source destination
2 116 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 state NEW
2501 309K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
3 192 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1337
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
8 498 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 state NEW
10 640 ACCEPT all -- tun0 * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
70 4889 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
46 3449 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
14 1164 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
24 1607 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
8 678 ACCEPT all -- tun0 * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- tun0 ens192 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 ACCEPT all -- ens192 tun+ 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
Chain OUTPUT (policy ACCEPT 10 packets, 733 bytes)
pkts bytes target prot opt in out source destination
1782 1233K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * tun0 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
24 1607 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
46 3449 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
24 1440 REJECT tcp -- ens192 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
46 3449 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
24 1607 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 516 packets, 43250 bytes)
pkts bytes target prot opt in out source destination
290 14045 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 18 packets, 1101 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 10 packets, 744 bytes)
pkts bytes target prot opt in out source destination
10 590 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80
Chain OUTPUT (policy ACCEPT 9 packets, 666 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Any idea why my containers cannot connect to the outside world?
EDIT:
I have tried completely clearing my iptables rules and allowing all traffic:
iptables -nvL
Chain INPUT (policy ACCEPT 12968 packets, 945K bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 83 packets, 7850 bytes)
pkts bytes target prot opt in out source destination
iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 12871 packets, 939K bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 37 packets, 1856 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 29 packets, 2447 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 29 packets, 2447 bytes)
pkts bytes target prot opt in out source destination
iptables -t mangle -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
In that case, even pings do not go out of the container:
docker run -i -t busybox ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss
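Two observations, offered as hedged notes rather than a confirmed diagnosis. First, in the original ruleset the DOCKER-USER chain rejects tcp arriving on ens192; a tcp-only REJECT does not touch ping (ICMP) but does kill the TCP replies to wget, which matches the symptom. Docker's packet-filtering documentation suggests accepting established/related connections in DOCKER-USER ahead of any REJECT, along these lines (ens192 as the Internet-facing interface is assumed from the ruleset above):

```
# Allow replies to connections the containers initiated, before the REJECT rule
iptables -I DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

Second, flushing all nat rules also removes Docker's MASQUERADE rule for 172.17.0.0/16, which would explain why even ping stopped working in the "cleared rules" test; restarting the Docker daemon recreates its rules.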

ASP.NET Core on Linux: 404 not found from LAN

I deployed my first .NET Core application in a Linux environment, using Lubuntu 18.04.
I first tried apache2, but since I had a problem reaching it from outside, I configured nginx and tried that, without much success.
My application is running on port 5000 via the dotnet command, as follows:
usr:/inetpub/www/WebApi$ dotnet WebApi.dll --urls=http://:::5000/
Hosting environment: Production
Content root path: /inetpub/www/WebApi
Now listening on: http://[::]:5000
Application started. Press Ctrl+C to shut down.
And this is the Program.cs file, where I read the --urls input parameter:
public class Program
{
    public static void Main(string[] args)
    {
        XmlDocument log4netConfig = new XmlDocument();
        log4netConfig.Load(File.OpenRead("log4net.config"));
        ILoggerRepository repo = LogManager.CreateRepository(Assembly.GetEntryAssembly(),
            typeof(log4net.Repository.Hierarchy.Hierarchy));
        log4net.Config.XmlConfigurator.Configure(repo, log4netConfig["log4net"]);
        //CreateWebHostBuilder(args).Build().Run();
        if (args != null && args.Count() > 0)
        {
            var configuration = new ConfigurationBuilder()
                .AddCommandLine(args)
                .Build();
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseConfiguration(configuration)
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();
            host.Run();
        }
        else
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseUrls("http://*:8080/")
                .Build();
            host.Run();
        }
    }
}
This is my default file inside nginx's sites-available folder.
server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
This is my nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http localhost/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen localhost:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen localhost:143;
#        protocol imap;
#        proxy on;
#    }
#}
This is my WebApi Core Startup.cs file
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<CookiePolicyOptions>(options =>
        {
            options.CheckConsentNeeded = context => true;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

        DapperExtensions.DapperExtensions.SqlDialect = new DapperExtensions.Sql.MySqlDialect();
        ConnectionString connectionString = new ConnectionString();
        connectionString._ConnectionString = new Parameters.AppSettingsParameter().getConnectionString();
        services.AddSingleton<IConnectionString>(connectionString);
        services.AddScoped<ICustomerRepository>(x => new Infrastructure.Dapper.EntitiesRepository.CustomerRepository(connectionString));
        services.AddScoped<IDeviceRepository>(x => new Infrastructure.Dapper.EntitiesRepository.DeviceRepository(connectionString));
        services.AddScoped<IWebApiVideoRepository>(x => new Infrastructure.Dapper.EntitiesRepository.WebApiVideoRepository(connectionString));
        services.AddScoped<IMessageServiceTokenRepository>(x => new Infrastructure.Dapper.EntitiesRepository.MessageServiceTokenRepository(connectionString));
        services.AddScoped<IPriceRepository>(x => new Infrastructure.Dapper.EntitiesRepository.PriceRepository(connectionString));
        services.AddScoped<IServiceRepository>(x => new Infrastructure.Dapper.EntitiesRepository.ServiceRepository(connectionString));
        services.AddScoped<IWebApiVideoDownloadFromDeviceRepository>(x => new Infrastructure.Dapper.EntitiesRepository.WebApiVideoDownloadFromDeviceRepository(connectionString));
        services.AddScoped<IWebApiVideoValidationRefusedRepository>(x => new Infrastructure.Dapper.EntitiesRepository.WebApiVideoValidationRefusedRepository(connectionString));
        services.AddScoped<ITokenKeyRepository>(x => new Infrastructure.Dapper.EntitiesRepository.TokenKeyRepository(connectionString));
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseCookiePolicy();
        app.UseMiddleware<RequestResponseLoggingMiddleware>();
        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Init}/{action=Initialize}");
        });
    }
}
From localhost I can reach the application running on port 5000.
Browsing to 192.168.1.46 (my Linux PC's address) from another computer returns the 404 error page.
This is the result of the nmap command:
PORT STATE SERVICE
80/tcp open http
This is the output of iptables -L:
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT tcp -- anywhere anywhere tcp dpt:http
2 ufw-before-logging-input all -- anywhere anywhere
3 ufw-before-input all -- anywhere anywhere
4 ufw-after-input all -- anywhere anywhere
5 ufw-after-logging-input all -- anywhere anywhere
6 ufw-reject-input all -- anywhere anywhere
7 ufw-track-input all -- anywhere anywhere
8 ACCEPT all -- anywhere anywhere
9 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 ufw-before-logging-forward all -- anywhere anywhere
2 ufw-before-forward all -- anywhere anywhere
3 ufw-after-forward all -- anywhere anywhere
4 ufw-after-logging-forward all -- anywhere anywhere
5 ufw-reject-forward all -- anywhere anywhere
6 ufw-track-forward all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
1 ufw-before-logging-output all -- anywhere anywhere
2 ufw-before-output all -- anywhere anywhere
3 ufw-after-output all -- anywhere anywhere
4 ufw-after-logging-output all -- anywhere anywhere
5 ufw-reject-output all -- anywhere anywhere
6 ufw-track-output all -- anywhere anywhere
Chain ufw-after-forward (1 references)
num target prot opt source destination
Chain ufw-after-input (1 references)
num target prot opt source destination
Chain ufw-after-logging-forward (1 references)
num target prot opt source destination
Chain ufw-after-logging-input (1 references)
num target prot opt source destination
Chain ufw-after-logging-output (1 references)
num target prot opt source destination
Chain ufw-after-output (1 references)
num target prot opt source destination
Chain ufw-before-forward (1 references)
num target prot opt source destination
Chain ufw-before-input (1 references)
num target prot opt source destination
Chain ufw-before-logging-forward (1 references)
num target prot opt source destination
Chain ufw-before-logging-input (1 references)
num target prot opt source destination
Chain ufw-before-logging-output (1 references)
num target prot opt source destination
Chain ufw-before-output (1 references)
num target prot opt source destination
Chain ufw-reject-forward (1 references)
num target prot opt source destination
Chain ufw-reject-input (1 references)
num target prot opt source destination
Chain ufw-reject-output (1 references)
num target prot opt source destination
Chain ufw-track-forward (1 references)
num target prot opt source destination
Chain ufw-track-input (1 references)
num target prot opt source destination
Chain ufw-track-output (1 references)
num target prot opt source destination
This is the output of netstat (column headers translated from Italian):
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 21391/mysqld
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 19096/nginx: master
tcp 0 0 0.0.0.0:55250 0.0.0.0:* LISTEN 17341/anydesk
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 738/systemd-resolve
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 29185/cupsd
tcp 0 0 0.0.0.0:7070 0.0.0.0:* LISTEN 17341/anydesk
tcp6 0 0 :::5000 :::* LISTEN 19464/dotnet
tcp6 0 0 :::80 :::* LISTEN 19096/nginx: master
tcp6 0 0 :::21 :::* LISTEN 1037/vsftpd
tcp6 0 0 ::1:631 :::* LISTEN 29185/cupsd
udp 0 0 0.0.0.0:60895 0.0.0.0:* 938/avahi-daemon: r
udp 0 0 127.0.0.53:53 0.0.0.0:* 738/systemd-resolve
udp 0 0 0.0.0.0:68 0.0.0.0:* 1691/dhclient
udp 0 0 0.0.0.0:631 0.0.0.0:* 29186/cups-browsed
udp 0 0 224.0.0.251:5353 0.0.0.0:* 29228/chrome
udp 0 0 224.0.0.251:5353 0.0.0.0:* 29228/chrome
udp 0 0 0.0.0.0:5353 0.0.0.0:* 938/avahi-daemon: r
udp6 0 0 :::39611 :::* 938/avahi-daemon: r
udp6 0 0 :::5353 :::* 938/avahi-daemon: r
This is the output of sudo tcpdump -i any tcp port 80 while I call my IP from another PC on the LAN:
00:06:31.785311 IP 192.168.1.44.63326 > WebApi.http: Flags [F.], seq 1, ack 1, win 256, length 0
00:06:31.785407 IP WebApi.http > 192.168.1.44.63326: Flags [F.], seq 1, ack 2, win 229, length 0
00:06:31.785599 IP 192.168.1.44.63362 > WebApi.http: Flags [S], seq 1225666604, win 64240, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
00:06:31.785635 IP WebApi.http > 192.168.1.44.63362: Flags [S.], seq 4261901272, ack 1225666605, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
00:06:31.787248 IP 192.168.1.44.63327 > WebApi.http: Flags [P.], seq 461:921, ack 138, win 256, length 460: HTTP: GET / HTTP/1.1
00:06:31.787272 IP WebApi.http > 192.168.1.44.63327: Flags [.], ack 921, win 245, length 0
00:06:31.788867 IP WebApi.http > 192.168.1.44.63327: Flags [P.], seq 138:275, ack 921, win 245, length 137: HTTP: HTTP/1.1 404 Not Found
00:06:31.790175 IP 192.168.1.44.63326 > WebApi.http: Flags [.], ack 2, win 256, length 0
00:06:31.790513 IP 192.168.1.44.63362 > WebApi.http: Flags [.], ack 1, win 256, length 0
00:06:31.832376 IP 192.168.1.44.63327 > WebApi.http: Flags [.], ack 275, win 255, length 0
I'm struggling with this and I can't figure out how to make it work.
The only thing I can say is that if my dotnet application is running, I get the 404 error page; if it's not running, I get a 502 Bad Gateway error.
What can I do to make it work?
PS: I added everything I could think of; if anything is missing, feel free to ask for more details.
Thank you all.
Somehow I suppose a file got corrupted during the publish process: I deleted and copied back all the files of my .NET Core project and things started to work.
That said, I will keep this question up, since it shares some configurations that might be useful to someone else; at this point I suppose they are correct :)
Thanks anyway for the support.

Linux Iptables string module does not match all packets

This is the first time I'm using the string matching module for iptables, and I've run into some strange behaviour I can't overcome.
In short, it looks like iptables passes "trimmed" packets to the module, so the module cannot search the whole packet for the string.
I can't find any information about this behaviour on the net. All the examples and tutorials should work like a charm, but they don't.
Now, the details.
OS: debian testing, kernel 3.2.0-3-686-pae
IPTABLES: iptables v1.4.14
OTHER:
tcpdump version 4.3.0,
libpcap version 1.3.0
# lsmod|grep ipt
ipt_LOG 12533 0
iptable_nat 12800 0
nf_nat 17924 1 iptable_nat
nf_conntrack_ipv4 13726 3 nf_nat,iptable_nat
nf_conntrack 43121 3 nf_conntrack_ipv4,nf_nat,iptable_nat
iptable_filter 12488 0
ip_tables 17079 2 iptable_filter,iptable_nat
x_tables 18121 6 ip_tables,iptable_filter,iptable_nat,xt_string,xt_tcpudp,ipt_LOG
I've reset all rules to defaults and added only two rules, in the following order:
iptables -t filter -A OUTPUT --protocol tcp --dport 80 --match string --algo bm --from 0 --to 1500 --string "/index.php" --jump LOG --log-prefix "matched :"
iptables -t filter -A OUTPUT --protocol tcp --dport 80 --jump LOG --log-prefix "OUT : "
The idea is obvious: match requests to any IP on port 80 containing the string /index.php and log them, and also log all traffic to port 80.
So here is iptables -L -xvn:
Chain INPUT (policy ACCEPT 3 packets, 693 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3 packets, 184 bytes)
pkts bytes target prot opt in out source destination
0 0 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "/index.php" ALGO name bm TO 1500 LOG flags 0 level 4 prefix "matched :"
0 0 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 LOG flags 0 level 4 prefix "OUT : "
Counters for rules are zeroed.
OK, now the browser goes to www.gentoo.org/index.php (this URL is just an example for illustration).
So this is the only URL requested in the browser.
And I get the following for iptables -t filter -L -xvn:
Chain INPUT (policy ACCEPT 61 packets, 16657 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 63 packets, 4394 bytes)
pkts bytes target prot opt in out source destination
1 380 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "/index.php" ALGO name bm TO 1500 LOG flags 0 level 4 prefix "matched :"
13 1392 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 LOG flags 0 level 4 prefix "OUT : "
So we have only ONE match for the 1st rule, but that is wrong. Here is the tcpdump output for the connection.
First, a request to IP 89.16.167.134:
<handshake omitted>
16:56:38.440308 IP 192.168.66.106.54704 > 89.16.167.134.80: Flags [P.], seq 1:373, ack 1, win 913,
options [nop,nop,TS val 85359 ecr 3026115253], length 372
<...>
0x0030: b45e dab5 4745 5420 2f69 6e64 6578 2e70 .^..GET./index.p
0x0040: 6870 2048 5454 502f 312e 310d 0a48 6f73 hp.HTTP/1.1..Hos
0x0050: 743a 2077 7777 2e67 656e 746f 6f2e 6f72 t:.www.gentoo.or
0x0060: 670d 0a55 7365 722d 4167 656e 743a 204d g..User-Agent:.M
0x0070: 6f7a 696c 6c61 2f35 2e30 2028 5831 313b ozilla/5.0.(X11;
Here we see one match in the HTTP GET. A few packets later there is a request for some more content, to IP 140.211.166.176:
16:56:38.772432 IP 192.168.66.106.59766 > 140.211.166.176.80: Flags [P.], seq 1:329, ack 1, win 913,
options [nop,nop,TS val 85442 ecr 110101513], length 328
<...>
0x0030: 0690 0409 4745 5420 2f20 4854 5450 2f31 ....GET./.HTTP/1
<...>
0x0130: 6566 6c61 7465 0d0a 436f 6e6e 6563 7469 eflate..Connecti
0x0140: 6f6e 3a20 6b65 6570 2d61 6c69 7665 0d0a on:.keep-alive..
0x0150: 5265 6665 7265 723a 2068 7474 703a 2f2f Referer:.http://
0x0160: 7777 772e 6765 6e74 6f6f 2e6f 7267 2f69 www.gentoo.org/i
0x0170: 6e64 6578 2e70 6870 0d0a 0d0a ndex.php....
Here we see "/index.php" again (this time in the Referer header).
But the LOG rule gives the following info:
kernel: [ 641.386182] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=89.16.167.134 LEN=60
kernel: [ 641.435946] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=89.16.167.134 LEN=52
kernel: [ 641.436226] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=89.16.167.134 LEN=424
kernel: [ 641.512594] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=89.16.167.134 LEN=52
kernel: [ 641.512762] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=89.16.167.134 LEN=52
kernel: [ 641.512819] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=89.16.167.134 LEN=52
kernel: [ 641.567496] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=60
kernel: [ 641.767707] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=52
kernel: [ 641.768328] matched :IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=380 <--
kernel: [ 641.768352] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=380
kernel: [ 641.990287] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=52
kernel: [ 641.990455] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=52
kernel: [ 641.990507] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=52
kernel: [ 641.990559] OUT : IN= OUT=eth0 SRC=192.168.66.106 DST=140.211.166.176 LEN=52
So we have a match on only one packet, the one going to 140.211.166.176. But where is the first match?
Even stranger, on another machine running Ubuntu I get different counters, e.g. 6 matches for the string.
I've also run the same simple test on my home notebook with Ubuntu 12.04, and there I get 2 clean matches.
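One mechanism that could explain counters varying between machines (this is my guess, not something confirmed by the xt_string docs): the match runs over each packet's payload in isolation, so the same byte stream segmented differently yields different per-packet match counts, and a pattern that straddles a segment boundary never matches at all. A rough userspace analogy, with plain files standing in for TCP segments and grep standing in for the Boyer-Moore search:

```shell
# Two files stand in for two consecutive TCP segments; the pattern
# "/index.php" is split across the segment boundary.
printf 'GET /inde' > pkt1
printf 'x.php HTTP/1.1\r\n' > pkt2

# Searching each "segment" on its own finds nothing
# (prints pkt1:0 and pkt2:0)...
grep -c '/index.php' pkt1 pkt2 || true

# ...but the reassembled stream contains the pattern once (prints 1).
cat pkt1 pkt2 | grep -c '/index.php'
```

Under this reading, whether a given GET matches would depend on where the kernel happens to cut the segments, which differs with MSS, window scaling and so on between machines.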
Maybe there is some kind of option to tune how data is passed to the module, or something similar?
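The only knobs I can see in the rule itself are the --from/--to offsets (my rules use --from 0 --to 1500), which bound the byte range of each packet that is searched; a pattern lying past --to is simply never found. Emulated with head standing in for the offset window:

```shell
# A "packet" whose pattern begins at byte offset 10.
printf 'xxxxxxxxxx/index.php' > pkt

# With a search window of only the first 12 bytes, the pattern is
# cut off and never matches (prints 0)...
head -c 12 pkt | grep -c '/index.php' || true

# ...widening the window to cover the whole payload finds it (prints 1).
head -c 20 pkt | grep -c '/index.php'
```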
UPDATE:
A very simple experiment; maybe if someone can explain this, it will give a hint.
Now two PCs: the server (CentOS 6.3) listens on port 80 with netcat and answers with a string:
# echo "List-IdServer"|nc -l 80
List-Id
And the client (Debian) connects to the server with nc, sends data and reads the answer:
% echo "List-Id"|nc butorabackup 80
List-IdServer
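Just to rule out the trivial explanation: both payloads really do contain the pattern, so on substring grounds alone every direction of this exchange ought to match the "List-Id" rules:

```shell
# The client's request and the server's reply both contain "List-Id"
# as a substring (each grep prints 1).
printf 'List-Id\n'       | grep -c 'List-Id'
printf 'List-IdServer\n' | grep -c 'List-Id'
```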
After the data exchange I have, on the SERVER SIDE:
Chain INPUT (policy ACCEPT 33 packets, 2164 bytes)
pkts bytes target prot opt in out source destination
1 60 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "List-Id" ALGO name bm TO 6500 LOG flags 0 level 4
5 276 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 LOG flags 0 level 4
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 23 packets, 4434 bytes)
pkts bytes target prot opt in out source destination
0 0 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:80 STRING match "List-Id" ALGO name bm TO 6500 LOG flags 0 level 4
5 282 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:80 LOG flags 0 level 4
and on CLIENT SIDE:
Chain INPUT (policy ACCEPT 28 packets, 2187 bytes)
pkts bytes target prot opt in out source destination
1 66 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:80 STRING match "List-Id" ALGO name bm TO 6500 LOG flags 0 level 4
5 282 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp spt:80 LOG flags 0 level 4
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 23 packets, 1721 bytes)
pkts bytes target prot opt in out source destination
1 60 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 STRING match "List-Id" ALGO name bm TO 6500 LOG flags 0 level 4
5 276 LOG tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 LOG flags 0 level 4
So there is NO OUTPUT rule match on the server: the server matched only INPUT, while the client matched both INPUT and OUTPUT.
I don't understand how this works :(
Thanks in advance.
