Send raw IP packets with a TUN device - Linux

I'm trying to programmatically construct and send IP packets through a TUN device.
I've set up the TUN device and the proper routes:
# ip tuntap add mode tun tun0
# ip link set tun0 up
# ip addr add 10.0.0.2/24 dev tun0
which results in:
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    600    0        0 wlp3s0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 tun0
192.168.0.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp3s0
$ ifconfig tun0
tun0: flags=4241<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1500
inet 10.0.0.2 netmask 255.255.255.0 destination 10.0.0.2
inet6 fe80::f834:5267:3a1:5d1d prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
IP forwarding is on:
# echo 1 > /proc/sys/net/ipv4/ip_forward
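The same setting can be applied with sysctl (and persisted in /etc/sysctl.conf):
# sysctl -w net.ipv4.ip_forward=1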
I've set up NAT for tun0 packets:
# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wlp3s0 -j MASQUERADE
# iptables -t nat -L -v
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any wlp3s0 10.0.0.0/24 anywhere
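As a sanity check, the MASQUERADE translations can be watched while traffic flows, if the conntrack tool happens to be installed:
# conntrack -L | grep 10.0.0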
Then I have a Python script that produces ICMP echo requests:
import os
from fcntl import ioctl
import struct
import time
import random

# pip install pypacker==4.0
from pypacker.layer3.ip import IP
from pypacker.layer3.icmp import ICMP

TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001
IFF_NO_PI = 0x1000  # deliver raw IP packets without the packet-info header

# Attach to the already-created tun0 device.
ftun = os.open("/dev/net/tun", os.O_RDWR)
ioctl(ftun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))

req_nr = 1
req_id = random.randint(1, 65000)
while True:
    # IP protocol 1 = ICMP; ICMP type 8 = echo request
    icmp_req = IP(src_s="10.0.0.2", dst_s="8.8.8.8", p=1) +\
               ICMP(type=8) +\
               ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
    os.write(ftun, icmp_req.bin())
    time.sleep(1)
    req_nr += 1
I can see packets originating from the tun0 interface:
# tshark -i tun0
1 0.000000000 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=1/256, ttl=64
2 1.001695939 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=2/512, ttl=64
3 2.003375319 10.0.0.2 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb673, seq=3/768, ttl=64
But the wlp3s0 interface is silent, so it seems the packets do not get NATed and routed out through wlp3s0, which is my WLAN card.
Any ideas what I am missing?

I'm running Debian 9.
It turns out that packet forwarding was blocked: the default policy for the FORWARD chain is DROP:
# iptables -L -v
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
So I changed the policy:
# iptables -P FORWARD ACCEPT
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
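As a narrower alternative to opening the policy completely, a rule pair scoped to the two interfaces should also work (a sketch, using the interface names from above):
# iptables -A FORWARD -i tun0 -o wlp3s0 -j ACCEPT
# iptables -A FORWARD -i wlp3s0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT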
Also, I had to change the IP packet source address to something other than 10.0.0.2, which is the preferred source address for the tun0 interface:
$ ip route
10.0.0.0/24 dev tun0 proto kernel scope link src 10.0.0.2
So I changed the packet source address to 10.0.0.4:
icmp_req = IP(src_s="10.0.0.4", dst_s="8.8.8.8", p=1) +\
ICMP(type=8) +\
ICMP.Echo(id=req_id, seq=req_nr, body_bytes=b"povilas-test")
Then the kernel started forwarding packets coming from tun0 to the gateway interface:
# tshark -i wlp3s0
5 0.008428567 192.168.0.103 → 8.8.8.8 ICMP 62 Echo (ping) request id=0xb5c7, seq=9/2304, ttl=63
6 0.041114028 8.8.8.8 → 192.168.0.103 ICMP 62 Echo (ping) reply id=0xb5c7, seq=9/2304, ttl=48 (request in 5)
The ping responses were also routed back to tun0:
# tshark -i tun0
1 0.000000000 10.0.0.4 → 8.8.8.8 ICMP 48 Echo (ping) request id=0xb5c7, seq=113/28928, ttl=64
2 0.035470191 8.8.8.8 → 10.0.0.4 ICMP 48 Echo (ping) reply id=0xb5c7, seq=113/28928, ttl=47 (request in 1)
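For completeness, those replies can be consumed from the same descriptor. A minimal sketch, assuming the ftun descriptor and pypacker imports from the script above (with IFF_NO_PI each read returns one whole IP packet):
while True:
    buf = os.read(ftun, 2048)
    reply = IP(buf)           # parse the raw IP packet
    if reply.p == 1:          # protocol 1 = ICMP
        print("ICMP from %s" % reply.src_s)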

Related

Strongswan: packets received and decrypted correctly but not forwarded

I have a LAN-to-LAN VPN tunnel between a Cisco CSR router and strongSwan. On the strongSwan side I see:
[root@ip-172-31-20-224 log]# strongswan status
Security Associations (1 up, 0 connecting):
tenant-13[2]: ESTABLISHED 66 minutes ago, 172.31.20.224[local_public_ip]...remote_public_ip[remote_public_ip]
tenant-13{3}: INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: cdf35340_i cb506e65_o
tenant-13{3}: 172.31.20.224/32 === remote_public_ip/32
tenant-13{147}: INSTALLED, TUNNEL, reqid 3, ESP in UDP SPIs: ca2c0328_i 0295d7bf_o
tenant-13{147}: 0.0.0.0/0 === 0.0.0.0/0
My crypto SAs allow 0/0 === 0/0, so all looks good.
I do receive encrypted packets on strongSwan and they are decrypted correctly. For example, on the virtual vti interface the decrypted UDP packets are visible:
[root@ip-172-31-20-224 log]# tcpdump -i vti13 -n udp port 3000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti13, link-type RAW (Raw IP), capture size 262144 bytes
11:19:57.834374 IP 192.168.1.116.54545 > X.X.X.X.hbci: UDP, length 340
Now, X.X.X.X is a public IP address and those packets should be forwarded (out via eth0, following the default route), but I do not see them with tcpdump:
[root@ip-172-31-20-224 log]# tcpdump -i eth0 -n host X.X.X.X
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
I have only one physical interface (eth0, the IPsec transport and default route) plus one virtual interface (for decrypted traffic), so after decryption the traffic should be sent back out via the same eth0 interface:
[root@ip-172-31-20-224 log]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:ab:39:97:b0:7e brd ff:ff:ff:ff:ff:ff
inet 172.31.20.224/20 brd 172.31.31.255 scope global dynamic eth0
valid_lft 2673sec preferred_lft 2673sec
inet6 fe80::ab:39ff:fe97:b07e/64 scope link
valid_lft forever preferred_lft forever
3: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
9: vti13@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 172.31.20.224 peer 89.68.162.135
inet 1.0.0.2/30 scope global vti13
valid_lft forever preferred_lft forever
inet6 fe80::5efe:ac1f:14e0/64 scope link
valid_lft forever preferred_lft forever
I have confirmed that:
routing is enabled
policy checks are disabled (sysctl -w net.ipv4.conf.default.rp_filter=0 and sysctl -w net.ipv4.conf.vti13.disable_policy=1)
the iptables INPUT, OUTPUT and FORWARD chains were empty with ACCEPT policies; I have also added specific rules and see 0 hits:
[root@ip-172-31-20-224 log]# iptables -I INPUT -i vti13 -j ACCEPT
[root@ip-172-31-20-224 log]# iptables -I FORWARD -i vti13 -j ACCEPT
[root@ip-172-31-20-224 log]# iptables -L -v -n
Chain INPUT (policy ACCEPT 9 packets, 1164 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- vti13 * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- vti13 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 6 packets, 776 bytes)
pkts bytes target prot opt in out source destination
I have added entries to PREROUTING and POSTROUTING just to check whether I see those packets there, and I can confirm I see them only in PREROUTING (so indeed the packet is not routed):
[root@ip-172-31-20-224 log]# iptables -L -v -n -t nat
Chain PREROUTING (policy ACCEPT 2 packets, 184 bytes)
pkts bytes target prot opt in out source destination
19192 25M DNAT udp -- vti13 * 0.0.0.0/0 0.0.0.0/0 udp dpt:3000 to:X.X.X.X:3000
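To narrow down where a packet is dropped inside netfilter, the TRACE target in the raw table can help; it logs every chain the packet traverses to the kernel log (remember to delete the rule afterwards):
[root@ip-172-31-20-224 log]# iptables -t raw -A PREROUTING -i vti13 -j TRACE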
I've tried looking through syslog (with kernel logging enabled), but did not spot anything interesting.
What is the problem? Why is my Linux box not forwarding those packets?
Thanks!
OK, found the solution: as per https://docs.strongswan.org/docs/5.9/features/routeBasedVpn.html, I had to disable charon.install_routes.
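In strongswan.conf terms, that is the following snippet (the file location varies by distribution, e.g. /etc/strongswan.conf or a file under /etc/strongswan.d/):
charon {
    install_routes = no
}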

UDP packets received on veth, caught by tcpdump, accepted by iptables, but not forwarded to netcat

I have two namespaces, srv1 and srv2, interconnected via a softswitch (P4 bmv2) with veth pairs. The softswitch does just simple forwarding. The veth interfaces inside the namespaces have IP addresses assigned to them (192.168.1.1 and 192.168.1.2 respectively). I can ping between the two namespaces using those IP addresses:
sudo ip netns exec srv1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=1.04 ms
but when I try netcat, I don't receive messages on the server side:
client:
sudo ip netns exec srv1 netcat 192.168.1.2 80 -u
hello!
server:
sudo ip netns exec srv2 netcat -l 80 -u
The interfaces receive the packets in the proper format. I verified with tcpdump in both namespaces and saw the packets being sent and received properly:
client:
sudo ip netns exec srv1 tcpdump -XXvv -i srv1p
[sudo] password for simo:
tcpdump: listening on srv1p, link-type EN10MB (Ethernet), capture size 262144 bytes
^C06:09:41.088601 IP (tos 0x0, ttl 64, id 14169, offset 0, flags [DF], proto UDP (17), length 35)
192.168.1.1.55080 > 192.168.1.2.http: [bad udp cksum 0x8374 -> 0x5710!] UDP, length 7
0x0000: 00aa bbcc dd02 00aa bbcc dd01 0800 4500 ..............E.
0x0010: 0023 3759 4000 4011 801d c0a8 0101 c0a8 .#7Y@.@.........
0x0020: 0102 d728 0050 000f 8374 6865 6c6c 6f21 ...(.P...thello!
0x0030: 0a .
1 packet captured
1 packet received by filter
0 packets dropped by kernel
server:
sudo ip netns exec srv2 tcpdump -XXvv -i srv2p
tcpdump: listening on srv2p, link-type EN10MB (Ethernet), capture size 262144 bytes
^C06:09:41.089232 IP (tos 0x0, ttl 64, id 14169, offset 0, flags [DF], proto UDP (17), length 35)
192.168.1.1.55080 > 192.168.1.2.http: [bad udp cksum 0x8374 -> 0x5710!] UDP, length 7
0x0000: 00aa bbcc dd02 00aa bbcc dd01 0800 4500 ..............E.
0x0010: 0023 3759 4000 4011 801d c0a8 0101 c0a8 .#7Y@.@.........
0x0020: 0102 d728 0050 000f 8374 6865 6c6c 6f21 ...(.P...thello!
0x0030: 0a .
1 packet captured
1 packet received by filter
0 packets dropped by kernel
On srv2 I added iptables rules to ACCEPT UDP packets on port 80 and to LOG them:
sudo ip netns exec srv2 iptables -t filter -A INPUT -p udp --dport 80 -j ACCEPT
sudo ip netns exec srv2 iptables -I INPUT -p udp --dport 80 -j LOG --log-prefix " IPTABLES " --log-level=debug
I can see the counters increasing on those entries and the packets being logged in /var/log/kern.log, but the message never reaches netcat's listening socket.
sudo ip netns exec srv2 iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1 33 LOG udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:80 LOG flags 0 level 7 prefix " IPTABLES "
4 133 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:80
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
kernel logs:
kernel: [581970.306032] IPTABLES IN=srv2p OUT= MAC=00:aa:bb:cc:dd:02:00:aa:bb:cc:dd:01:08:00 SRC=192.168.1.1 DST=192.168.1.2 LEN=33 TOS=0x00 PREC=0x00 TTL=64 ID=51034 DF PROTO=UDP SPT=48784 DPT=80 LEN=13
When I replace the softswitch with a Linux bridge, netcat works. I thought maybe the softswitch corrupts the packets, but tcpdump shows the right format. The UDP checksum is not correct, but it is generated like that by the source server, and it is the same when using the Linux bridge anyway, yet in that case it works. Is there a way to know why those packets do not reach the netcat server?
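One way to tell whether the receiving kernel is discarding the datagrams after netfilter (for example, during UDP checksum validation) is to watch the UDP counters in the server namespace between send attempts:
sudo ip netns exec srv2 netstat -su
If the "packet receive errors" / "InCsumErrors" counters under Udp: grow with each message, the bad checksum rather than the softswitch is the likely culprit.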

Odroidh2 Debian - Unable to ping network gateway / no network connectivity

I have an OdroidH2 with a Docker setup.
It was working fine for a few months and then suddenly, out of nowhere, it stopped having any internet/intranet connectivity.
Its connectivity goes through an Ethernet cable, not WiFi, and the interface that is supposed to have the connection is enp3s0, with IP address 192.168.1.100.
I have performed the following troubleshooting steps:
Restart (of course, always the first step)
Checked interface settings via ifconfig and also in /etc/network/interfaces
Checked the routing via route -n
Checked iptables (it was populated with the Docker configuration; I flushed the tables, including nat and mangle, set the default policy to ACCEPT for INPUT, FORWARD and OUTPUT, and restarted the networking service afterwards)
Checked if it was able to ping itself and the default gateway (it can ping itself, but not the gateway or any other device)
Checked if another device was able to ping the OdroidH2 (host unreachable)
Checked dmesg; for some reason, two firmware files could not be loaded (now installed, with a reboot after installation):
rtl_nic/rtl8168g-2.fw (after checking, this is the firmware for the network interfaces)
i915/glk_dmc_ver1_04.bin (didn't research this one much; something to do with runtime power management?)
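One additional check that separates a host problem from a link problem is ARP-level reachability of the gateway; a diagnostic sketch with iputils arping (interface and gateway as above):
# arping -I enp3s0 192.168.1.254
If ARP gets no reply either, the problem sits below IP: cable, switch/router port, or NIC.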
After all of these steps, I am still unable to get network connectivity going.
Below you can find information regarding my current configuration:
dmesg output
Stack Overflow does not allow me to include all of my dmesg output, so I had to put it on Google Drive: dmesg_output
/etc/hosts
127.0.0.1 localhost
192.168.1.100 dc1 dc1.samdom.andrewoliverhome.local samdom.andrewoliverhome.local
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
iptables -nvL output (after clearing and reloading the networking service)
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
/etc/resolv.conf
#nameserver 127.0.0.1
#nameserver 8.8.8.8
#nameserver 8.8.4.4
search samdom.andrewoliverhome.local
#domain samdom.andrewoliverhome.local
nameserver 192.168.1.100
nameserver 8.8.8.8
route -n output
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.254 0.0.0.0 UG 0 0 0 enp3s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker_gwbridge
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-debc10cb5b21
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 enp3s0
/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo enp2s0 enp3s0
#auto lo br0
iface lo inet loopback
# The primary network interface
iface enp2s0 inet dhcp
allow-hotplug enp2s0 enp3s0
#iface enp2s0 inet manual
# post-up iptables-restore < /etc/iptables.up.rules
# This is an autoconfigured IPv6 interface
#iface enp2s0 inet dhcp
iface enp3s0 inet static
address 192.168.1.100
netmask 255.255.255.0
# broadcast 169.254.99.255
network 192.168.1.0
gateway 192.168.1.254
#iface enp2s0 inet manual
#iface enp3s0 inet manual
#iface br0 inet static
# bridge_ports enp2s0 enp3s0
# address 192.168.1.100
# broadcast 192.168.1.255
# netmask 255.255.255.0
# gateway 192.168.1.254
#
In /etc/resolv.conf, the reason the primary nameserver is the host itself is that I am running a Docker container serving as a samba-ad-dc.
For the OdroidH2 to find all of my devices in the domain, it needs to send DNS queries to the Samba DC; if Samba cannot find a DNS record, it auto-forwards the query to 8.8.8.8.
Any help would be greatly appreciated (:
After all the troubleshooting, it turned out the issue was not within the OdroidH2 itself; it was my router.
The LAN port I was using had malfunctioned. I switched the Ethernet cable to a different LAN port and it worked.

Iptables NAT one-to-one

I use a Fedora Linux server (kernel 4.14.33-51.37.amzn1.x86_64) and I want to set up 1-to-1 NAT.
My scheme is:
My server has two network interfaces.
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:8a:59:b9:2d:b8 brd ff:ff:ff:ff:ff:ff
inet 172.10.1.72/25 brd 172.10.1.127 scope global eth0
valid_lft forever preferred_lft forever
inet 172.10.1.32/25 brd 172.10.1.127 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 172.10.1.39/25 brd 172.10.1.127 scope global secondary eth0
valid_lft forever preferred_lft forever
inet 172.10.1.101/25 brd 172.10.1.127 scope global secondary eth0
eth1:
inet 172.10.1.246/28 brd 172.10.1.255 scope global eth1
net.ipv4.ip_forward = 1
This is how the NAT should work:
(eth0) 172.10.1.101 <-> (server1) 192.168.1.10
(eth0) 172.10.1.32 <-> (server2) 192.168.1.11
and so on.
Routing table on the NAT server:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.10.1.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 172.10.1.241 0.0.0.0 UG 10001 0 0 eth1
10.0.0.0 172.10.1.1 255.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
172.10.1.0 0.0.0.0 255.255.255.128 U 0 0 0 eth0
172.10.1.240 0.0.0.0 255.255.255.240 U 0 0 0 eth1
192.168.1.0 172.10.1.241 255.255.255.240 UG 0 0 0 eth1
My current iptables settings:
iptables -nvL
Chain INPUT (policy ACCEPT 1726 packets, 115K bytes)
pkts bytes target prot opt in out source destination
1827 121K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
664 55128 ACCEPT all -- eth0 eth1 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- eth1 eth0 0.0.0.0/0 0.0.0.0/0
0 0 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
Chain OUTPUT (policy ACCEPT 2123 packets, 668K bytes)
pkts bytes target prot opt in out source destination
2123 668K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
and
iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 36 packets, 2476 bytes)
pkts bytes target prot opt in out source destination
8 528 DNAT all -- eth0 * 0.0.0.0/0 172.10.1.101 to:192.168.1.10
Chain INPUT (policy ACCEPT 36 packets, 2476 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 195 packets, 14344 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 202 packets, 14788 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT all -- * eth0 192.168.1.10 0.0.0.0/0 to:172.10.1.101
When I test my NAT server with telnet 172.10.1.101 4016, I get an error:
telnet: connect to address 172.10.1.101: Connection timed out
My server 192.168.1.10 is listening on port 4016.
On the NAT server I see no log entries for this.
But when I connect to another IP on the eth0 interface, I do see a log entry:
Jun 18 15:04:39 ip-172-10-1-72 kernel: [ 1245.059113] IN= OUT=eth0 SRC=172.10.1.39 DST=10.68.72.90 LEN=40 TOS=0x10 PREC=0x00 TTL=255 ID=57691 DF PROTO=TCP SPT=4016 DPT=47952 WINDOW=0 RES=0x00 ACK RST URGP=0
10.68.72.90 is my server before NAT. After adding the rules to iptables, I am not able to ping my IP 172.10.1.101.
It turned out I had the right rules:
iptables -t nat -A POSTROUTING -s 192.168.1.11 -o eth0 -j SNAT --to-source 172.10.1.32
iptables -t nat -A PREROUTING -d 172.10.1.32 -i eth0 -j DNAT --to-destination 192.168.1.11
The problem was in AWS: I had to turn off the "Source/destination check" for the NAT server in the network interface menu. Also, all the static routes must be in the route table for the subnet where my server is located, not on the servers 192.168.1.10 and 192.168.1.11.
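For reference, the same AWS change can be scripted with the CLI (the instance id below is a placeholder):
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check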

ip_forward doesn't work for TCP

I have a Linux box with two interfaces, wlan0 and wlan1. wlan0 connects to an external WLAN AP on subnet 192.168.43.x. On wlan1 a soft AP is running on subnet 192.168.60.x.
I ran the following commands to enable NAT and ip_forward:
iptables -F
iptables -F INPUT
iptables -F OUTPUT
iptables -F FORWARD
iptables -t nat -F
iptables -t mangle -F
iptables -A INPUT -j ACCEPT
iptables -A OUTPUT -j ACCEPT
iptables -A FORWARD -j ACCEPT
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
Another PC, e6220, is connected to the 192.168.60.x soft AP. I want to forward the PC's traffic to the external network 192.168.43.x.
network graph:
external AP (192.168.43.1) <~~~> {wlan0(192.168.43.2), <-- forward --> internal AP on wlan1(192.168.60.1)} <~~~> e6220 (192.168.60.46)
After adding the route on the PC, ICMP and UDP work fine; both can reach 192.168.43.1. But TCP doesn't work!
user@e6220:~ $ ping 192.168.43.1
PING 192.168.43.1 (192.168.43.1) 56(84) bytes of data.
64 bytes from 192.168.43.1: icmp_seq=1 ttl=63 time=9.74 ms
64 bytes from 192.168.43.1: icmp_seq=2 ttl=63 time=9.57 ms
^C
user@e6220:~ $ telnet 192.168.43.1
Trying 192.168.43.1...
telnet: Unable to connect to remote host: Network is unreachable
Capture on the Linux box's wlan1:
17:10:57.579994 IP 192.168.60.46.56412 > 192.168.43.1.telnet: Flags [S], seq 1764532258, win 29200, options [mss 1460,sackOK,TS val 2283575078 ecr 0,nop,wscale 7], length 0
17:10:57.580075 IP 192.168.60.1 > 192.168.60.46: ICMP net 192.168.43.1 unreachable, length 68
No packets seen on wlan0.
It's strange that the kernel says unreachable while UDP works.
route -n output:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.43.1 0.0.0.0 UG 0 0 0 wlan0
192.168.43.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
192.168.60.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan1
I think this may be kernel-version specific: on Linux 3.4.x both TCP and UDP are fine, while this issue is seen on Linux 3.10.61. I want to know if there's any way to fix it, via kernel configuration or an iptables command. Thanks!
Finally I found the solution.
The routing box is configured with CONFIG_IP_MULTIPLE_TABLES, i.e. policy routing. The routing rules are:
# ip rule list
0: from all lookup local
10000: from all fwmark 0xc0000/0xd0000 lookup legacy_system
13000: from all fwmark 0x10063/0x1ffff lookup local_network
15000: from all fwmark 0x0/0x10000 lookup legacy_system
16000: from all fwmark 0x0/0x10000 lookup legacy_network
17000: from all fwmark 0x0/0x10000 lookup local_network
23000: from all fwmark 0x0/0xffff uidrange 0-0 lookup main
32000: from all unreachable
So every forwarded packet falls through to "unreachable". TCP started working after adding the following rules:
# ip rule add iif wlan0 table main
# ip rule add iif wlan1 table main
# ip rule list
0: from all lookup local
9998: from all iif wlan0 lookup main
9999: from all iif wlan1 lookup main
10000: from all fwmark 0xc0000/0xd0000 lookup legacy_system
13000: from all fwmark 0x10063/0x1ffff lookup local_network
15000: from all fwmark 0x0/0x10000 lookup legacy_system
16000: from all fwmark 0x0/0x10000 lookup legacy_network
17000: from all fwmark 0x0/0x10000 lookup local_network
23000: from all fwmark 0x0/0xffff uidrange 0-0 lookup main
32000: from all unreachable
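A quick way to verify which table a forwarded packet will hit is to simulate the lookup with the source address and ingress interface (values from the network graph above):
# ip route get 192.168.43.1 from 192.168.60.46 iif wlan1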
But I still don't know why UDP worked before adding these two rules.
