I would like to use a Raspberry Pi with an attached LTE HAT as an IoT device.
So far I have managed to get the LTE modem working with ppp. I have IPv4 and IPv6 addresses, and from the device I can ping out and send data to an MQTT broker. Eventually I would like to be able to log into the device using ssh, but at the moment I cannot ping the device or establish a working ssh connection to it over ppp.
I suspect that something in the routing might be wrong, so here is the current IPv6 routing table:
route -6n
Kernel IPv6 routing table
Destination Next Hop Flag Met Ref Use If
::1/128 :: U 256 2 0 lo
2003:f5:cf00:1700::/64 :: U 202 3 0 eth0
2a02:3037:414:b1d4::/64 :: UA 256 1 0 ppp0
fe80::de6:3cd1:5bc0:1436/128 :: U 256 1 0 ppp0
fe80::xxxx:xxxx:xxxx:beb8/128 :: U 256 1 0 ppp0
fe80::/64 :: U 256 1 0 eth0
::/0 fe80::464e:6dff:fe5d:e7fc UG 202 2 0 eth0
::/0 fe80::de6:3cd1:5bc0:1436 UGDAe 1024 1 0 ppp0
::1/128 :: Un 0 6 0 lo
2003:f5:cf00:1700:xxxx:xxxx:xxxx:a761/128 :: Un 0 3 0 eth0
2003:f5:cf00:1700:xxxx:xxxx:xxxx:549f/128 :: Un 0 5 0 eth0
2a02:3037:414:b1d4:xxxx:xxxx:xxxx:beb8/128 :: Un 0 2 0 ppp0
fe80::xxxx:xxxx:xxxx:a761/128 :: Un 0 4 0 eth0
fe80::xxxx:xxxx:xxxx:beb8/128 :: Un 0 3 0 ppp0
ff00::/8 :: U 256 5 0 eth0
ff00::/8 :: U 256 3 0 ppp0
::/0 :: !n -1 1 0 lo
The following commands fail when entered on a different device:
ping -6 2a02:3037:414:b1d4:xxxx:xxxx:xxxx:beb8
ssh 2a02:3037:414:b1d4:xxxx:xxxx:xxxx:beb8
It would be nice if someone could give me a hint.
Thanks
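One hint based on the table above: there are two default routes, and the one via eth0 (metric 202) wins over the one via ppp0 (metric 1024), so replies to traffic arriving over ppp0 may leave via eth0 and be discarded upstream because of their source address. A quick check, assuming tcpdump is available on the Pi, is to watch ppp0 while pinging from another device:
sudo tcpdump -ni ppp0 icmp6
If the echo requests show up but no replies go out, look at local firewall rules and the route metrics; if nothing arrives at all, the mobile provider may be filtering inbound traffic.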
Related
On CentOS 8 (kernel 4.18.0-193.28.1.el8_2.x86_64), if I try this command:
ip link add team0:1 type dummy
I get the message "RTNETLINK answers: Invalid argument".
However, if I do the same thing on CentOS 7 (kernel 3.10.0-862.14.4.el7.x86_64), there is no error!
And if I cat /proc/net/dev, I can see there is a sub-interface:
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
eth0: 2489766199 690315 0 0 0 0 0 0 96022876 713917 0 0 0 0 0 0
lo: 402582904 916378 0 0 0 0 0 0 402582904 916378 0 0 0 0 0 0
team0:1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
What is wrong? Is there a bug in CentOS 7? Shouldn't /proc/net/dev show only network interfaces, with no sub-interfaces?
I couldn't find an answer after all my searching and googling. To those who have a better understanding of this: please be so kind and help me ~😿
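One likely explanation: newer kernels reject a ':' in a network device name outright, because the colon is reserved for the old ifconfig alias notation, while the 3.10 kernel in CentOS 7 still accepted it. That is also why team0:1 then shows up in /proc/net/dev as a real device rather than a sub-interface. A quick sketch of the difference on a recent kernel:
ip link add team0:1 type dummy    # RTNETLINK answers: Invalid argument
ip link add team0_1 type dummy    # a colon-free name is accepted
ip -br link show team0_1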
Pinging from 10.18.90.139 to 10.18.90.254 with the normal ICMP protocol via scapy in Python gets no answer, but ping gets a reply. What could be the reason?
I tried to ping the IP via scapy:
>>> ip = "10.18.90.254"
>>> from scapy.all import sr1, IP, ICMP
>>> sr1(IP(ip/ICMP()))
Begin emission:
.......Finished sending 1 packets.
................................................................................^C
Received 1073 packets, got 0 answers, remaining 1 packets
I checked that there is no proxy:
[root@run]# env | grep -i pro
[root@run]# env | grep -i ht
But ping works fine:
PING 10.18.90.254 (10.18.90.254) 56(84) bytes of data.
64 bytes from 10.18.90.254: icmp_seq=1 ttl=64 time=0.315 ms
64 bytes from 10.18.90.254: icmp_seq=2 ttl=64 time=0.264 ms
^C
--- 10.18.90.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1462ms
rtt min/avg/max/mdev = 0.264/0.289/0.315/0.030 ms
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3
10.18.90.0 0.0.0.0 255.255.255.0 U 0 0 0 eth4
10.9.67.0 0.0.0.0 255.255.255.0 U 0 0 0 eth5
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth5
169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth4
135.0.0.0 10.9.67.1 255.0.0.0 UG 0 0 0 eth5
0.0.0.0 10.18.90.254 0.0.0.0 UG 0 0 0 eth4
Try using something like this. IP(ip/ICMP()) never sets the destination address (the string just ends up as payload bytes), so dst has to be passed explicitly:
sr1(IP(dst="10.18.90.254") / ICMP())
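For example, with an explicit timeout so sr1() returns instead of sniffing until interrupted (a small sketch):
>>> from scapy.all import sr1, IP, ICMP
>>> reply = sr1(IP(dst="10.18.90.254") / ICMP(), timeout=2)
>>> reply.summary() if reply else "no answer"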
Pinging a system on the internet works:
root@553c9e5ce5ea:/# ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=9.61 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 9.619/9.619/9.619/0.000 ms
Pinging a system on the internal (corporate) network does not work:
root@553c9e5ce5ea:/# ping -c 1 10.97.179.110
PING 10.97.179.110 (10.97.179.110) 56(84) bytes of data.
--- 10.97.179.110 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
The routing in the container is straightforward:
root@553c9e5ce5ea:/# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
Pinging the same system from the docker host works fine:
host » ping -c 1 10.97.179.110
PING 10.97.179.110 (10.97.179.110) 56(84) bytes of data.
64 bytes from 10.97.179.110: icmp_seq=1 ttl=60 time=4.70 ms
--- 10.97.179.110 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.703/4.703/4.703/0.000 ms
The system is reachable via a VPN interface on the docker host:
host » route -n | grep '^10\.'
10.0.0.0 0.0.0.0 255.248.0.0 U 0 0 0 tunsnx
10.8.0.0 0.0.0.0 255.252.0.0 U 0 0 0 tunsnx
10.12.0.0 0.0.0.0 255.254.0.0 U 0 0 0 tunsnx
10.14.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tunsnx
10.15.0.0 0.0.0.0 255.255.240.0 U 0 0 0 tunsnx
10.15.20.0 0.0.0.0 255.255.252.0 U 0 0 0 tunsnx
10.15.24.0 0.0.0.0 255.255.248.0 U 0 0 0 tunsnx
10.15.32.0 0.0.0.0 255.255.224.0 U 0 0 0 tunsnx
10.15.64.0 0.0.0.0 255.255.192.0 U 0 0 0 tunsnx
10.15.113.108 0.0.0.0 255.255.255.255 UH 0 0 0 tunsnx
10.15.128.0 0.0.0.0 255.255.128.0 U 0 0 0 tunsnx
10.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 tunsnx
10.32.0.0 0.0.0.0 255.224.0.0 U 0 0 0 tunsnx
10.64.0.0 0.0.0.0 255.192.0.0 U 0 0 0 tunsnx
10.128.0.0 0.0.0.0 255.128.0.0 U 0 0 0 tunsnx
Why is the host not properly routing the traffic coming from the docker container?
EDIT
I have been checking the iptables counters, and this is what I see after ping -c 100 (to a host on the VPN):
» sudo iptables -x -v --line-numbers -L FORWARD | head
Chain FORWARD (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 100 8400 DOCKER-ISOLATION all -- any any anywhere anywhere
2 0 0 ACCEPT all -- any docker0 anywhere anywhere ctstate RELATED,ESTABLISHED
3 0 0 DOCKER all -- any docker0 anywhere anywhere
4 100 8400 ACCEPT all -- docker0 !docker0 anywhere anywhere
5 0 0 ACCEPT all -- docker0 docker0 anywhere anywhere
6 0 0 ACCEPT all -- any br-ad32be160e27 anywhere anywhere ctstate RELATED,ESTABLISHED
7 0 0 DOCKER all -- any br-ad32be160e27 anywhere anywhere
8 0 0 ACCEPT all -- br-ad32be160e27 !br-ad32be160e27 anywhere anywhere
Pinging a host on the internet (8.8.8.8) gives this:
» sudo iptables -x -v --line-numbers -L FORWARD | head
Chain FORWARD (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 200 16800 DOCKER-ISOLATION all -- any any anywhere anywhere
2 100 8400 ACCEPT all -- any docker0 anywhere anywhere ctstate RELATED,ESTABLISHED
3 0 0 DOCKER all -- any docker0 anywhere anywhere
4 100 8400 ACCEPT all -- docker0 !docker0 anywhere anywhere
5 0 0 ACCEPT all -- docker0 docker0 anywhere anywhere
6 0 0 ACCEPT all -- any br-ad32be160e27 anywhere anywhere ctstate RELATED,ESTABLISHED
7 0 0 DOCKER all -- any br-ad32be160e27 anywhere anywhere
8 0 0 ACCEPT all -- br-ad32be160e27 !br-ad32be160e27 anywhere anywhere
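Reading these counters: in the VPN case the 100 echo requests leave via rule 4 (docker0 to !docker0), but rule 2, which would match the RELATED,ESTABLISHED replies coming back into docker0, stays at 0; for 8.8.8.8 both rules count 100. So the requests are forwarded, but the replies never make it back to the host. A way to see where they get lost (a diagnostic sketch, assuming the standard Docker NAT setup) is to check whether the container subnet is masqueraded when leaving via tunsnx, and what actually crosses the VPN interface:
sudo iptables -t nat -L POSTROUTING -v -n    # is there a MASQUERADE rule covering 172.17.0.0/16, and is it counting?
sudo tcpdump -ni tunsnx icmp                 # do requests leave with the host's VPN address, and do replies return?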
I have a simple network with three Linux systems running CentOS (kernel 2.6).
Linux 1 (eth1: 192.138.14.1) ----- (eth4: 192.138.14.4) Linux 2 (eth2: 192.138.4.3) ----- (eth3: 192.138.4.2) Linux 3
I am unable to ping Linux 3 from Linux 1. What I am able to ping though is from Linux 1 to Linux 2 (eth2) and from Linux 3 to Linux 2 (eth4). This means from Linux 1, I am able to ping 192.138.4.3 but not 192.138.4.2.
Following is the output of the route -n command on Linux 1:
Linux1# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.138.14.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
192.138.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
10.135.18.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1
169.254.0.0 0.0.0.0 255.255.0.0 U 1005 0 0 eth3
0.0.0.0 10.135.18.1 0.0.0.0 UG 0 0 0 eth0
In Linux 2:
Linux2# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.138.15.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
192.138.14.0 192.138.14.4 255.255.255.0 UG 0 0 0 eth4
192.138.14.0 0.0.0.0 255.255.255.0 U 0 0 0 eth4
192.138.4.0 192.138.4.3 255.255.255.0 UG 0 0 0 eth2
192.138.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2
10.135.18.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.138.16.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1
169.254.0.0 0.0.0.0 255.255.0.0 U 1004 0 0 eth2
169.254.0.0 0.0.0.0 255.255.0.0 U 1005 0 0 eth3
169.254.0.0 0.0.0.0 255.255.0.0 U 1006 0 0 eth4
0.0.0.0 10.135.18.1 0.0.0.0 UG 0 0 0 eth0
In Linux 3:
Linux3# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.138.14.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3
192.138.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3
10.135.18.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1005 0 0 eth3
0.0.0.0 10.135.18.1 0.0.0.0 UG 0 0 0 eth0
I have enabled IP forwarding on Linux 2:
Linux2# vi /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 1
Linux2# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_sack = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
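As a quick runtime check, this should print 1 on Linux 2 if forwarding is really on:
cat /proc/sys/net/ipv4/ip_forward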
The result of iptables -L on Linux 2:
Linux2# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
To ping Linux 3 from Linux 1, should I be adding specific rules for ICMP in iptables? If not, what am I missing?
I guess the problem is that the hosts are not on the same network. When Linux 1 tries to send a packet to 192.138.4.2, it looks at the routing table and sees that it should go out eth1. But it also sees that there is no gateway, so it assumes the destination is on the same network and sends an ARP request for 192.138.4.2, which gets no answer.
You can verify this by running tcpdump -i eth1 arp on Linux 1 and watching the request go out with no response. You can also just run arp and see that the entry is incomplete.
So basically your routing table should include a gateway for networks that packets need to be routed to.
For example, instead of
192.138.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
It should be something like
192.138.4.0 192.138.14.4 255.255.255.0 UG 0 0 0 eth1
And same on the other side.
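Concretely, a sketch using the addresses from the question (net-tools syntax, to match the route -n output above; run as root):
# On Linux 1: reach 192.138.4.0/24 via Linux 2's eth4 address
route del -net 192.138.4.0 netmask 255.255.255.0 dev eth1
route add -net 192.138.4.0 netmask 255.255.255.0 gw 192.138.14.4
# On Linux 3: the mirror image for 192.138.14.0/24 via Linux 2's eth2 address
route del -net 192.138.14.0 netmask 255.255.255.0 dev eth3
route add -net 192.138.14.0 netmask 255.255.255.0 gw 192.138.4.3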
Is there a way in Linux to simulate slow inbound traffic to my server on a specific port? I looked at netem, but it only seems to apply to the whole link.
An example of policing all traffic matching TCP (protocol 6) with a destination port of 54000 at 256 kbit/s inbound on eth0, using tc...
As root...
# Attach the special ingress qdisc so inbound traffic on eth0 can be filtered:
tc qdisc add dev eth0 handle ffff: ingress
# Match TCP (IP protocol 6) with destination port 54000 and police it to
# 256 kbit/s, dropping packets over the limit:
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
match ip protocol 6 0xff \
match ip dport 54000 0xffff police rate 256kbit burst 10k drop \
flowid :1
You can monitor it like this; notice the dropped count for the ffff: qdisc below.
[mpenning@Bucksnort ~]$ sudo tc -s qdisc show
qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 17796311917 bytes 5850423 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
Sent 140590 bytes 1613 pkt (dropped 214, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
[mpenning@Bucksnort ~]$
To delete all ingress traffic filters:
tc qdisc del dev eth0 ingress
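If the goal is added latency rather than a rate cap, netem cannot be attached to the ingress side directly, but matched traffic can be redirected through an ifb device and delayed there. A sketch, assuming the ifb module is available and the ingress qdisc from the example above has been removed first:
# Create and bring up an intermediate functional block device:
modprobe ifb numifbs=1
ip link set ifb0 up
# Redirect only TCP traffic to port 54000 from eth0's ingress to ifb0:
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip protocol 6 0xff \
    match ip dport 54000 0xffff \
    action mirred egress redirect dev ifb0
# Delay the redirected traffic by 200 ms; everything else is unaffected:
tc qdisc add dev ifb0 root netem delay 200ms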
Have a look at JMeter. Depending on what type of traffic you need, it may already provide the functionality.