Linux WiFi AP: refresh `iw dev wlan0 station dump` output (inactive time)

I have a Linux (3.14.36) embedded board acting as a WiFi AP.
The WiFi chipset doesn't support monitoring mode.
My laptop (the client) is connected to this board over WiFi.
The WiFi AP acts as a network bridge to another computer and doesn't provide an IP address to the client (the WiFi AP only has the MAC address of the client).
I want to monitor the signal strength of the WiFi AP <-> client connection and be able to trigger a "refresh" of the signal strength value.
Running iw dev wlan0 station dump gives me:
Station xx:xx:xx:xx:xx:xx (on wlan0)
inactive time: 123820 ms // <-- The problem
rx bytes: 10291
rx packets: 60
...
signal: -65 dBm // What I want to refresh
...
I understand that the signal strength is updated every time there is network activity. (So, in the example above, it was last refreshed 123 s ago.)
How can I force a refresh of this value (for example by making the AP send "something" to the client)? Note that the board/WiFi driver/WiFi device doesn't support tools such as iwconfig.

For anyone finding this thread now:
I had this issue and my solution was to ping the device before doing the iw dump, e.g.
Get the list of connected MAC addresses:
iw dev wlan0 station dump | grep 'Station' | awk '{print $2}'
Then get the IP address from these MAC addresses (alternatively you could use arp):
ip neigh | grep 'ma:ca:dd:re:ss:ss' | awk '{print $1}'
Then ping each of those:
ping -c 1 'IP.address'
Then get the refreshed signal for that MAC address
iw dev wlan0 station get 'ma:ca:dd:re:ss:ss' | grep 'signal' | awk '{print $2}'
I wrapped all this up in a Python script and it seemed to give reliable data.
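For reference, a minimal sketch of such a script (untested; it assumes the clients are reachable over IP from the machine running it, and the helper names and the one-second ping timeout are only illustrative):
import subprocess

def station_macs(iface="wlan0"):
    # every associated client shows up as a "Station <mac> (on <iface>)" line
    out = subprocess.check_output(["iw", "dev", iface, "station", "dump"]).decode()
    return [l.split()[1] for l in out.splitlines() if l.startswith("Station")]

def ip_for_mac(mac):
    # look the MAC up in the neighbour table (same idea as the ip neigh | grep above)
    out = subprocess.check_output(["ip", "neigh"]).decode()
    for l in out.splitlines():
        if mac.lower() in l.lower():
            return l.split()[0]
    return None

def signal_for(mac, iface="wlan0"):
    out = subprocess.check_output(["iw", "dev", iface, "station", "get", mac]).decode()
    for l in out.splitlines():
        if "signal:" in l:
            return l.split()[1]          # dBm value as printed by iw
    return None

for mac in station_macs():
    ip = ip_for_mac(mac)
    if ip:
        # a single ping generates just enough traffic to refresh the RSSI
        subprocess.call(["ping", "-c", "1", "-W", "1", ip])
    print(mac, signal_for(mac))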

I'll give it a try:
You're on an embedded system, so I guess you have busybox. You have no IP, but you can then use arping (if this applet is not enabled in your busybox build, change the config) to send something small and useless that may wake the link up. Which IP should you use for your ARP requests? Well, it seems you can use a "dummy" IP.
I'm running this on a PC over a wired interface, but I do have busybox with its arping, so here is the concept:
jbm@sumo:~/sandbox/iw$ sudo busybox arping -w 1 -U -I eth0 0.0.0.0
ARPING to 0.0.0.0 from 192.168.1.66 via eth0
Sent 2 probe(s) (2 broadcast(s))
Received 0 reply (0 request(s), 0 broadcast(s))
The useful thing is that, even with the "dummy" IP, I can check with tcpdump that the ARP requests do actually go out on the wire (or into the air, in your case):
jbm@sumo:~$ sudo tcpdump -i eth0 -v arp
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:42:20.111100 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 0.0.0.0 (Broadcast) tell sumo, length 28
10:42:21.111206 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 0.0.0.0 (Broadcast) tell sumo, length 28
^C
2 packets captured
2 packets received by filter
0 packets dropped by kernel
So sending ARP requests on your wireless interface may be enough to "wake up" your connection and refresh your RSSI.
EDIT:
See the interesting uses and properties of IP 0.0.0.0 here:
https://en.wikipedia.org/wiki/0.0.0.0
EDIT 2:
Rethinking this, I realized there will be a problem if your wireless interface does not have an IP itself, which, if I'm not mistaken, may well be the situation in your bridging configuration. In that case, arping will have no source address with which to build its request packets (nor will it know how to listen for responses), and it will fail.
But you can create your own "mini unidirectional arping": use an AF_PACKET socket and build your own ARP request packet with a dummy/random source IP address. It will be unidirectional because the response to your forged ARP request, if any, would go to the random source IP, which may (and preferably should) not exist. But on the principle of just waking your wireless connection up by sending "something", that may do the trick.
For inspiration on how to code this "mini unidirectional arping", have a look at the busybox implementation used by its udhcpc/udhcpd (it's simpler than the full-blown arping busybox applet):
https://git.busybox.net/busybox/tree/networking/udhcp/arpping.c#n38
The from_ip parameter is what you want to forge. You can use your actual MAC as from_mac, just for the sake of dignity :-) You don't even have to wait for a response (starting line 89), so that would be something like 50 lines of C code + a little main if you want to add a few options to it.
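If C feels like too much, the same idea also fits in a few lines of Python on a Linux kernel with AF_PACKET support. This is an untested sketch; the interface name and the 0.0.0.0 sender/target IPs are just placeholders for whatever you want to forge:
import fcntl
import socket
import struct

IFACE = "wlan0"            # placeholder: your AP interface
SENDER_IP = "0.0.0.0"      # forged "from" IP, as discussed above
TARGET_IP = "0.0.0.0"      # we only want to generate traffic

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
s.bind((IFACE, 0))

# read the interface's real MAC (SIOCGIFHWADDR), "for the sake of dignity"
ifreq = fcntl.ioctl(s.fileno(), 0x8927, struct.pack("256s", IFACE.encode()))
src_mac = ifreq[18:24]
bcast = b"\xff" * 6
zero_mac = b"\x00" * 6

# ARP request: htype=Ethernet, ptype=IPv4, hlen=6, plen=4, oper=1 (request)
arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
arp += src_mac + socket.inet_aton(SENDER_IP)    # sender MAC / sender IP (forged)
arp += zero_mac + socket.inet_aton(TARGET_IP)   # target MAC / target IP

# Ethernet header: broadcast destination, our MAC, EtherType 0x0806 (ARP)
frame = bcast + src_mac + struct.pack("!H", 0x0806) + arp

s.send(frame)   # fire and forget: no reply is expected or read
s.close()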

Related

How can I determine what my IP address is on a different network?

I'm trying to launch an SNMP query from a pod deployed in an Azure cloud to an internal host on my company's network. The snmpget queries work well from the pod to, say, a public SNMP server, but the query to my target host results in:
root@status-tanner-api-86557c6786-wpvdx:/home/status-tanner-api/poller# snmpget -c public -v 2c 192.168.118.23 1.3.6.1.2.1.1.1.0
Timeout: No Response from 192.168.118.23.
An Nmap scan shows that the SNMP port is open|filtered:
Nmap scan report for 192.168.118.23
Host is up (0.16s latency).
PORT STATE SERVICE
161/udp open|filtered snmp
I requested a new rule to allow 161/UDP from my pod, but I suspect that I requested the rule for the wrong IP address.
My theory is that I could determine the IP address my pod uses to reach this target host if I could get inside the target host, open a connection from the pod, and use netstat to see which IP address my pod is using. The problem is that I currently have no access to that host.
So, my question is: how can I see from which address my pod is reaching the target host? Some sort of public address is obviously being used, but I can't tell which one it is without getting into the target host.
I'm pretty sure I'm missing an important network tool that should help me in this situation. Any suggestion would be profoundly appreciated.
By default, Kubernetes will use your node IP to reach the other servers, so you need to make a firewall rule using your node IP.
I've tested this using a busybox pod to reach another server on my network.
Here is my lab-1 node, with IP 10.128.0.62:
rabello@lab-1:~$ ip ad | grep ens4 | grep inet
inet 10.128.0.62/32 scope global dynamic ens4
On this node I have a busybox pod with the IP 192.168.251.219:
$ kubectl exec -it busybox sh
/ # ip ad | grep eth0 | grep inet
inet 192.168.251.219/32 scope global eth0
When we perform a ping test to another server on the network (server-1), we get:
/ # ping 10.128.0.61
PING 10.128.0.61 (10.128.0.61): 56 data bytes
64 bytes from 10.128.0.61: seq=0 ttl=63 time=1.478 ms
64 bytes from 10.128.0.61: seq=1 ttl=63 time=0.337 ms
^C
--- 10.128.0.61 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.337/0.907/1.478 ms
Using tcpdump on server-1, we can see the ping requests from my pod using the node IP of lab-1:
rabello@server-1:~$ sudo tcpdump -n icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:16:09.291714 IP 10.128.0.62 > 10.128.0.61: ICMP echo request, id 6230, seq 0, length 64
10:16:09.291775 IP 10.128.0.61 > 10.128.0.62: ICMP echo reply, id 6230, seq 0, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
Make sure you have an appropriate firewall rule to allow your node (or your VPC range) to reach your destination, and check that your VPN is up (if you have one).
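If you're not sure which node IP to put in the rule, kubectl can show it (the INTERNAL-IP / EXTERNAL-IP columns of the node list are what the rule needs to cover):
kubectl get nodes -o wide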
I hope it helps! =)

GRE tunnel problem between two hosts (VPS and dedicated server)

Hello guys, I need to resolve this problem (all servers run CentOS 7): I'm trying to create a GRE tunnel between a VPS (in Italy - OpenVZ) and a dedicated server (in Germany), but they do not communicate internally (ping and ssh command tests). Next, I created a GRE tunnel between the VPS (in Italy - OpenVZ) and a VPS (in France - KVM OpenStack), and they communicate; then I created a tunnel between the VPS (in France - KVM OpenStack) and the dedicated server (in Germany), and they communicate. I cannot understand why the VPS (in Italy - OpenVZ) and the dedicated server (in Germany) do not communicate. Any ideas on how I can fix this (I also tried with iptables disabled; firewalld is not enabled)? Thanks.
In other words:
Working attempts (by this I mean that I managed to successfully create the GRE tunnel between these machines):
The VPS (in France) and the VPS (in Italy) communicate internally (ping and ssh command tests)
The VPS (in France) and the dedicated server (in Germany) communicate internally (ping and ssh command tests)
Problem (by this I mean that I could not successfully create the GRE tunnel between these machines):
The VPS (in Italy) and the dedicated server (in Germany) do not communicate internally (ping and ssh command tests). I also asked the hosting providers whether they impose any restrictions, but they said no.
My configuration:
VPS command for tunnel:
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
iptunnel add gre1 mode gre local VPS_IP remote DEDICATED_SERVER_IP ttl 255
ip addr add 192.168.168.1/30 dev gre1
ip link set gre1 up
Dedicated server command for tunnel:
iptunnel add gre1 mode gre local DEDICATED_SERVER_IP remote VPS_IP ttl 255
ip addr add 192.168.168.2/30 dev gre1
ip link set gre1 up
[root@VPS ~]# ping 192.168.168.2
PING 192.168.168.2 (192.168.168.2) 56(84) bytes of data.
^C
--- 192.168.168.2 ping statistics ---
89 packets transmitted, 0 received, 100% packet loss, time 87999ms
[root@DE ~]# ping 192.168.168.1
PING 192.168.168.1 (192.168.168.1) 56(84) bytes of data.
^C
--- 192.168.168.1 ping statistics ---
92 packets transmitted, 0 received, 100% packet loss, time 91001ms
[root@VPS ~]# tcpdump -i venet0 "proto gre"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on venet0, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
^C
0 packets captured
1 packet received by filter
0 packets dropped by kernel
[root@DE ~]# tcpdump -i enp2s0 "proto gre"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp2s0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@VPS ~]# lsmod | grep gre
ip_gre 4242 -2
ip_tunnel 4242 -2 sit,ip_gre
gre 4242 -2 ip_gre
[root@DE ~]# lsmod | grep gre
ip_gre 22707 0
ip_tunnel 25163 1 ip_gre
gre 13144 1 ip_gre
Console image with full command output
If IP forwarding is required for the tunnel to work, note that appending net.ipv4.ip_forward=1 to /etc/sysctl.conf does not apply it on its own; you need to run /sbin/sysctl -p.
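For example, to apply the setting and double-check that it took effect:
/sbin/sysctl -p
sysctl net.ipv4.ip_forward   # should print net.ipv4.ip_forward = 1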
Also, what is the output of ip tunnel show and ip route show on both ends?

Can't receive hex response from iMatic board

Good day,
This is going to be long. I'm trying to communicate with the "SainSmart iMatic with RJ45" board, which is used together with the "SainSmart 16-Channel 12V Relay Module".
Basically, I'm able to send hex commands to the board successfully, but I can't receive a response from the board when required. What do I mean by this?
I have a laptop with Ubuntu 14.04.4 LTS connected directly to the board via an Ethernet straight-through cable (a crossover cable is no longer needed). I have a configuration for this type of network (only two devices). The IP of the iMatic board is fixed, 192.168.1.4, with port 3000. My laptop has a fixed IP of 192.168.1.2, with netmask 255.255.255.0 and no gateway.
I'm using netcat (in TCP mode) on my laptop to send commands to the board from the terminal, in this format:
echo '580112000000016C' | xxd -r -p | nc 192.168.1.4 3000
How do I know it works? Well, basically the relays from a secondary board are turned on successfully ("SainSmart 16-Channel 12V Relay Module").
There's a list of hex commands to turn each relay on and off. In the previous instruction, I'm telling the board to turn on relay number 1, leaving the other 15 off. The string '580112000000016C' is converted from hex into binary with xxd and then piped into netcat. This part works.
The only instruction that doesn't work is this one:
echo '580113000000006C' | xxd -r -p | nc 192.168.1.4 3000
This instruction only asks the board which relays are off and on at the moment, expecting a response in this format:
28 01 00 00 00 XX XX HH (XX XX is 16 bits, each bit representing one relay's state: "1" indicates ON, "0" indicates OFF; HH is the sum of all the previous bytes, i.e. it works as a checksum)
I've already tested and proved that this is NOT an issue with the board. I wrote some code in Visual Basic, and Windows was able to receive the response from the board, so something has to be wrong in my Ubuntu configuration.
I've already disabled my firewall, ufw.
This is NOT a problem with the Ethernet cable.
I've already tried other command representations such as:
echo -n '5801100000000069' | xxd -r -p | nc -v -n -w3 192.168.1.4 3000 | xxd
I've already used netcat to scan all available ports on the board, and only port 3000 is shown as available, as stated by the manufacturer.
This seems to be a network configuration problem, but in Windows I specified the same IP and netmask as in Ubuntu.
What am I missing here?
Netcat is waiting for an EOF character, which is never sent by the iMatic board. That explains why netcat never receives the response.
Instead, I wrote a Python script (Python 2.7.6) which successfully receives the data from the iMatic board after sending it a given instruction. Here it is:
import socket
import binascii
IPADDR = '192.168.1.4'
PORTNUM = 3000
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((IPADDR, PORTNUM))
data = '5801100000000069'.decode('hex')
s.send(data)
response= s.recv(8) #Buffer needs to be 8 for the fastest response without losing information
print binascii.hexlify(response)
s.close()
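If you also want to verify the checksum byte described in the question (assuming HH is simply the modulo-256 sum of the first seven bytes, which is my reading of the format rather than something I have confirmed), you could add before the close:
payload, checksum = response[:-1], response[-1]
checksum_ok = (sum(ord(c) for c in payload) & 0xFF) == ord(checksum)
print checksum_ok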
You can now use this board without a router, directly connected to any computer through an Ethernet cable.
Regards,
Bernext.

How can I check whether a subnet IPv4 address is unused while IPv4 is switched off?

I've found sources that suggest how I can find whether an IP address is available using ping, arping and nmap, but all of these solutions fail when IPv4 is switched off. I'd like to find a way of automatically detecting whether an IPv4 address is available before assigning it to a new machine that does not have an IPv4 address. For example,
$ sudo arping 192.168.2.205
ARPING 192.168.2.205
60 bytes from 00:50:56:91:a5:0d (192.168.2.205): index=0 time=730.931 msec
60 bytes from 00:50:56:91:a5:0d (192.168.2.205): index=1 time=362.976 msec
60 bytes from 00:50:56:91:a5:0d (192.168.2.205): index=2 time=730.205 msec
^C
--- 192.168.2.205 statistics ---
4 packets transmitted, 3 packets received, 25% unanswered (0 extra)
$ sudo ifconfig eth0 0.0.0.0
$ sudo arping 192.168.2.205
arping: Unable to automatically find interface to use. Is it on the local LAN?
arping: Use -i to manually specify interface. Guessing interface eth0.
arping: Unable to get the IPv4 address of interface eth0:
arping: libnet_get_ipaddr4(): ioctl(): Cannot assign requested address
arping: Use -S to specify address manually.
Is there a way to achieve this?
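Not a full answer, but the arping error output above hints at one thing worth trying: pass a source address explicitly with -S. Hosts that don't yet have an address conventionally send ARP probes with a sender IP of 0.0.0.0, so something along these lines might work (untested):
sudo arping -i eth0 -S 0.0.0.0 192.168.2.205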

Isolated test network on a Linux server running a web server (lighttpd) and (curl) clients

I'm writing a testing tool that requires known traffic to be captured from a NIC (using libpcap), then fed into the application we are trying to test.
What I'm attempting to set-up is a web server (in this case, lighttpd) and a client (curl) running on the same machine, on an isolated test network. A script will drive the entire setup, and the goal is to be able to specify a number of clients as well as a set of files for each client to download from the web server.
My initial approach was to simply use the loopback (lo) interface... run the web server on 127.0.0.1, have the clients fetch their files from http://127.0.0.1, and run my libpcap-based tool on the lo interface. This works ok, apart from the fact that the loopback interface doesn't emulate a real Ethernet interface. The main problem with that is that packets are all inconsistent sizes... 32kbytes and bigger, and somewhat random... it's also not possible to lower the MTU on lo (well, you can, but it has no effect!).
I also tried running it on my real interface (eth0), but since it's an internal web client talking to an internal web server, traffic never leaves the interface, so libpcap never sees it.
So then I turned to tun/tap. I used socat to bind two tun interfaces together with a TCP connection, so in effect, I had:
10.0.1.1/24 <-> tun0 <-socat-> tcp connection <-socat-> tun1 <-> 10.0.2.1/24
This seems like a really neat solution... tun/tap devices emulate real Ethernet devices, so I can run my web server on tun0 (10.0.1.1) and my capture tool on tun0, and bind my clients to tun1 (10.0.2.1)... I can even use tc to apply shaping rules to this traffic and create a virtual WAN inside my Linux box... but it just doesn't work...
Here are the socat commands I used:
$ socat -d TCP-LISTEN:11443,reuseaddr TUN:10.0.1.1/24,up &
$ socat TCP:127.0.0.1:11443 TUN:10.0.2.1/24,up &
Which produces 2 tun interfaces (tun0 and tun1), with their respective IP addresses.
If I run ping -I tun1 10.0.1.1, there is no response, but when I tcpdump -n -i tun0, I see the ICMP echo requests making it to the other side, just no sign of the response coming back.
# tcpdump -i tun0 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
16:49:16.772718 IP 10.0.2.1 > 10.0.1.1: ICMP echo request, id 4062, seq 5, length 64
<--- insert sound of crickets here (chirp, chirp)
So am I missing something obvious, or is this the wrong approach? Is there something else I can try (e.g. two physical interfaces, eth0 and eth1)?
The easiest way is just to use 2 machines, but I want all of this self-contained, so it can all be scripted and automated on a single machine, without any other dependencies...
UPDATE:
There is no need for the two socats to be connected by a TCP connection; it's possible (and preferable for me) to do this:
socat TUN:10.0.1.1/24,up TUN:10.0.2.1/24,up &
The same problem still exists though...
OK, so I found a solution using Linux network namespaces (netns). There is a helpful article about how to use it here: http://code.google.com/p/coreemu/wiki/Namespaces
This is what I did for my setup....
First, download and install CORE: http://cs.itd.nrl.navy.mil/work/core/index.php
Next, run this script:
#!/bin/sh
core-cleanup.sh > /dev/null 2>&1
ip link set vbridge down > /dev/null 2>&1
brctl delbr vbridge > /dev/null 2>&1
# create a server node namespace container - node 0
vnoded -c /tmp/n0.ctl -l /tmp/n0.log -p /tmp/n0.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 0
ip link add name veth0 type veth peer name n0.0
ip link set n0.0 netns `cat /tmp/n0.pid`
vcmd -c /tmp/n0.ctl -- ip link set n0.0 name eth0
vcmd -c /tmp/n0.ctl -- ifconfig eth0 10.0.0.1/24 up
# start web server on node 0
vcmd -I -c /tmp/n0.ctl -- lighttpd -f /etc/lighttpd/lighttpd.conf
# create client node namespace container - node 1
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name veth1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0
vcmd -c /tmp/n1.ctl -- ifconfig eth0 10.0.0.2/24 up
# bridge together nodes using the other end of each veth pair
brctl addbr vbridge
brctl setfd vbridge 0
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
This basically sets up 2 virtual/isolated/name-spaced networks on your Linux host, in this case, node 0 and node 1. A web server is started on node 0.
All you need to do now is run curl on node 1:
vcmd -c /tmp/n1.ctl -- curl --output /dev/null http://10.0.0.1
And monitor the traffic with tcpdump:
tcpdump -s 1514 -i veth0 -n
This seems to work quite well... still experimenting, but looks like it will solve my problem.
