I'm running tshark to capture packets, then piping the output into awk to process it a bit. When I run the tshark command without the awk, I see the output in the terminal as it happens. When I pipe it into awk (I'm using -l on tshark), it behaves differently. Here is the full command:
tshark -i wlan1 -l subtype probereq | awk -f ./blah.awk
I see the counter go up as packets come in, but nothing is printed. When it reaches 32, the first 32 lines suddenly appear, then nothing happens again. How can I get rid of this strange behaviour?
OK, so here is sample tshark output after a few seconds:
Running as user "root" and group "root". This could be dangerous.
Capturing on 'wlan1'
1 0.000000000 0c:08:b4:07:1c:41 → ff:ff:ff:ff:ff:ff 802.11 288 Probe Request, SN=100, FN=0, Flags=........, SSID=gigacube-E7CA
2 9.709432337 0c:08:b4:07:1c:41 → ff:ff:ff:ff:ff:ff 802.11 288 Probe Request, SN=117, FN=0, Flags=........, SSID=gigacube-E7CA
3 9.969377335 0c:08:b4:07:1c:41 → ff:ff:ff:ff:ff:ff 802.11 288 Probe Request, SN=119, FN=0, Flags=........, SSID=gigacube-E7CA
awk file:
BEGIN {
OFS = ",";
}
{
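# $3 is the source MAC; $13 is "SSID=...", so substr($13, 6) drops the "SSID=" prefix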
print $3, substr($13, 6);
}
The output of the awk script on the tshark output, if I first send the tshark output into a file:
0c:08:b4:07:1c:41,gigacube-E7CA
0c:08:b4:07:1c:41,gigacube-E7CA
0c:08:b4:07:1c:41,gigacube-E7CA
66:3c:76:d8:29:b2,Wildcard
66:3c:76:d8:29:b2,Wildcard
66:3c:76:d8:29:b2,Wildcard
66:3c:76:d8:29:b2,Wildcard
... as expected.
But tshark -i wlan1 -l -n subtype probereq | awk -f ./blah.awk only shows the counter for about a minute, until it reaches about 32, then prints about 30 lines at once. I'd like two things (see the sketch after this list):
no counter
continuous output, as it comes out of tshark
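One possibility, sketched here with the caveat that it may not cover everything: the running packet counter is tshark status output on stderr, so redirecting stderr hides it, and if part of the delay comes from awk buffering its own output, flushing after every record (fflush() exists in gawk and mawk) keeps lines moving. This does not change any buffering inside tshark/dumpcap itself.
tshark -i wlan1 -l subtype probereq 2>/dev/null | awk -f ./blah.awk
# blah.awk with an explicit flush per record
BEGIN { OFS = "," }
{ print $3, substr($13, 6); fflush() }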
Related
I have live raw JSON stream data from the virtual radar server I'm using.
I use netcat to fetch the data and jq to save it on my Kali Linux box, using the following command:
nc 127.0.0.1 30006 | jq > A7.json
But I want to filter specific content from the data stream.
I use the following command to extract the data:
cat A7.json | jq '.acList[] | select(.Call | contains("QTR"))' - To fetch Selected airline
But I realized later that the above command only works once; in other words, it does not refresh, even though the data is updating every second. I have to execute the command over and over again to extract the filtered data, which ends up generating duplicate data.
Can someone help me filter the live data without executing the command over and over?
As you don't use the --stream option, I suppose your document is a regular JSON document.
To execute your command every second, you can implement a loop that sleeps for 1 second:
while true; do sleep 1; nc 127.0.0.1 30006 | jq '.acList[] | select(…)'; done
To have the output on the screen and also save to a file (like you did with A7.json), you can add a call to tee:
# saves the document as returned by `nc` but outputs result of `jq`
while true; do sleep 1; nc 127.0.0.1 30006 | tee A7.json | jq '.acList[] | …'; done
# saves the result of `jq` and outputs it
while true; do sleep 1; nc 127.0.0.1 30006 | jq '.acList[] | …' | tee A7.json; done
Can you try this?
nc localhost 30006 | tee -a A7.json |
while true; do
stdbuf -o 0 jq 'try (.acList[] | select(.Call | contains("QTR")))' 2>/dev/null
done
Assuming that no other process is competing for the port, I'd suggest trying:
nc -k -l localhost 30006 | jq --unbuffered ....
Or if you want to keep a copy of the output of the netcat command:
nc -k -l localhost 30006 | tee A7.json | jq --unbuffered ....
You might want to use tee -a A7.json instead.
Breakdown
Why I did what I did?
I have live raw JSON stream data from the virtual radar server, which is running on my laptop alongside Kali Linux on WSL in the background.
For those who don't know, Virtual Radar Server is a Mode-S transmission decoder that decodes various ADS-B formats. It also rebroadcasts the data in a variety of formats, one of which is a JSON stream. I want to save selected aircraft data in JSON format on Kali Linux.
I used the following commands to save the data before
$ nc 127.0.0.1 30001 | jq > A7.json - To save the stream.
$ cat A7.json | jq '.acList[] | select(.Call | contains("QTR"))' - To fetch Selected airline
But I realized two things after using the above. One, I'm storing unwanted data, which is consuming my storage. Two, when I use the second command it only goes through the JSON file once and produces the data saved at that moment and that moment alone. This caused me problems, as I had to execute the command over and over again to extract the filtered data, which generated duplicate data.
The command that worked for me
The following command worked flawlessly for my problem.
$ nc localhost 30001 | sudo jq --unbuffered '.acList[] | select (.Icao | contains("800CB8"))' > A7.json
The following also caused me some trouble, which I explain below.
Errors & explanations
This error resulted from the field name & key being missing from the JSON object.
$ nc localhost 30001 | sudo jq --unbuffered '.acList[] | select (.Call | contains("IAD"))' > A7.json
#OUTPUT
jq: error (at <stdin>:0): null (null) and string ("IAD") cannot have their containment checked
If you look at the JSON data below, you'll see the missing field name & key that caused the error message above: the first object has a Call field, the next two don't. (A guarded filter that tolerates the missing field is sketched after these examples.)
{
"Icao": "800CB8",
"Alt": 3950,
"GAlt": 3794,
"InHg": 29.7637787,
"AltT": 0,
"Call": "IAD766",
"Lat": 17.608658,
"Long": 83.239166,
"Mlat": false,
"Tisb": false,
"Spd": 209,
"Trak": 88.9,
"TrkH": false,
"Sqk": "",
"Vsi": -1280,
"VsiT": 0,
"SpdTyp": 0,
"CallSus": false,
"Trt": 2
}
{
"Icao": "800CB8",
"Alt": 3950,
"GAlt": 3794,
"AltT": 0,
"Lat": 17.608658,
"Long": 83.239166,
"Mlat": false,
"Spd": 209,
"Trak": 88.9,
"Vsi": -1280
}
{
"Icao": "800CB8",
"Alt": 3800,
"GAlt": 3644,
"AltT": 0,
"Lat": 17.608795,
"Long": 83.246155,
"Mlat": false,
"Spd": 209,
"Trak": 89.2,
"Vsi": -1216
}
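One way to avoid that containment error, sketched here as a possibility rather than taken from the answers above, is to guard against objects that lack the Call field before testing it (port and callsign as used earlier):
nc localhost 30001 | sudo jq --unbuffered '.acList[] | select(.Call != null and (.Call | contains("IAD")))' > A7.json
The try (...) wrapper shown in one of the answers above achieves the same effect by swallowing the error.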
Commands that didn't work for me.
Command #1
When I used jq with --stream and the filter, it produced the output below. --stream with an output filter worked without any errors.
$ nc localhost 30001 | sudo jq --stream '.acList[] | select (.Icao | contains("800"))' > A7.json
#OUTPUT
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
jq: error (at <stdin>:0): Cannot index array with string "acList"
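For context: with --stream, jq emits [path, value] event pairs instead of whole documents, so a filter written for the original document shape, like .acList[], has nothing to index. For the sample data above the events would look roughly like this (illustrative only):
[["acList",0,"Icao"],"800CB8"]
[["acList",0,"Alt"],3950]
[["acList",0,"GAlt"],3794]
If streaming were genuinely needed (for a very large document), the events would have to be reassembled first with jq's fromstream/truncate_stream builtins; for data of this size, plain jq or jq --unbuffered is simpler.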
Command #2
For some reason -k -l didn't work to listen for the data, but the other command worked perfectly. I think it didn't work because the port was already in use outside of WSL.
$ nc -k -l localhost 30001
$ nc localhost 30001
Thank you to everyone who helped me solve my issue. I'm very grateful to you all.
So my command is:
tshark -Y 'wlan.fc.type_subtype==0x04'
So my output is:
21401 205.735966 Apple_90:ea:8e -> Broadcast 802.11 155 Probe Request, SN=3667, FN=0, Flags=........C, SSID=Broadcast
How can I get Apple_90:ea:8e + SSID=Broadcast, and what's the logic behind the grep? Is it possible with grep?
Bear in mind that Apple_90:ea:8e and Broadcast will always change!
$ var='21401 205.735966 Apple_90:ea:8e -> Broadcast 802.11 155 Probe Request, SN=3667, FN=0, Flags=........C, SSID=Broadcast'
$ grep -oP '\S+(?= ->)|SSID=\S+' <<< "$var"
Apple_90:ea:8e
SSID=Broadcast
The grep option -o says "only return what was matched, not the whole line" and -P is to use the Perl regex engine (because we use look-arounds). The regex is
\S+ # One or more non-spaces
(?= ->) # followed by " ->"
| # or...
SSID=\S+ # "SSID=" and one or more non-spaces
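If you would rather drop the SSID= prefix and keep only the value, a small variation using PCRE's \K (still relying on grep -P) could be:
$ grep -oP '\S+(?= ->)|SSID=\K\S+' <<< "$var"
Apple_90:ea:8e
Broadcast
\K discards everything matched before it from the reported match, so only the text after SSID= is printed.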
I have a text file like this:
313 "88.68.245.12"
189 "87.245.108.11"
173 "84.134.230.11"
171 "87.143.88.4"
158 "77.64.132.10"
....
I want to grep only the IP from the first 10 lines, run whois on each IP address, and from that output grep the line where it says netname.
How can I achieve this?
Just loop through the file with while read:
while IFS='"' read -r a ip c
do
echo "ip: $ip"
whois "$ip" | grep netname
done < <(head -10 file)
We set IFS='"' so that the field separator is a double quote ("). This way, the value between the double quotes is stored in $ip.
Then, we print the ip and perform the whois | grep thing.
Finally, we feed the loop with head -10 file, so that we just read the first 10 lines.
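An equivalent sketch without the separate loop variables, in case that reads more naturally (the -i on grep is only there because some registries print NetName with different capitalisation):
awk -F'"' 'NR<=10 {print $2}' file | while read -r ip
do
echo "ip: $ip"
whois "$ip" | grep -i netname
done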
I want to ping a bunch of locations, but not at the same time, so they don't time out.
The input is for example: ping google.com -n 10 | grep Minimum >> output.txt
This produces output like: Minimum = 29ms, Maximum = 46ms, Average = 33ms
But there are extra spaces in front of it that I don't know how to cut off, and when it is appended to the txt file it doesn't start on a new line. What I'm trying to do is copy and paste the input, ping a bunch of places one after another (each starting once the previous finishes), log the results in a .txt file, and number them so it looks like:
Server 1: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Server 2: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Server 3: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Server 4: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Well, first of all, ping on Linux limits the number of packets to send with -c, not -n.
Secondly, the output of ping is not Minimum = xx ms, Maximum = yy ms, Average = zz ms, but rtt min/avg/max/mdev = 5.953/5.970/5.987/0.017 ms.
So basically, if you do something along the lines of:
for server in google.com yahoo.com
do
rtt=`ping $server -c 2 | grep rtt`
echo "$server: $rtt" >> output.txt
done
You should achieve what you want.
[edit]
If Cygwin is your platform, the easiest way to strip the spaces would be either sed, as others are suggesting, or simply | awk '{$1=$1; print}', which will trim the leading spaces as well.
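If you also want the Server 1:, Server 2:, ... numbering from the question, a small extension of the loop above (same assumptions: Linux ping, hosts listed inline) might look like:
n=1
for server in google.com yahoo.com
do
rtt=$(ping -c 2 "$server" | grep rtt)
echo "Server $n: $rtt" >> output.txt
n=$((n+1))
done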
I think you might be able to solve this using sed two times and a while loop at the end:
N=1; ping google.com -n 10 | grep Minimum | sed -r 's/(Average = [[:digit:]]+ms)/\1\n/g' | sed -r s'/[[:space:]]+(Minimum)/\1/g' | while read file; do echo Server "$N": "$file"; N=$((N+1)); done >> output.txt
The steps:
The first sed fixes the newline issue:
Match the final part of the string after which you want a new line, in this case Average = [[:digit:]]+ms and put it into a group using the parenthesis
Then replace it with the same group (\1) and insert a newline character (\n) after it
The second sed removes the leading whitespace by matching the word Minimum and all whitespace in front of it, then replacing that with just the word Minimum.
The final while statement loops over each line and adds Server "$N": in front of the ping results. The $N is initialized to 1 at the start and increased by 1 after each line read.
You can use sed to remove the first 4 spaces:
ping google.com -n 10 | grep Minimum | sed s/^\ \ \ \ //
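If the amount of leading whitespace varies, a variant that strips any run of leading whitespace rather than exactly four spaces would be:
ping google.com -n 10 | grep Minimum | sed 's/^[[:space:]]*//'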
When pinging a host I want my output just to show the percentage of packets (5 sent) received. I assume I need to use grep somehow but I can't figure out how (I'm new to bash programming). Here is where I am: ping -c 5 -q $host | grep ?. What should go in grep? I think I will have to do some arithmetic to get the percent received but I can deal with that. How can I pull out the info I need from the summary that ping will output?
So far we've got an answer using grep, sed, perl, bc, and bash. Here is one in the flavor of AWK, "an interpreted programming language designed for text processing". This approach is designed for watching/capturing real-time packet loss information using ping.
To see only packet loss information:
Command
$ ping google.com | awk '{ sent=NR-1; received+=/^.*(time=.+ ms).*$/; loss=0; } { if (sent>0) loss=100-((received/sent)*100) } { printf "sent:%d received:%d loss:%d%%\n", sent, received, loss }'
Output
sent:0 received:0 loss:0%
sent:1 received:1 loss:0%
sent:2 received:2 loss:0%
sent:3 received:2 loss:33%
sent:4 received:2 loss:50%
sent:5 received:3 loss:40%
^C
However, I find it useful to see the original input as well. For this you just add print $0; to the last block in the script:
Command
$ ping google.com | awk '{ sent=NR-1; received+=/^.*(time=.+ ms).*$/; loss=0; } { if (sent>0) loss=100-((received/sent)*100) } { print $0; printf "sent:%d received:%d loss:%d%%\n", sent, received, loss; }'
Output
PING google.com (173.194.33.104): 56 data bytes
sent:0 received:0 loss:0%
64 bytes from 173.194.33.46: icmp_seq=0 ttl=55 time=18.314 ms
sent:1 received:1 loss:0%
64 bytes from 173.194.33.46: icmp_seq=1 ttl=55 time=31.477 ms
sent:2 received:2 loss:0%
Request timeout for icmp_seq 2
sent:3 received:2 loss:33%
Request timeout for icmp_seq 3
sent:4 received:2 loss:50%
64 bytes from 173.194.33.46: icmp_seq=4 ttl=55 time=20.397 ms
sent:5 received:3 loss:40%
^C
How does this all work?
You read the command, tried it, and it works! So what exactly is happening?
$ ping google.com | awk '...'
We start by pinging google.com and piping the output into awk, the interpreter. Everything in single quotes defines the logic of our script.
Here it is in a whitespace friendly format:
# Gather Data
{
sent=NR-1;
received+=/^.*(time=.+ ms).*$/;
loss=0;
}
# Calculate Loss
{
if (sent>0) loss=100-((received/sent)*100)
}
# Output
{
print $0; # remove this line if you don't want the original input displayed
printf "sent:%d received:%d loss:%d%%\n", sent, received, loss;
}
We can break it down into three components:
{ gather data } { calculate loss } { output }
Each time ping outputs information, the AWK script will consume it and run this logic against it.
Gather Data
{ sent=NR-1; received+=/^.*(time=.+ ms).*$/; loss=0; }
This one has three actions; defining the sent, received, and loss variables.
sent=NR-1;
NR is an AWK variable for the current number of records. In AWK, a record corresponds to a line. In our case, a single line of output from ping. The first line of output from ping is a header and doesn't represent an actual ICMP request. So we create a variable, sent, and assign it the current line number minus one.
received+=/^.*(time=.+ ms).*$/;
Here we use a Regular Expression, ^.*(time=.+ ms).*$, to determine whether the ICMP request was successful. Since every successful ping returns the length of time it took, we use that as our key.
For those that aren't great with regex patterns, this is what ours means:
^ starting at the beginning of the line
.* match anything until the next rule
(time=.+ ms) match "time=N ms", where N can be one or more of any character
.* match anything until the next rule
$ stop at the end of the line
When the pattern is matched, we increment the received variable.
Calculate Loss
{ if (sent>0) loss=100-((received/sent)*100) }
Now that we know how many ICMP requests were sent and received we can start doing the math to determine packet loss. To avoid a divide by zero error, we make sure a request has been sent before doing any calculations. The calculation itself is pretty simple:
received/sent = percentage of success in decimal format
*100 = convert from decimal to integer format
100- = invert the percentage from success to failure
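For example, taking the last line of the sample output above: sent=5 and received=3, so loss = 100 - ((3/5) * 100) = 100 - 60 = 40%.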
Output
{ print $0; printf "sent:%d received:%d loss:%d%%\n", sent, received, loss; }
Finally we just need to print the relevant info.
I don't want to remember all of this
Instead of typing that out every time, or hunting down this answer, you can save the script to a file (e.g. packet_loss.awk). Then all you need to type is:
$ ping google.com | awk -f packet_loss.awk
As always, there are many different ways to do this, but here's one option:
This expression will capture the percent digits from "X% packet loss"
ping -c 5 -q $host | grep -oP '\d+(?=% packet loss)'
You can then subtract the "loss" percentage from 100 to get the "success" percentage:
packet_loss=$(ping -c 5 -q $host | grep -oP '\d+(?=% packet loss)')
echo $[100 - $packet_loss]
Assuming your ping results look like:
PING host.example (192.168.0.10) 56(84) bytes of data.
--- host.example ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.209/0.217/0.231/0.018 ms
Piping your ping -c 5 -q through:
grep -E -o '[0-9]+ received' | cut -f1 -d' '
Yields:
5
And then you can perform your arithmetic.
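Putting it together, a sketch of that arithmetic (assuming 5 packets were sent, as in the question):
received=$(ping -c 5 -q "$host" | grep -E -o '[0-9]+ received' | cut -f1 -d' ')
echo "$(( received * 100 / 5 ))% of packets received"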
echo $((100-$(ping -c 5 -q www.google.hu | sed -rn "/packet loss/ s#.* ([0-9]+)% packet loss.*#\1#p")))
Try a script:
#!/bin/bash
# usage: ./script <number-of-packets> <host>
rec=$(ping -c "$1" "$2" | grep -c "bytes from" | sed "s_\$_ * 100 / $1_" | xargs expr)
echo "$rec"
Save it, and run it with two command line args. The first is number of packets, the second is the host.
Does this work for you?
bc -l <<<100-$(ping -c 5 -q $host |
grep -o '[0-9]*% packet loss' |
cut -f1 -d% )
It takes the percentage reported by ping and subtracts it from 100 to get the percentage of received packets.