Tshark custom grep - linux

So my command is:
tshark -Y 'wlan.fc.type_subtype==0x04'
So my output is:
21401 205.735966 Apple_90:ea:8e -> Broadcast 802.11 155 Probe Request, SN=3667, FN=0, Flags=........C, SSID=Broadcast
How can I get Apple_90:ea:8e + SSID=Broadcast, and what's the logic behind the grep? Is it possible with grep?
Bear in mind that Apple_90:ea:8e and Broadcast will always change!

$ var='21401 205.735966 Apple_90:ea:8e -> Broadcast 802.11 155 Probe Request, SN=3667, FN=0, Flags=........C, SSID=Broadcast'
$ grep -oP '\S+(?= ->)|SSID=\S+' <<< "$var"
Apple_90:ea:8e
SSID=Broadcast
The grep option -o says "only return what was matched, not the whole line" and -P is to use the Perl regex engine (because we use look-arounds). The regex is
\S+ # One or more non-spaces
(?= ->) # followed by " ->"
| # or...
SSID=\S+ # "SSID=" and one or more non-spaces


how to loop through string for patterns from linux shell?

I have a script that looks through files in a directory for strings like :tagName: which works fine for single :tag: but not for multiple :tagOne:tagTwo:tagThree: tags.
My current script does:
grep -rh -e '^:\S*:$' ~/Documents/wiki/*.mkd ~/Documents/wiki/diary/*.mkd | \
sed -r 's|.*(:[Aa-Zz]*:)|\1|g' | \
sort -u
printf '\nNote: this fails to display combined :tagOne:tagTwo:etcTag:\n'
The first line is generating an output like this:
:politics:violence:
:positivity:
:positivity:somewhat:
:psychology:
:socialServices:family:
:strategy:
:tech:
:therapy:babylon:
:trauma:
:triggered:
:truama:leadership:business:toxicity:
:unfurling:
:tagOne:tagTwo:etcTag:
And the objective is to get that into a list of single :tag:'s.
Again, the problem is that if a line has multiple tags, the line does not appear in the output at all (as opposed to the problem merely being that only the first tag of the line gets displayed). Obviously the | sed... | there is problematic.
I want :tagOne:tagTwo:etcTag: to be turned into:
:tagOne:
:tagTwo:
:etcTag:
and so forth with :politics:violence: etc.
Colons aren't necessary; tagOne is just as good as (maybe better than, but this is trivial) :tagOne:.
So I should replace the sed with something better...
I've tried:
A smarter sed:
grep -rh -e '^:\S*:$' ~/Documents/wiki/*.mkd ~/Documents/wiki/diary/*.mkd | \
sed -r 's|(:[Aa-Zz]*:)([Aa-Zz]*:)|\1\r:\2|g' | \
sed -r 's|(:[Aa-Zz]*:)([Aa-Zz]*:)|\1\r:\2|g' | \
sed -r 's|(:[Aa-Zz]*:)([Aa-Zz]*:)|\1\r:\2|g' | \
sort -u
...which works (for a limited number of tags) except that it produces weird results like:
:toxicity:p:
:somewhat:y:
:people:n:
...placing stray letters at the end of some tags: the :p: here is the final character of the :leadership: tag, and "leadership" no longer appears in the list. The same goes for :y: and :n:.
I've also tried using loops in a couple ways...
grep -rh -e '^:\S*:$' ~/Documents/wiki/*.mkd ~/Documents/wiki/diary/*.mkd | \
sed -r 's|(:[Aa-Zz]*:)([Aa-Zz]*:)|\1\r:\2|g' | \
sed -r 's|(:[Aa-Zz]*:)([Aa-Zz]*:)|\1\r:\2|g' | \
sed -r 's|(:[Aa-Zz]*:)([Aa-Zz]*:)|\1\r:\2|g' | \
sort -u | grep lead
...which has the same problem of :leadership: tags being lost etc.
And like...
for m in $(grep -rh -e '^:\S*:$' ~/Documents/wiki/*.mkd ~/Documents/wiki/diary/*.mkd); do
for t in $(echo $m | grep -e ':[Aa-Zz]*:'); do
printf "$t\n";
done
done | sort -u
...which doesn't separate the tags at all, just prints stuff like:
:truama:leadership:business:toxicity
Should I be taking some other approach? Using a different utility (perhaps cut inside a loop)? Maybe doing this in python (I have a few python scripts but don't know the language well, but maybe this would be easy to do that way)? Every time I see awk I think "EEK!" so I'd prefer a non-awk solution please, preferring to stick to paradigms I've used in order to learn them better.
Using PCRE in grep (where available) and positive lookbehind:
$ echo :tagOne:tagTwo:tagThree: | grep -Po "(?<=:)[^:]+:"
tagOne:
tagTwo:
tagThree:
You will lose the leading : but get the tags nevertheless.
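If you want the leading colon back, one simple follow-up (my addition, not part of the original answer) is to prepend it after the fact:
$ echo :tagOne:tagTwo:tagThree: | grep -Po "(?<=:)[^:]+:" | sed 's/^/:/'
:tagOne:
:tagTwo:
:tagThree: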
Edit: Did someone mention awk?:
$ awk '{
    while (match($0, /:[^:]+:/)) {        # find the next :tag: in the line
        a[substr($0, RSTART, RLENGTH)]    # store the tag as an array index (deduplicates)
        $0 = substr($0, RSTART + 1)       # resume just past the tag's leading colon
    }
}
END {
    for (i in a)
        print i
}' file
Another idea using awk ...
Sample data generated by the OP's initial grep:
$ cat tags.raw
:politics:violence:
:positivity:
:positivity:somewhat:
:psychology:
:socialServices:family:
:strategy:
:tech:
:therapy:babylon:
:trauma:
:triggered:
:truama:leadership:business:toxicity:
:unfurling:
:tagOne:tagTwo:etcTag:
One awk idea:
awk '
{ split($0,tmp,":") # split input on colon;
# NOTE: fields #1 and #NF are the empty string - see END block
for ( x in tmp ) # loop through tmp[] indices
{ arr[tmp[x]] } # store tmp[] values as arr[] indices; this eliminates duplicates
}
END { delete arr[""] # remove the empty string from arr[]
for ( i in arr ) # loop through arr[] indices
{ printf ":%s:\n", i } # print each tag on a separate line with leading/trailing colons
}
' tags.raw | sort # sort final output
NOTE: I'm not up to speed on awk's ability to internally sort arrays (thus eliminating the external sort call), so I'm open to suggestions; one gawk-based option is sketched after the output below.
The above also generates:
:babylon:
:business:
:etcTag:
:family:
:leadership:
:politics:
:positivity:
:psychology:
:socialServices:
:somewhat:
:strategy:
:tagOne:
:tagTwo:
:tech:
:therapy:
:toxicity:
:trauma:
:triggered:
:truama:
:unfurling:
:violence:
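On that NOTE: if GNU awk 4.0+ is available (an assumption; POSIX awk has no equivalent), gawk can order for (i in arr) traversal itself via PROCINFO["sorted_in"], eliminating the external sort:
awk '
BEGIN { PROCINFO["sorted_in"] = "@ind_str_asc" }   # gawk-only: traverse arr[] in sorted index order
      { split($0,tmp,":")                          # split input on colon, as above
        for (x in tmp) arr[tmp[x]] }               # store values as indices (deduplicates)
END   { delete arr[""]                             # drop the empty fields from leading/trailing colons
        for (i in arr) printf ":%s:\n", i }        # already sorted thanks to sorted_in
' tags.raw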
A pipe through tr can split those strings out to separate lines:
grep -hx -- ':[:[:alnum:]]*:' ~/Documents/wiki{,/diary}/*.mkd | tr -s ':' '\n'
This also removes the colons, and empty lines will be present in the output (easy to repair; after sorting, the single remaining empty line will always be the first one, due to the leading :). Add sort -u to sort and remove duplicates, or awk '!seen[$0]++' to remove duplicates without sorting. A repaired version is sketched below.
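A sketch of the repair (my addition): since sort -u collapses the empty lines into a single one that sorts first, sed '1d' can drop it afterwards.
grep -hx -- ':[:[:alnum:]]*:' ~/Documents/wiki{,/diary}/*.mkd | tr -s ':' '\n' | sort -u | sed '1d'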
An approach with sed:
sed '/^:/!d;s///;/:$/!d;s///;y/:/\n/' ~/Documents/wiki{,/diary}/*.mkd
This also removes colons, but avoids adding empty lines (by removing the leading/trailing : with s before using y to transliterate remaining : to <newline>). sed could be combined with tr:
sed '/:$/!d;/^:/!d;s///' ~/Documents/wiki{,/diary}/*.mkd | tr -s ':' '\n'
Using awk to work with the : separated fields, removing duplicates:
awk -F: '/^:/ && /:$/ {for (i=2; i<NF; ++i) if (!seen[$i]++) print $i}' \
~/Documents/wiki{,/diary}/*.mkd
One while/for/printf idea based on associative arrays (using the same tags.raw sample data shown above):
unset arr
typeset -A arr # declare array named 'arr' as associative
while read -r line # for each line from tags.raw ...
do
for word in ${line//:/ } # replace ":" with space and process each 'word' separately
do
arr[${word}]=1 # create/overwrite arr[$word] with value 1;
# objective is to make sure we have a single entry in arr[] for $word;
# this eliminates duplicates
done
done < tags.raw
printf ":%s:\n" "${!arr[#]}" | sort # pass array indices (ie, our unique list of words) to printf;
# per OPs desired output we'll bracket each word with a pair of ':';
# then sort
Per the OP's comment/question about removing the array, here is a twist on the above that eliminates the array in favor of printing from the inner loop and then piping everything to sort -u:
while read -r line # for each line from tags.raw ...
do
for word in ${line//:/ } # replace ":" with space and process each 'word' separately
do
printf ":%s:\n" "${word}" # print ${word} to stdout
done
done < tags.raw | sort -u # pipe all output (ie, the list of ${word}s) to sort for sorting and removing dups
Both of the above generate the same sorted list of tags shown for the first awk idea above.

Count total number of a pattern between two patterns (using sed if possible) in Linux

I have to count all '=' between two patterns, i.e. between '{' and '}'.
Sample:
{
100="1";
101="2";
102="3";
};
{
104="1,2,3";
};
{
105="1,2,3";
};
Expected Output:
3
1
1
A very cryptic perl answer:
perl -nE 's/\{(.*?)\}/ say ($1 =~ tr{=}{=}) /ge'
The tr function returns the number of characters transliterated.
With the new requirements, we can make a couple of small changes:
perl -0777 -nE 's/\{(.*?)\}/ say ($1 =~ tr{=}{=}) /ges'
-0777 reads the entire file/stream into a single string
the s flag to the s/// operator allows . to match newlines like any other character.
Perl to the rescue:
perl -lne '$c = 0; $c += ("$1" =~ tr/=//) while /\{(.*?)\}/g; print $c' < input
-n reads the input line by line
-l adds a newline to each print
/\{(.*?)\}/g is a regular expression. The ? makes the asterisk frugal, i.e. matching the shortest possible string.
The (...) parentheses create a capture group, referred to as $1.
tr is normally used to transliterate (i.e. replace one character by another), but here it just counts the number of equal signs (see the quick demo after this list).
+= adds the number to $c.
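A quick demo of tr as a counter (my own illustration, not from the answer):
$ perl -E '$s = "a=b=c"; say $s =~ tr/=//'
2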
Awk is here too
grep -o '{[^}]\+}'|awk -v FS='=' '{print NF-1}'
example
echo '{100="1";101="2";102="3";};
{104="1,2,3";};
{105="1,2,3";};'|grep -o '{[^}]\+}'|awk -v FS='=' '{print NF-1}'
output
3
1
1
First, some test input (a line with = signs outside the curly brackets as well as inside the content, one line without brackets, and one with only the two braces):
echo '== {100="1";101="2";102="3=3=3=3";} =;
a=b
{c=d}
{}'
Handle line without brackets (put a dummy char so you will not end up with an empty string)
sed -e 's/^[^{]*$/x/'
Handle line without equal sign (put a dummy char so you will not end up with an empty string)
sed -e 's/{[^=]*}/x/'
Remove stuff outside the brackets
sed -e 's/.*{\(.*\)}/\1/'
Remove stuff inside the double quotes (do not count fields there)
sed -e 's/"[^"]*"//g'
Use @repzero's method to count equal signs
awk -F "=" '{print NF-1}'
Combine stuff
echo -e '{100="1";101="2";102="3";};\na=b\n{c=d}\n{}' |
sed -e 's/^[^{]*$/x/' -e 's/{[^=]*}/x/' -e 's/.*{\(.*\)}/\1/' -e 's/"[^"]*"//g' |
awk -F "=" '{print NF-1}'
The ugly temp fields x and replacing {} can be solved inside awk:
echo -e '= {100="1";101="2=2=2=2";102="3";};\na=b\n{c=d}\n{}' |
sed -e 's/^[^{]*$//' -e 's/.*{\(.*\)}/\1/' -e 's/"[^"]*"//g' |
awk -F "=" '{if (NF>0) c=NF-1; else c=0; print c}'
or shorter
echo -e '= {100="1";101="2=2=2=2";102="3";};\na=b\n{c=d}\n{}' |
sed -e 's/^[^{]*$//' -e 's/.*{\(.*\)}/\1/' -e 's/"[^"]*"//g' |
awk -F "=" '{print (NF>0) ? NF-1 : 0; }'
No harder sed than done ... in.
Restricting this answer to the environment as tagged, namely:
linux shell unix sed wc
will actually not require the use of wc (or awk, perl, or any other app.).
Though echo is used, a file source can easily exclude its use.
As for bash, it is the shell.
The actual environment used is documented at the end.
NB. Exploitation of GNU specific extensions has been used for brevity
but appropriately annotated to make a more generic implementation.
Also brace bracketed { text } will not include braces in the text.
It is implicit that such braces should be present as {} pairs but
the text src. dangling brace does not directly violate this tenet.
This is a foray into the world of `sed`'ng to gain some fluency in its use for other purposes.
The ideas expounded upon here are used to cross-pollinate another SO problem solution in order
to acquire more familiarity with vetting vagaries of vernacular version variances. Consequently
this pedantic exercise hopefully helps with the pedagogy of others beyond personal edification.
To test easily, at least in the environment noted below, judiciously highlight the appropriate
code section, carefully excluding a dangling pipe |, and then, to a CLI command line interface
drag & drop, copy & paste or use middle click to enter the code.
The other SO problem: linux - Is it possible to do simple arithmetic in sed addresses?
# _______________________________ always needed ________________________________
echo -e '\n
\n = = = {\n } = = = each = is outside the braces
\na\nb\n { } so therefore are not counted
\nc\n { = = = = = = = } while the ones here do count
{\n100="1";\n101="2";\n102="3";\n};
\n {\n104="1,2,3";\n};
a\nb\nc\n {\n105="1,2,3";\n};
{ dangling brace ignored junk = = = \n' |
# _____________ prepatory conditioning needed for final solutions _____________
sed ' s/{/\n{\n/g;
s/}/\n}\n/g; ' | # guarantee but one brace to a line
sed -n '/{/ h; # so sed addressing can "work" here
/{/,/}/ H; # use hHold buffer for only { ... }
/}/ { x; s/[^=]*//g; p } ' | # then make each {} set a line of =
# ____ stop code hi-lite selection in ^--^ here include quote not pipe ____
# ____ outputs the following exclusive of the shell " # " comment quotes _____
#
#
# =======
# ===
# =
# =
# _________________________________________________________________________
# ____________________________ "simple" GNU solution ____________________________
sed -e '/^$/ { s//0/;b }; # handle null data as 0 case: next!
s/=/\n/g; # to easily count an = make it a nl
s/\n$//g; # echo adds an extra nl - delete it
s/.*/echo "&" | sed -n $=/; # sed = command w/ $ counts last nl
e ' # who knew only GNU say you ah phoo
# 0
# 0
# 7
# 3
# 1
# 1
# _________________________________________________________________________
# ________________________ generic incomplete "solution" ________________________
sed -e '/^$/ { s//echo 0/;b }; # handle null data as 0 case: next!
s/=$//g; # echo adds an extra nl - delete it
s/=/\\\\n/g; # to easily count an = make it a nl
s/.*/echo -e & | sed -n $=/; '
# _______________________________________________________________________________
The paradigm used for the algorithm is instigated by the prolegomena study below.
The idea is to isolate groups of = signs between { } braces for counting.
These are found and each group is put on a separate line with ALL other adorning characters removed.
It is noted that sed can easily "count", actually enumerate, nl or \n line ends via =.
The first "solution" uses these sed commands:
print
branch w/o label starts a new cycle
h/Hold for filling this sed buffer
exchange to swap the hold and pattern buffers
= to enumerate the current sed input line
substitute s/.../.../; with global flag s/.../.../g;
and most particularly the GNU specific
evaluate (or execute; I cannot remember the actual mnemonic, but they are effectively synonymous)
The GNU specific execute command is avoided in the generic code. It does not print the answer but
instead produces code that will print the answer. Run it to observe. To fully automate this, many
mechanisms can be used, not the least of which is the sed write command to put these lines in a
shell file to be executed, or even embedding the output in shell command substitution $( ), etc.
Note also that various sed example scripts can "count" and these too can be used efficaciously.
The interested reader can entertain these other pursuits.
prolegomena:
concept from counting # of lines between braces
sed -n '/{/=;/}/=;'
to
sed -n '/}/=;/{/=;' |
sed -n 'h;n;G;s/\n/ - /;
2s/^/ Between sets of {} \n the nl # count is\n /;
2!s/^/ /;
p'
testing "done in":
linuxuser@ubuntu:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
linuxuser@ubuntu:~$ sed --version -----> sed (GNU sed) 4.4
And for giggles an awk-only alternative:
echo '{
> 100="1";
> 101="2";
> 102="3";
> };
> {
> 104="1,2,3";
> };
> {
> 105="1,2,3";
> };' | awk 'BEGIN{RS="\n};";FS="\n"}{c=gsub(/=/,""); if(NF>2){print c}}'
3
1
1
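For reference, my reading of that one-liner: setting RS="\n};" makes each {...} block (up to its closing };) a single record; gsub(/=/,"") returns the number of = signs it replaced; and the NF>2 guard skips records that contain no assignment lines, such as the leftover text after the final };.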

ping script and log output and cut with grep

I want to ping a bunch of locations, but not at the same time, so they don't time out.
The input is for example: ping google.com -n 10 | grep Minimum >> output.txt
This will make the output of: Minimum = 29ms, Maximum = 46ms, Average = 33ms
But there are extra spaces in front of it which I don't know how to cut off, and when it is appended to the txt file it doesn't start on a new line. What I am trying to do is ping a bunch of places one after another (each starting once the previous finishes), log the results in a .txt file, and number them so it would look like:
Server 1: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Server 2: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Server 3: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Server 4: Minimum = 29ms, Maximum = 46ms, Average = 33ms
Well, first of all, ping on Linux limits the number of packets to send with -c, not -n.
Secondly, the output of ping is not Minimum = xx ms, Maximum = yy ms, Average = zz ms, but rtt min/avg/max/mdev = 5.953/5.970/5.987/0.017 ms
So basically if you do something in lines of:
for server in google.com yahoo.com
do
rtt=`ping $server -c 2 | grep rtt`
echo "$server: $rtt" >> output.txt
done
You should achieve what you want.
[edit]
If Cygwin is your platform, the easiest way to strip the spaces would be either sed, as people are suggesting, or simply | awk '{print $1}', which will trim the line as well.
I think you might be able to solve this using sed two times and a while loop at the end:
N=1; ping google.com -n 10 | grep Minimum | sed -r 's/(Average = [[:digit:]]+ms)/\1\n/g' | sed -r s'/[[:space:]]+(Minimum)/\1/g' | while read file; do echo Server "$N": "$file"; N=$((N+1)); done >> output.txt
The steps:
The first sed fixes the newline issue:
Match the final part of the string after which you want a new line, in this case Average = [[:digit:]]+ms and put it into a group using the parenthesis
Then replace it with the same group (\1) and insert a newline character (\n) after it
The second sed removes the leading whitespace by matching the word Minimum and all whitespace in front of it, replacing the match with just the word Minimum
The final while statement loops over each line and adds Server "$N": in front of the ping results. The $N was initialized to 1 at the start, and is increased with 1 after each read line
You can use sed to remove the first four spaces:
ping google.com -n 10 | grep Minimum | sed s/^\ \ \ \ //
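A slightly more general variant (my addition) strips any run of leading whitespace rather than exactly four spaces:
ping google.com -n 10 | grep Minimum | sed 's/^[[:space:]]*//'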

How to get the percent of packets received from Ping in bash?

When pinging a host I want my output just to show the percentage of packets (5 sent) received. I assume I need to use grep somehow but I can't figure out how (I'm new to bash programming). Here is where I am: ping -c 5 -q $host | grep ?. What should go in grep? I think I will have to do some arithmetic to get the percent received but I can deal with that. How can I pull out the info I need from the summary that ping will output?
So far we've got answers using grep, sed, perl, bc, and bash. Here is one in the flavor of AWK, "an interpreted programming language designed for text processing". This approach is designed for watching/capturing real-time packet loss information using ping.
To see only packet loss information:
Command
$ ping google.com | awk '{ sent=NR-1; received+=/^.*(time=.+ ms).*$/; loss=0; } { if (sent>0) loss=100-((received/sent)*100) } { printf "sent:%d received:%d loss:%d%%\n", sent, received, loss }'
Output
sent:0 received:0 loss:0%
sent:1 received:1 loss:0%
sent:2 received:2 loss:0%
sent:3 received:2 loss:33%
sent:4 received:2 loss:50%
sent:5 received:3 loss:40%
^C
However, I find it useful to see the original input as well. For this you just add print $0; to the last block in the script:
Command
$ ping google.com | awk '{ sent=NR-1; received+=/^.*(time=.+ ms).*$/; loss=0; } { if (sent>0) loss=100-((received/sent)*100) } { print $0; printf "sent:%d received:%d loss:%d%%\n", sent, received, loss; }'
Output
PING google.com (173.194.33.104): 56 data bytes
sent:0 received:0 loss:0%
64 bytes from 173.194.33.46: icmp_seq=0 ttl=55 time=18.314 ms
sent:1 received:1 loss:0%
64 bytes from 173.194.33.46: icmp_seq=1 ttl=55 time=31.477 ms
sent:2 received:2 loss:0%
Request timeout for icmp_seq 2
sent:3 received:2 loss:33%
Request timeout for icmp_seq 3
sent:4 received:2 loss:50%
64 bytes from 173.194.33.46: icmp_seq=4 ttl=55 time=20.397 ms
sent:5 received:3 loss:40%
^C
How does this all work?
You read the command, tried it, and it works! So what exactly is happening?
$ ping google.com | awk '...'
We start by pinging google.com and piping the output into awk, the interpreter. Everything in single quotes defines the logic of our script.
Here it is in a whitespace friendly format:
# Gather Data
{
sent=NR-1;
received+=/^.*(time=.+ ms).*$/;
loss=0;
}
# Calculate Loss
{
if (sent>0) loss=100-((received/sent)*100)
}
# Output
{
print $0; # remove this line if you don't want the original input displayed
printf "sent:%d received:%d loss:%d%%\n", sent, received, loss;
}
We can break it down into three components:
{ gather data } { calculate loss } { output }
Each time ping outputs information, the AWK script will consume it and run this logic against it.
Gather Data
{ sent=NR-1; received+=/^.*(time=.+ ms).*$/; loss=0; }
This one has three actions; defining the sent, received, and loss variables.
sent=NR-1;
NR is an AWK variable for the current number of records. In AWK, a record corresponds to a line. In our case, a single line of output from ping. The first line of output from ping is a header and doesn't represent an actual ICMP request. So we create a variable, sent, and assign it the current line number minus one.
received+=/^.*(time=.+ ms).*$/;
Here we use a Regular Expression, ^.*(time=.+ ms).*$, to determine if the ICMP request was successful or not. Since every successful ping returns the length of time it took, we use that as our key.
For those that aren't great with regex patterns, this is what ours means:
^ starting at the beginning of the line
.* match anything until the next rule
(time=.+ ms) match "time=N ms", where N can be one or more of any character
.* match anything until the next rule
$ stop at the end of the line
When the pattern is matched, we increment the received variable.
Calculate Loss
{ if (sent>0) loss=100-((received/sent)*100) }
Now that we know how many ICMP requests were sent and received we can start doing the math to determine packet loss. To avoid a divide by zero error, we make sure a request has been sent before doing any calculations. The calculation itself is pretty simple:
received/sent = percentage of success in decimal format
*100 = convert from decimal to integer format
100- = invert the percentage from success to failure
Output
{ print $0; printf "sent:%d received:%d loss:%d%%\n", sent, received, loss; }
Finally we just need to print the relevant info.
I don't want to remember all of this
Instead of typing that out every time, or hunting down this answer, you can save the script to a file (e.g. packet_loss.awk). Then all you need to type is:
$ ping google.com | awk -f packet_loss.awk
As always, there are many different ways to do this, but here's one option:
This expression will capture the percent digits from "X% packet loss"
ping -c 5 -q $host | grep -oP '\d+(?=% packet loss)'
You can then subtract the "loss" percentage from 100 to get the "success" percentage:
packet_loss=$(ping -c 5 -q $host | grep -oP '\d+(?=% packet loss)')
echo $[100 - $packet_loss]
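A side note (mine): the $[ ] arithmetic form is deprecated in bash; $(( )) is the modern equivalent:
echo $(( 100 - packet_loss ))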
Assuming your ping results look like:
PING host.example (192.168.0.10) 56(84) bytes of data.
--- host.example ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.209/0.217/0.231/0.018 ms
Piping your ping -c 5 -q through:
grep -E -o '[0-9]+ received' | cut -f1 -d' '
Yields:
5
And then you can perform your arithmetic.
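For example, a minimal sketch of that arithmetic (my own addition, assuming the 5 packets sent in the question):
received=$(ping -c 5 -q "$host" | grep -E -o '[0-9]+ received' | cut -f1 -d' ')
echo $(( received * 100 / 5 ))   # percentage of the 5 packets that were received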
echo $((100-$(ping -c 5 -q www.google.hu | sed -rn "/packet loss/ s#.* ([0-9]+)%.*#\1#p")))
Try a script:
#!/bin/bash
rec=$(ping -c $1 -q $2 | grep -c "$2" | sed -r 's_$_ / \$1_' | xargs expr)
Save it, and run it with two command line args. The first is number of packets, the second is the host.
Does this work for you?
bc -l <<<100-$(ping -c 5 -q $host |
grep -o '[0-9]*% packet loss' |
cut -f1 -d% )
It takes the percentage reported by ping and subtracts it from 100 to get the percentage of received packets.

Filtering Linux command output

I need to get a row based on column value just like querying a database. I have a command output like this,
Name         ID    Mem  VCPUs  State    Time(s)
Domain-0      0  15485     16  r-----   1779042.1
prime95-01      512       1    -b----   61.9
Here I need to list only those rows where the state is "r". Something like this:
Domain-0  0  15485  16  r-----  1779042.1
I have tried using grep and awk but still have not succeeded.
Any help is much appreciated.
There is a variety of tools available for filtering.
If you only want lines with "r-----" grep is more than enough:
command | grep "r-----"
Or
cat filename | grep "r-----"
grep can handle this for you:
yourcommand | grep -- 'r-----'
It's often useful to save the (full) output to a file to analyse later. For this I use tee.
yourcommand | tee somefile | grep 'r-----'
If you want to find the line containing "-b----" a little later on without re-running yourcommand, you can just use:
grep -- '-b----' somefile
No need for cat here!
I recommend putting -- after your call to grep since your patterns contain minus-signs and if the minus-sign is at the beginning of the pattern, this would look like an option argument to grep rather than a part of the pattern.
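A related safeguard (my note, using standard grep behaviour): the -e option explicitly marks its argument as a pattern, so it too copes with patterns that begin with a minus sign:
grep -e '-b----' somefile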
try:
awk '$5 ~ /^r.*/ { print }'
Like this:
cat file | awk '$5 ~ /^r.*/ { print }'
grep solution:
command | grep -E "^([^ ]+ ){4}r"
What this does (-E switches on extended regexp):
The first caret (^) matches the beginning of the line.
[^ ] matches exactly one occurrence of a non-space character; the following modifier (+) allows it to also match more occurrences.
Grouped together with the trailing space in ([^ ]+ ), it matches any sequence of non-space characters followed by a single space. The modifier {4} requires this construct to be matched exactly four times.
The single "r" is then the literal character you are searching for.
In plain words this could be written as: "If the line starts (^) with four strings that are each followed by a space (([^ ]+ ){4}) and the next character is r, then the line matches."
A very good introduction into regular expressions has been written by Jan Goyvaerts (http://www.regular-expressions.info/quickstart.html).
Filtering with awk in Linux:
First, find the matching row and store it in file2:
awk '/Domain-0 0 15485 /' file1 >file2
Output:
Domain-0 0 15485 16 r----- 1779042.1
After that, run awk on file2 to split the row:
awk '{print $1,$2,$3,$4,"\n",$5,$6}' file2
Final output:
Domain-0 0 15485 16
r----- 1779042.1
