How to extract a single word from the output of a Linux command? - linux

$ lsusb --verbose | grep "THRSL_C_C_V"
results in:
iManufacturer 1 THRSL_C_C_V3.07
I want to extract the word THRSL_C_C_V3.07 only.
I tried lsusb --verbose | grep -w "THRSL_C_C_V", but it didn't show anything.

Try:
lsusb --verbose | grep -o "THRSL_C_C_V3\.07"
The -o option prints only the matching text, and the backslash escapes the dot, allowing an exact match of the requested text.

You may use awk instead of grep:
lsusb --verbose | awk '/THRSL_C_C_V/{print $3}'
This awk command searches for a line containing the text THRSL_C_C_V, and print $3 prints the third column of the matched line.
Alternatively you can use grep -o like this:
echo 'iManufacturer 1 THRSL_C_C_V3.07' |
grep -oE 'THRSL_C_C_V[^[:blank:]]*'
THRSL_C_C_V3.07

Use the following:
$ lsusb --verbose | grep "THRSL_C_C_V" | awk '{print $NF}'
or
$ lsusb --verbose | awk '/THRSL_C_C_V/{print $NF}'
Here, $NF refers to the last field of the output line.
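As a quick check against the line quoted in the question (no device needed):
echo 'iManufacturer 1 THRSL_C_C_V3.07' | awk '/THRSL_C_C_V/{print $NF}'
THRSL_C_C_V3.07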

Related

Display the lines of a file that contain the pattern 'Pap' but do not contain the pattern 'Aig'

How do I make such a double query?
I tried grep -e Pap -e -v Aig filename.txt and grep 'Pap' && -v 'Aig' filename.txt,
but neither works.
Use a pipe between your filters:
grep -e Pap filename.txt | grep -v Aig
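For example, with some made-up lines standing in for the contents of filename.txt:
printf 'Pap only\nPap and Aig\nAig only\n' | grep -e Pap | grep -v Aig
Pap only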

Sum out of grep -c

I am trying to find the number of times an event occurred in my log file.
Command:
grep -Eo "2016-08-30" applciationLog.log* -c
Output:
applciationLog.log.1:0
applciationLog.log.2:0
applciationLog.log.3:0
applciationLog.log.4:0
applciationLog.log.5:7684
applciationLog.log.6:9142
applciationLog.log.7:8699
applciationLog.log.8:0
What I actually need is the sum of all these values: 7684 + 9142 + 8699 = 25525. Any suggestion how I can do it? Is there anything I can append to the grep to enable it?
Any help or pointers are welcome and appreciated.
If you want to keep your grep command, pipe its output to awk; the quick and dirty way is shown below:
grep -Eo -c "aaa" aaa.txt bbb.txt | awk 'BEGIN {cnt=0;FS=":"}; {cnt+=$2;}; END {print cnt;}'
Or use an awk regex directly:
awk 'BEGIN {cnt=0}; {if(/aaa/) {cnt+=1;}}; END {print cnt;}' aaa.txt bbb.txt
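Applied directly to the files from the question (assuming the same file names), that second form would look like:
awk 'BEGIN {cnt=0}; {if(/2016-08-30/) {cnt+=1;}}; END {print cnt;}' applciationLog.log*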
As an addition to the answer already given by ghoti:
You can avoid awk -F: by using grep -h:
grep -c -h -F "2016-08-30" applicationLog.log* | awk '{n+=$0} END {print n}'
This way grep prints only the counts, without file names, so we can use the first field for the addition in awk.
See if this works for you:
grep -Eo "2016-08-30" applciationLog.log* -c | awk -F':' 'BEGIN {sum = 0;} {sum += $2;} END {print sum;}'
We use awk to split each line with a delimiter of :, sum the numbers from each line, and print the result at the end.
The grep command doesn't do arithmetic, it just finds lines that match regular expressions.
To count the output you already have, I'd use awk.
grep -c -F "2016-08-30" applciationLog.log* | awk -F: '{n+=$2} END {print n}'
Note that your grep options didn't make sense -- -E tells the command to use Extended regular expressions, but you're just looking for a fixed string (the date). So I swapped in the -F option instead. And -o tells grep to print the matched text, which you've overridden with -c, so I dropped it.
An alternative using for-loop and arithmetic expansion could be:
x=0
for i in $(grep -hc "2016-08-30" applciationLog.log*); do
  x=$((x+i))
done
echo "$x"
An easy alternative is to merge all the files before grep sees them:
cat applciationLog.log* | grep -Eo "2016-08-30" -c
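Since all the log lines now arrive on one stream, grep prints a single total. The -Eo isn't strictly needed when you only want a count, so this could also be written as below; for the per-file counts shown in the question it should print 25525:
cat applciationLog.log* | grep -c "2016-08-30"
25525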
In my directory I have hundreds of files; each file contains a lot of text along with lines similar to this:
Job_1-Run.log:[08/27/20 01:28:40] Total Jobs Cancelled for Job_1_set0 = 10
I do
grep '^Total Jobs Cancelled' ./*
to get that above line.
Then I do a pipe
| awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;}; END {print cnt;}'
so my final command is:
grep '^Total Jobs Cancelled' ./* | awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;};END {print cnt;}'
and the result is the sum, e.g.:
900
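To see why splitting on = works, you can run the same awk on just the single sample line quoted above:
echo 'Job_1-Run.log:[08/27/20 01:28:40] Total Jobs Cancelled for Job_1_set0 = 10' | awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;}; END {print cnt;}'
10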
I am using Cmder # https://cmder.net/
Thanks to the answers by #alagner and #john above.

Count lines of CLI output in Linux

Hi, I have the following command:
lsscsi | grep HITACHI | awk '{print $6}'
I want the output to be the number of lines of the original output.
For example, if the original output is:
/dev/sda
/dev/sdb
/dev/sdc
The final output will be 3.
Basically, the command wc -l can be used to count the lines in a file or a pipe. However, since you want to count the number of lines after a filter has been applied, I would recommend using grep for that:
lsscsi | grep -c 'HITACHI'
-c just prints the number of matching lines.
Another thing: in your example you are using grep ... | awk. That's a useless use of grep. It should be:
lsscsi | awk '/HITACHI/{print $6}'
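And if you prefer the wc -l approach mentioned above, it fits on the end of that corrected pipeline and gives the same count as grep -c (3 for the example output in the question):
lsscsi | awk '/HITACHI/{print $6}' | wc -l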

grep a particular content before a period

I am trying to read/grep a particular word or content that is before a period (.).
e.g. file1 has abinaya.ashok and I want to grep whatever is before the period (.) without hardcoding anything.
If I try
grep \.\ file1
it gives abinaya.ashok.
I've tried: grep \*\.\ file1
but it doesn't give anything. Can we find it using grep commands, or should we do it only using an awk command? Any thoughts?
Using GNU grep with PCRE regexes (for non-greedy matching and positive look-ahead), you can do:
echo 'abinaya.ashok' | grep -oP '.*?(?=\.)'
abinaya
Using awk:
echo 'abinaya.ashok' | awk -F\. '{print $1}'
abinaya
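Both one-liners above read the text from echo; to run them against file1 from the question you can pass the file directly (assuming file1 contains the line abinaya.ashok):
awk -F'.' '{print $1}' file1
abinaya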
Check the following simple examples.
Including the dot:
$ echo abinaya.ashok | grep -o '.*[.]'
abinaya.
Without the dot:
$ echo abinaya.ashok | grep -o '^[^.]\+'
abinaya
Hope I understand you correctly:
sed -n 's/\..*//p' file1 | grep whatever
The sed expression prints only the part before the dot (lines without a dot are not printed at all).
Now use grep to search for what you need.
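For instance, if file1 contains abinaya.ashok and you are searching for the made-up term abin:
sed -n 's/\..*//p' file1 | grep abin
abinaya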

How to extract version from a single command line in linux?

I have a product which has a command called db2level, whose output is given below.
I need to extract 8.1.1.64 out of it. So far I came up with:
db2level | grep "DB2 v" | awk '{print$5}'
which gave me the output v8.1.1.64",
Please help me to fetch 8.1.1.64. Thanks
grep is enough to do that:
db2level| grep -oP '(?<="DB2 v)[\d.]+(?=", )'
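If you want to test that regex without a DB2 installation, you can feed grep a stand-in line; the wording here is invented, only the "DB2 v8.1.1.64", part matters:
echo 'tokens are "DB2 v8.1.1.64", ...' | grep -oP '(?<="DB2 v)[\d.]+(?=", )'
8.1.1.64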
Just with awk:
db2level | awk -F '"' '$2 ~ /^DB2 v/ {print substr($2,6)}'
db2level | grep "DB2 v" | awk '{print$5}' | sed 's/[^0-9\.]//g'
This removes everything except digits and dots.
sed is your friend for general extraction tasks:
db2level | sed -n -e 's/.*tokens are "DB2 v\([0-9.]*\)".*/\1/p'
The sed line prints no lines (because of -n) except those where a replacement with the given regexp succeeds. The .* at the beginning and the end of the expression ensures that the whole line is matched and replaced.
Try grep with -o option:
db2level | grep -E -o "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+"
Another sed solution:
db2level | sed -n -e '/v[0-9]/{s/.*DB2 v//;s/".*//;p}'
This one doesn't rely on the number being in a particular format, just on its being in a particular place in the output.
db2level | grep -o "v[0-9.]*" | tr -d v
Try something like db2level | grep "DB2 v" | cut -d'"' -f2 | cut -d'v' -f2
cut splits the input into parts separated by the delimiter given with -d and outputs the field number given with -f.
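Here too you can check the pipeline on a made-up line that contains the quoted token (only the "DB2 v8.1.1.64" part matters):
echo 'tokens are "DB2 v8.1.1.64",' | cut -d'"' -f2 | cut -d'v' -f2
8.1.1.64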
