grep command not working as I expect - Linux

I have a text file like the one below, and along with it I will pass an input for which I want the corresponding output.
Input file: test.txt
abc:abc_1
abcd:abcd_1
1_abcd:1_abcd_bkp
xyz:xyz_2
So if I pass abc with the above test.txt file, I want abc_1; and if I pass abcd, I need abcd_1 as output.
I tried cat test.txt | grep abc | cut -d":" -f2,2, but I am getting the output
abc_1
abcd_1
1_abcd_bkp
when I want only abc_1.

With GNU grep:
grep -Po "^abc:\K.*" file
Output:
abc_1
\K keeps the text matched so far out of the overall regex match.
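For comparison, here is roughly what the same line prints with and without \K (using printf to stand in for the file):
$ printf 'abc:abc_1\n' | grep -Po '^abc:.*'
abc:abc_1
$ printf 'abc:abc_1\n' | grep -Po '^abc:\K.*'
abc_1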

You want to anchor the pattern as a regular expression (the -e switch simply marks the next argument as the pattern).
In particular, regular expressions allow you to use a caret (^) to match the start of a line.
Since you only care about abc when it's at the start of a line and followed by :, you want:
cat test.txt | grep -e "^abc:" | cut -d":" -f2,2
Output:
abc_1
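As an aside, grep can read the file itself, so the cat is not needed; something like this should behave the same:
$ grep '^abc:' test.txt | cut -d: -f2
abc_1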

awk to the rescue!
awk -F: -v key="abc" '$1==key{print $2}' file
Using : as the delimiter, it looks up key in field 1 and returns field 2.
Or, with the key embedded in the script:
awk -F: '$1=="abc"{print $2}' file
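A quick sketch of how this should behave against the asker's test.txt:
$ awk -F: -v key="abc" '$1==key{print $2}' test.txt
abc_1
$ awk -F: -v key="abcd" '$1==key{print $2}' test.txt
abcd_1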

You can try excluding with -v:
cat test.txt | grep abc | grep -vi 'abc[a-z]'
Not sure if that would work exactly; try something with that kind of idea.
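For what it's worth, on the sample test.txt this should narrow things down to the right line (add the cut back to get just the value):
$ cat test.txt | grep abc | grep -vi 'abc[a-z]'
abc:abc_1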

Without specifying the second field to print, the whole matching line (or lines) would be printed.
awk -F: '/abc_/{print $2}' file
abc_1
awk -F: 'NR==1,/abc/{print $2}' file
abc_1
(The second form is an awk range: it prints $2 for the lines from line 1 through the first line matching /abc/, which on this file is just the first line.)

bash: awk print within print

I need to grep some pattern and then print part of the matching line. Currently I am using the command below, which works fine, but I would like to eliminate the multiple pipes and use a single awk command to achieve the same output. Is there a way to do it using awk?
root@Server1 # cat file
Jenny:Mon,Tue,Wed:Morning
David:Thu,Fri,Sat:Evening
root@Server1 # awk '/Jenny/ {print $0}' file | awk -F ":" '{ print $2 }' | awk -F "," '{ print $1 }'
Mon
I want to get this output using single awk command. Any help?
You can try something like:
awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
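Run against the sample file, this should give:
$ awk -F: '/Jenny/ {split($2,a,","); print a[1]}' file
Mon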
Try this
awk -F'[:,]+' '/Jenny/{print $2}' file.txt
It uses multiple delimiter characters inside the [ ].
The + means one or more, since the -F value is treated as a regex.
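To see the resulting field layout, a quick check (field numbers assumed from the sample line Jenny:Mon,Tue,Wed:Morning):
$ awk -F'[:,]+' '/Jenny/{print $1, $2, $5}' file
Jenny Mon Morning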
For this particular job, I find grep to be slightly more robust.
Unless your company has a policy not to hire people named Eve.
(Try it out if you don't understand.)
grep -oP '^[^:]*Jenny[^:]*:\K[^,:]+' file
Or to do a whole-word match:
grep -oP '^[^:]*\bJenny\b[^:]*:\K[^,:]+' file
Or when you are confident that "Jenny" is the full name:
grep -oP '^Jenny:\K[^,:]+' file
Output:
Mon
Explanation:
The stuff up until \K speaks for itself: it selects the line(s) with the desired name.
[^,:]+ captures the day of week (in this case Mon).
\K cuts off everything preceding Mon.
-o cuts off anything following Mon.
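If the Eve hint is unclear, here is a sketch of the pitfall with the substring-matching answers: Eve matches inside Evening, so David's line gets picked up too.
$ awk -F'[:,]+' '/Eve/{print $2}' file
Thu
The \bJenny\b variant above avoids this by requiring a whole-word match.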

Bash: grep output filename and line number without the match

I need to get a list of matches with grep, including filename and line number but without the matched string.
I know that grep -Hl gives only file names and grep -Hno gives the filename with only the matching string, but those are not ideal for me. I need a list without the match but with the line number; grep -Hln doesn't work for this. I tried grep -Hn 'pattern' | cut -d " " -f 1, but it doesn't cut the filename and line number properly.
awk can do that in a single command (FNR is the per-file line number, which is what you want when matching across several files):
awk '/pattern/ {print FILENAME ":" FNR}' *.txt
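Assuming the same sample files as in the grep example below, this should print:
$ awk '/a/ {print FILENAME ":" FNR}' a a23
a:3
a:10
a:11
a23:1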
You were on the right track with cut; you just need : as the field separator. Also, you need the first and second fields. Hence, use:
grep -Hn 'pattern' files* | cut -d: -f1,2
Sample
$ grep -Hn a a*
a:3:are
a:10:bar
a:11:that
a23:1:hiya
$ grep -Hn a a* | cut -d: -f1,2
a:3
a:10
a:11
a23:1
I guess you want this, just line numbers:
grep -nh PATTERN /path/to/file | cut -d: -f1
example output:
12
23
234
...
Unfortunately you'll need to use cut here. There is no way to do it with pure grep.
Try
grep -RHn 'pattern' . | awk -F: '{print $1 ":" $2}'

How to run grep inside awk?

Suppose I have a file input.txt with a few columns and rows; the first column is the key. I also have a directory dir with files that contain some of these keys. I want to find all lines in the files in dir which contain these keywords. At first I tried to run the command
cat input.txt | awk '{print $1}' | xargs grep dir
This doesn't work because it treats the keys as paths on my file system. Next I tried something like
cat input.txt | awk '{system("grep -rn dir $1")}'
But this didn't work either; eventually I had to admit that even this doesn't work:
cat input.txt | awk '{system("echo $1")}'
After I tried to use \ to escape the whitespace and the $ sign, I came here to ask for your advice. Any ideas?
Of course I can do something like
for x in `cat input.txt` ; do grep -rn $x dir ; done
This is not good enough because it takes two commands, whereas I want only one. This also shows why xargs doesn't work: the parameter is not the last argument.
You don't need grep with awk, and you don't need cat to open files:
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
Nor do you need xargs, or shell loops or anything else - just one simple awk command does it all.
If input.txt is not a file, then tweak the above to:
real_input_generating_command |
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' - dir/*
All it's doing is creating an array of keys from the first file (or input stream) and then looking for each key from that array in every file in the dir directory. (NR==FNR is only true while the first input is being read, since FNR resets for each new file.)
Try the following:
awk '{print $1}' input.txt | xargs -n 1 -I pattern grep -rn pattern dir
First thing you should do is research this.
Next ... you don't need to grep inside awk. That's completely redundant. It's like ... stuffing your turkey with ... a turkey.
Awk can process input and do "grep"-like things itself, without launching the grep command. But you don't even need to do this. Adapting your first example:
awk '{print $1}' input.txt | xargs -n 1 -I % grep % dir
This uses xargs' -I option to put xargs' input into a different place on the command line it runs. In FreeBSD or OSX, you would use a -J option instead.
But I prefer your for loop idea, converted into a while loop:
while read key junk; do grep -rn "$key" dir ; done < input.txt
Use process substitution to create a keyword "file" that you can pass to grep via the -f option:
grep -f <(awk '{print $1}' input.txt) dir/*
This will search each file in dir for lines containing keywords printed by the awk command. It's equivalent to
awk '{print $1}' input.txt > tmp.txt
grep -f tmp.txt dir/*
grep requires parameters in order: [what to search] [where to search]. You need to merge keys received from awk and pass them to grep using the \| regexp operator.
For example:
arturcz@szczaw:/tmp/s$ cat words.txt
foo
bar
fubar
foobaz
arturcz@szczaw:/tmp/s$ grep 'foo\|baz' words.txt
foo
foobaz
Finally, you will end up with something like:
grep `commands|to|prepare|a|keywords|list` directory
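One concrete way to prepare such a list (a sketch, assuming the keys contain no regex metacharacters; -E makes the joined keys an alternation without needing \|):
$ grep -rnE "$(awk '{print $1}' input.txt | paste -s -d'|')" dir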
In case you still want to use grep inside awk, make sure $1, $2, etc. are outside the quotes.
E.g. this works:
cat file_having_query | awk '{system("grep " $1 " file_to_be_greped")}'
(Notice the space after grep and before the file name. The command string is built by concatenation, so a key containing shell metacharacters would break it.)

Linux cut string

In Linux (CentOS) I have a file that contains additional information that I want to remove. I want to generate a new file with all characters up to the first |.
The file has the following information:
ALFA12345|7890
Beta0-XPTO-2|30452|90 385|29
ZETA2334423 435; 2|2|90dd5|dddd29|dqe3
The output expected will be:
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2
That is, remove all characters from the first | onward (the | included).
Any suggestions for a script that reads File1 and generates File2 with this specific requirement?
Try
cut -d'|' -f1 oldfile > newfile
And, to round out the "big 3", here's the awk version:
awk -F\| '{print $1}' in.dat
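Both of these should reproduce the expected output from the question:
$ awk -F\| '{print $1}' in.dat
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2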
You can use a simple sed script.
sed 's/^\([^|]*\).*/\1/g' in.dat
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2
Redirect to a file to capture the output.
sed 's/^\([^|]*\).*/\1/g' in.dat > out.dat
And with grep:
$ grep -o '^[^|]*' file1
ALFA12345
Beta0-XPTO-2
ZETA2334423 435; 2
$ grep -o '^[^|]*' file1 > file2

using awk on a string

Can I use awk to extract the first column, or any column, from a string?
Actually, I am reading a file into a variable, and I want to run awk on that variable to do my job.
How is this possible? Any suggestions?
Print first column*:
<some output producing command> | awk '{print $1}'
Print second column:
<some output producing command> | awk '{print $2}'
etc.
Where <some output producing command> is like cat filename.txt or echo $VAR, etc.
e.g. ls -l | awk '{print $9}' extracts the ninth column, which is like an ... awkward way of ls -1
*Columns are defined by the separating whitespace.
EDIT: If your text is already in a variable, something like:
VAR2=$(echo "$VAR" | awk '{print $9}')
would work, provided you change 9 to the desired column.
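For example, a minimal sketch with the text already in a variable (line, col1 and file.txt are hypothetical names):
$ line=$(head -n 1 file.txt)
$ col1=$(echo "$line" | awk '{print $1}')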
