How to combine two search words with "grep" (AND) - linux

With grep, I can find a word within 50 files in a local folder, e.g.:
grep -i "hello" *.html
But how can I find files that contain TWO words? For example, I would like to find all files that contain the word "hello" AND the word "peter". How can I combine two greps?

To see the files containing both words (possibly on different lines), use -l and xargs:
grep -il "hello" *.html | xargs grep -il "peter"
Edit
If your files have spaces in their names, then we need to be a little more careful. For that we can use the -Z option of grep together with the -0 option of xargs:
grep -ilZ "hello" *.html | xargs -0 grep -il "peter"

egrep -irl "hello" * | egrep -irl "peter" `cat -`
This will search recursively in all subfolders and match the regex inside the quotes.
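If the file names might contain spaces, the same recursive idea can be made safe with the -Z/-0 pairing shown above (a sketch, assuming GNU grep and xargs):
grep -rilZ "hello" . | xargs -0 grep -il "peter"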

Related

How can I use grep to get all the lines that contain string1 and string2 separated by a space?

Line1: .................
Line2: #hello1 #hello2 #hello3
Line3: .................
Line4: .................
Line5: #hello1 #hello4 #hello3
Line6: #hello1 #hello2 #hello3
Line7: .................
I have files with lines like this in one of my project directories. I want to count all the lines that contain both #hello1 and #hello2. In this case I would get 2 as the result for this file alone. However, I want to do this recursively.
The canonical way to "do something recursively" is to use the find command. If you want to find lines that have two words on them, a simple regex will do:
grep -lr '#hello1.*#hello2' .
The option -l instructs grep to show us only filenames rather than file content, and the option -r tells grep to traverse the filesystem recursively. The start of the search is the path at the end of the line. Once you have the list of files, you can parse that list using commands run by xargs.
For example, this will count all the lines in files matching the pattern you specified.
grep -lr '#hello1.*#hello2' . | xargs -n 1 wc -l
This uses xargs to run the wc command on each of the files listed by grep. You could probably also run this without the -n 1, unless you're dealing with many thousands of files that would exceed your maximum command-line length.
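For reference, the same pipeline without -n 1 might look like this (a sketch; xargs batches the file names, so wc also prints a total line per batch, and plain xargs will mangle file names that contain spaces):
grep -lr '#hello1.*#hello2' . | xargs wc -l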
Or, if I'm interpreting your question correctly, the following will count just the patterns in those files.
grep -lr '#hello1.*#hello2' . | xargs -n 1 grep -Hc '#hello1.*#hello2'
This runs a similar grep to the one used to generate your recursive list of files, and presents the output with filename (-H) and count (-c).
But if you want complex rules like finding two patterns possibly on different lines in the file, then grep probably is not the optimal tool, unless you use multiple greps launched by find:
find /path/to/base -type f \
-exec grep -q '#hello1' {} \; \
-exec grep -q '#hello2' {} \; \
-print
(Lines split for easier reading.)
This is somewhat costly, as find needs to launch up to two children for each file. So another approach would be to use awk instead:
find /path/to/base -type f \
-exec awk '/#hello1/{a=1} /#hello2/{b=1} a&&b{r=1} END{exit 1-r}' {} \; \
-print
Alternately, if your shell is bash version 4 or above, you can avoid using find and use the bash option globstar:
$ shopt -s globstar
$ awk 'FNR==1{a=b=0} /#hello1/{a=1} /#hello2/{b=1} a&&b{print FILENAME; nextfile}' **/*
Note: none of this is tested.
If you are not interested in the individual files, only in the total number of matching lines,
then just something along these lines:
find $BASEDIRECTORY -type f -print0 | xargs -0 grep -h PATTERN | wc -l
If you want to count lines containing #hello1 and #hello2 separated by a space in a specific file, you can do:
$ grep -c '#hello1 #hello2' file
If you want to count in more than one file:
$ grep -c '#hello1 #hello2' file1 file2 ...
And if you want to get the grand total (the -h suppresses the file-name prefix so that bc only sees numbers):
$ grep -hc '#hello1 #hello2' file1 file2 ... | paste -s -d+ - | bc
Of course you can let your shell expand the file names. So, for example:
$ grep -hc '#hello1 #hello2' *.txt | paste -s -d+ - | bc
or so...
find . -type f | xargs -n 1 awk '/#hello1/ && /#hello2/{c++} END{print FILENAME, c+0}'
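Since the question asks for a recursive count, here is also a sketch that sums grep's per-file counts with awk instead of paste and bc (assumes GNU grep for -r; the count is the field after the last colon of each filename:count line):
grep -rc '#hello1 #hello2' . | awk -F: '{total += $NF} END{print total}'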

Recursively locate all files that have string "a" AND string "b" using grep

I've been using the following command to recursively search directories for a string.
grep -Rn "myString" *
I was wondering if someone would be so kind as to teach me how to search for multiple
strings in the same file recursively. That is, I want to locate all file names that have both "String1" and "String2."
If I could know the line number of each string within the file that contains both strings as well that would be great.
I've been trying several things without success. I want to start the search in a base directory and recursively search downward through all the subdirectories. If someone could help me with this, I would greatly appreciate it.
Pipe the results of your first search to grep again:
grep -RlZ "String1" . | xargs -0 grep -l "String2"
This would list the files containing both String1 and String2.
Getting the line numbers for the files containing both strings probably wouldn't be very efficient, since you would need to know that set of files beforehand. One way would be to again pipe the results to grep:
grep -RlZ "String1" . | xargs -0 grep -lZ "String2" | xargs -0 grep -En 'String1|String2'
You can have find cascade the checks for you:
find . -type f -exec fgrep -q 'myString1' {} \; \
-exec fgrep -q 'myString2' {} \; \
-exec fgrep -q 'myString3' {} \; \
-print
grep --null -rl String1 . | xargs -0 grep --null -l String2 | xargs -0 grep -n -e String1 -e String2
There are a few ways to do this, but since you need files with both matching strings, you can find filenames with one match, then rescan them for the second. The first grep finds filenames with the first pattern; the second re-scans those files for the second string. Finally, a third grep prints out line numbers with matches.
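The same pipeline, split across lines and commented (the --null/-Z output together with xargs -0 keeps file names with spaces intact):
grep --null -rl String1 . |                 # files containing String1, NUL-separated
    xargs -0 grep --null -l String2 |       # of those, keep the files that also contain String2
    xargs -0 grep -n -e String1 -e String2  # print matching lines with their line numbers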

open 100 files in vim

I need to grep tons (10k+) of files for specific words.
That returns a list of files that I also need to grep for another word.
I found that grep can do this, so I use:
grep -rl word1 *
which returns the list of files I want to check.
Now, from these files (100+), I need to grep another word, so I do another grep:
vim `grep word2 `grep -rl word1 *``
but that hangs and does not do anything.
Why?
Because you have nested backquotes (the inner ` ends the outer one); you need to use $() instead:
vi `grep -l 'word2' $(grep -rl 'word1' *)`
Or you can use nested $(...) (like goblar mentioned)
vi $(grep -l 'word2' $(grep -rl 'word1' *))
grep -rl 'word1' | xargs grep -l 'word2' | xargs vi
is another option.
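If any of the file names contain spaces, a NUL-safe sketch of the same idea (assumes GNU grep; the -o flag, available in BSD xargs and newer GNU xargs, reconnects vim's input to the terminal):
grep -rlZ 'word1' . | xargs -0 grep -lZ 'word2' | xargs -0 -o vim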

How to find text files not containing text on Linux?

How do I find files not containing some text on Linux? Basically I'm looking for the inverse of the following
find . -print | xargs grep -iL "somestring"
The command you quote, ironically enough, does exactly what you describe.
Test it!
echo "hello" > a
echo "bye" > b
grep -iL BYE a b
It prints a only.
I think you may be confusing -L and -l
find . -print | xargs grep -iL "somestring"
is the inverse of
find . -print | xargs grep -il "somestring"
By the way, consider
find . -print0 | xargs -0 grep -iL "somestring"
Or even
grep -IRiL "somestring" .
You can do it with grep alone (without find).
grep -riL "somestring" .
This is the explanation of the parameters used on grep
-L, --files-without-match
Only the names of files not containing selected lines are written to standard output. Pathnames are listed once per file searched.
-R, -r, --recursive
Recursively search subdirectories listed.
-i, --ignore-case
Perform case insensitive matching.
If you use lowercase -l you will get the opposite (files with matches):
-l, --files-with-matches
Only the names of files containing selected lines are written to standard output.
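A quick way to see the difference between -l and -L, using two throwaway files (the file names here are just for illustration):
echo "hello" > with.txt
echo "bye" > without.txt
grep -l "hello" *.txt    # prints with.txt (files with a match)
grep -L "hello" *.txt    # prints without.txt (files without a match)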
To find the Markdown files with no title, combine find and grep:
$ find . -name '*.md' -print0 | xargs -0 grep -iL "title"
Or use grep's -L directly, restricting the recursive search to Markdown files:
$ grep -iL "title" -r ./* --include '*.md'
If you use "find" the script do "grep" also in folder:
[root#vps test]# find | xargs grep -Li 1234
grep: .: Is a directory
.
./test.txt
./test2.txt
[root@vps test]#
Use the "grep" directly:
# grep -Li 1234 /root/test/*
/root/test/test2.txt
/root/test/test.txt
[root@vps test]#
or specify in "find" the options "-type f"...even if you use the find you will put more time (first the list of files and then make the grep).

Regarding grep in Solaris

I want to grep for a particular word in multiple files. The file names are stored in the variable TESTING.
TESTING=$(ls -tr *.txt)
echo $TESTING
test.txt ab.txt bc.txt
grep "word" "$TESTING"
grep: can't open test.txt
ab.txt
bc.txt
This gives me an error. Is there any other way to do it, other than a for loop?
Take the double quotes out from around $TESTING.
grep "word" $TESTING
The double quotes are making your whole file list expand to a single argument to grep. The right way to do this is:
find . -name \*.txt -print0 | xargs -0 grep "word"
No quotes needed I guess.
grep "word" $TESTING
works for me (Ubuntu, bash).
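If any of the .txt names could contain spaces, a bash array (rather than a scalar variable) sidesteps the word-splitting problem entirely; a sketch, assuming bash as in the answer above rather than the old Solaris /bin/sh:
files=( *.txt )              # each file name becomes its own array element
grep "word" "${files[@]}"    # each element is passed to grep as a separate argument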
