How to know which file holds the grep result? - linux

There is a directory which contains 100 text files. I used grep to search for a given text in the directory as follows:
cat *.txt | grep Ya_Mahdi
and grep shows Ya_Mahdi.
I need to know which file holds the text. Is it possible?

Just get rid of cat and provide the list of files to grep:
grep Ya_Mahdi *.txt
While this would generally work, depending on the number of .txt files in that folder, the argument list for grep might get too large.
You can use find for a bullet-proof solution:
find . -maxdepth 1 -name '*.txt' -exec grep -H Ya_Mahdi {} +
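If you only need the names of the matching files rather than the matching lines, grep's -l option prints just the file names; a minimal sketch:
grep -l Ya_Mahdi *.txt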

Related

Run recursive grep using two patterns

How can I use this grep pattern to recursively search a directory? I need both of these strings to be on the same line in the file. I keep getting the message back that this is a directory. How can I make it search recursively all files with the extension .cfc?
"<cffunction" and "inject="
grep -insR "<cffunction" | grep "inject=" /c/mydirectory/
Use find and exec:
find your_dir -name "*.cfc" -type f -exec grep -insE 'inject=.*<cffunction|<cffunction.*inject=' /dev/null {} +
find finds your *.cfc files recursively, picking only regular files (-type f), and feeds them into grep
inject=.*<cffunction|<cffunction.*inject= catches lines that have your patterns in either order
{} + ensures each invocation of grep gets up to ARG_MAX files
the /dev/null argument to grep ensures that the output is prefixed with the file name even when a single *.cfc file is passed
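With GNU grep, the -H option is an alternative to the /dev/null trick for forcing the file name prefix; a sketch:
find your_dir -name "*.cfc" -type f -exec grep -insHE 'inject=.*<cffunction|<cffunction.*inject=' {} +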
You've got it backwards, you should pipe your file search to the second command, like:
grep -nisr "inject=" /c/mydirectory | grep "<cffunction"
edit: to exclude some directories and search only in *.cfc files, use:
grep -nisr --exclude-dir={/abs/dir/path,rel/dir/path} --include \*.cfc "inject=" /c/mydirectory | grep "<cffunction"
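Note that --exclude-dir={/abs/dir/path,rel/dir/path} relies on shell brace expansion, which the shell turns into two separate --exclude-dir options; an equivalent explicit form (a sketch):
grep -nisr --exclude-dir=/abs/dir/path --exclude-dir=rel/dir/path --include \*.cfc "inject=" /c/mydirectory | grep "<cffunction"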

Searching and moving files

I have 9000+ XML files in a folder. I'm searching for those that contain a certain word and then copying them to a certain location. I'm using the terminal:
grep -r "the word I'm searching"
It's working but I'm looking for a better and faster way if anybody has an idea.
Easy and efficient way:
find . -name '*.xml' | xargs grep -l 'your search string' \
| xargs mv -t your_target_directory
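If any of the file names contain whitespace, a null-delimited variant of the same pipeline is safer (a sketch, assuming GNU find, grep and xargs):
find . -name '*.xml' -print0 | xargs -0 grep -lZ 'your search string' | xargs -0 mv -t your_target_directory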
You can do it in a single line using the following code:
mv `grep -rl 'the word you are searching for' .` directoryname/
This works only if your directory contains only XML files.
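If you actually want to copy the matching files (as in the question) rather than move them, and the file names contain no whitespace, a sketch assuming GNU cp's -t option:
cp -t directoryname $(grep -l 'the word you are searching for' *.xml)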

Find files containing text using grep where the file name starts with a given character

I want to find files containing some text, but only those whose names start with a certain character.
grep -inr "my text" .
The above command shows all the files containing the text. But what I want is only the files that contain the text and whose names start with E*.
You can use this,
find . -name 'E*' -exec grep -Hl "sample" {} \;
Explanation:
-H : Print the file name for each match.
-l : Suppress normal output; print only the names of matching files
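With \; grep is run once per file; if there are many E* files, terminating -exec with + instead batches the file names into as few grep invocations as possible (a sketch):
find . -name 'E*' -exec grep -l "sample" {} +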
You can combine find and grep:
find . -name "E*" | xargs grep -nH "my text"
You can also use find's -exec parameter instead of xargs. Take a look at its man page for this: man find
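For example, the -exec form of the same command could look like this (a sketch):
find . -name "E*" -exec grep -nH "my text" {} +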
If you only want to search one directory level deep, then I think the most efficient way would be:
grep <pattern> E*
For multiple levels you can use something like this:
grep <pattern> */*/E*
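Another way to limit the depth is find's -maxdepth option; a sketch for a single level:
find . -maxdepth 1 -name 'E*' -exec grep -l 'my text' {} +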

Search and replace entire files

I've seen numerous examples for replacing one string with another among multiple files but what I want to do is a bit different. Probably a lot simpler :)
Find all the files that match a certain string and replace them completely with the contents of a new file.
I have a find command that works
find /home/*/public_html -name "index.php" -exec grep "version:1.23" '{}' \; -print
This finds all the files I need to update.
Now how do I replace their entire content with the CONTENTS of /home/indexnew.txt (I could also name it /home/index.php)
I emphasize content because I don't want to change the name or ownership of the files I'm updating.
find ... | while read filename; do cat static_file > "$filename"; done
Efficiency hint: use grep -q; it returns "true" as soon as the first match is found, without having to read the entire file.
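Putting that together with the find command from the question, a sketch (assuming the replacement content is in /home/indexnew.txt):
find /home/*/public_html -name "index.php" -exec grep -q "version:1.23" {} \; -print | while IFS= read -r filename; do cat /home/indexnew.txt > "$filename"; done
Overwriting with > keeps each file's name and ownership, since the existing file is truncated rather than replaced.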
If you have a bunch of files you want to replace, and you can get all of their names using wildcards, you can try piping output to the tee command:
cat my_file | tee /home/*/update.txt
This should look through all the directories in /home and write the text in my_file to update.txt in each of those directories.
Let me know if this helps or isn't what you want.
I am not sure whether your command without -l (and then -print) is better than adding -l to grep to list the files directly.
find /home/*/public_html -name "index.php" -exec grep -l "version:1.23" '{}' \; | xargs -I {} cp /home/index.php {}
Here is the detail of the -l option from the man page:
-l, --files-with-matches
Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match. (-l is specified by POSIX.)
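A whitespace-tolerant variant of the same idea, assuming GNU grep and xargs:
find /home/*/public_html -name "index.php" -exec grep -lZ "version:1.23" {} + | xargs -0 -I {} cp /home/index.php {}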

How can I search multiple files for a single word or phrase using grep and strings?

I'm trying to look for a word like "numbers" in multiple files, not just .txt files, using the terminal. I have tried:
strings -r /media/E016-5484/* | grep numbers
But it still doesn't work!
Let's say you are looking for 1234 in all files whose names contain file_pattern:
grep 1234 `find . -name "*file_pattern*"`
or
find . -name "*file_pattern*" -exec grep 1234 {} \;
If I am not mistaken, you are looking for
grep numbers -r /media/E016-5484
From the manpage:
-r, --recursive
Read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -d recurse option.
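So, to list just the names of the matching files under that directory, -r can be combined with -l (a sketch):
grep -rl numbers /media/E016-5484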
