find string inside a gzipped file in a folder - linux

My current problem is that I have around 10 folders, each containing gzipped files (about 5 on average). That makes roughly 50 files to open and look at.
Is there a simpler method to find out if a gzipped file inside a folder has a particular pattern or not?
zcat ABC/myzippedfile1.txt.gz | grep "pattern match"
zcat ABC/myzippedfile2.txt.gz | grep "pattern match"
Instead of writing a script, can I do the same in a single line, for all the folders and sub folders?
for f in `ls *.gz`; do echo $f; zcat $f | grep <pattern>; done;

zgrep will look in gzipped files, has a -R recursive option, and a -H show me the filename option:
zgrep -R --include=*.gz -H "pattern match" .
OS-specific commands, as not all arguments work across the board:
Mac 10.5+: zgrep -R --include=\*.gz -H "pattern match" .
Ubuntu 16+: zgrep -i -H "pattern match" *.gz

You don't need zcat here because there are zgrep and zegrep.
If you want to run a command over a directory hierarchy, you use find:
find . -name "*.gz" -exec zgrep ⟨pattern⟩ \{\} \;
Also, "ls *.gz" is useless in for; you should just use "*.gz" in the future.
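For example, a corrected loop (a minimal sketch; zgrep -H prints each matching file's name):
for f in *.gz; do
    zgrep -H "pattern" "$f"
done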

Note that some versions of zgrep don't support -R.
I think the solution from "Nietzche-jou" could be a better answer, but I would add the -H option to show the file name, something like this:
find . -name "*.gz" -exec zgrep -H 'PATTERN' \{\} \;

use the find command
find . -name "*.gz" -exec zcat "{}" + |grep "test"
or try using the recursive option (-r) of zcat

Coming in a bit late on this, I had a similar problem and was able to resolve it using:
zcat -r /some/dir/here | grep "blah"
As detailed here:
http://manpages.ubuntu.com/manpages/quantal/man1/gzip.1.html
However, this does not show the original file that the result matched from, instead showing "(standard input)" as it's coming in from a pipe. zcat does not seem to support outputting a name either.
In terms of performance, this is what we got:
$ alias dropcache="sync && echo 3 > /proc/sys/vm/drop_caches"
$ find 09/01 | wc -l
4208
$ du -chs 09/01
24M
$ dropcache; time zcat -r 09/01 > /dev/null
real 0m3.561s
$ dropcache; time find 09/01 -iname '*.txt.gz' -exec zcat '{}' \; > /dev/null
real 0m38.041s
As you can see, using the find|zcat method is significantly slower than using zcat -r when dealing with even a small volume of files. I was also unable to make zcat output the file name (using -v will apparently output the filename, but not on every single line). It would appear that there isn't currently a tool that will provide both speed and name consistency with grep (i.e. the -H option).
If you need to identify the name of the file that the result belongs to, then you'll need to either write your own tool (could be done in 50 lines of Python code) or use the slower method. If you do not need to identify the name, then use zcat -r.
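Alternatively, if GNU grep is available, a possible middle ground (my own untested sketch, not from the measurements above) is to keep the zcat pipe but label the stream with grep's --label option, which names standard input in the output:
find /some/dir/here -name '*.gz' -print0 |
while IFS= read -r -d '' f; do
    zcat "$f" | grep -H --label="$f" "blah"   # -H forces the label to be shown
done
This still forks one zcat/grep pair per file, so it won't match zcat -r for speed, but it does give you filenames.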
Hope this helps

find . -name "*.gz"|xargs zcat | grep "pattern" should do.

zgrep "string" ./*/*
You can use the above command to search for a string in the .gz files under dir, where dir has the following sub-directory structure:
/dir
    /childDir1
        /file1.gz
        /file2.gz
    /childDir2
        /file3.gz
        /file4.gz
    /childDir3
        /file5.gz
        /file6.gz

You can use this command -
zgrep "foo" $(find . -name "*.gz")

Related

How to grep through many files of the same file type

I wish to grep through many (20,000) text files, each about 1,000,000 lines long, so the faster the better.
I have tried the below code and it just doesn't seem to want to do anything, it doesn't find any matches even after an hour (it should have done by now).
for i in $(find . -name "*.txt"); do grep -Ff firstpart.txt $1; done
Ofir's answer is good. Another option:
find . -name "*.txt" -exec grep -fnFH firstpart.txt {} \;
I like to add the -n for line numbers and -H to get the filename. -H is particularly useful in this case as you could have a lot of matches.
Instead of iterating through the files in a loop, you can just give the file names to grep using xargs and let grep go over all the files.
find . -name "*.txt" | xargs grep $1
I'm not quite sure whether it will actually increase the performance, but it's probably worth a try.
ripgrep is the most amazing tool. You should get that and use it.
To search *.txt files in all directories recursively, do this:
rg -t txt -f patterns.txt
Ripgrep uses one of the fastest regular expression engines out there. It uses multiple threads. It searches directories and files, and filters them to the interesting ones in the fastest way.
It is simply great.
For anyone stuck using grep for whatever reason:
find -name '*.txt' -type f -print0 | xargs -0 -P 8 -n 8 grep -Ff patterns.txt
That tells xargs to -n 8 use 8 arguments per command and to -P 8 run 8 copies in parallel. It has the downside that the output might become interleaved and corrupted.
Instead of xargs you could use parallel which does a fancier job and keeps output in order:
$ find -name '*.txt' -type f -print0 | parallel -0 grep --with-filename -Ff patterns.txt

Is it possible to pipe the results of FIND to a COPY command CP?

Is it possible to pipe the results of find to a COPY command cp?
Like this:
find . -iname "*.SomeExt" | cp Destination Directory
Searching around, I always find this kind of formula, such as from this post:
find . -name "*.pdf" -type f -exec cp {} ./pdfsfolder \;
This raises some questions:
Why can't you just use the | pipe? Isn't that what it's for?
Why does everyone recommend -exec?
How do I know when to use -exec over the pipe |?
There's a little-used option for cp: -t destination -- see the man page:
find . -iname "*.SomeExt" | xargs cp -t Directory
Good question!
Why can't you just use the | pipe? Isn't that what it's for?
You can pipe, of course; xargs is made for these cases:
find . -iname "*.SomeExt" | xargs cp Destination_Directory/
Why does everyone recommend -exec?
The -exec is good because it provides more control of exactly what you are executing. Whenever you pipe there may be problems with corner cases: file names containing spaces or new lines, etc.
How do I know when to use -exec over the pipe |?
It is really up to you and there can be many cases. I would use -exec whenever the action to perform is simple. I am not a very good friend of xargs, I tend to prefer an approach in which the find output is provided to a while loop, such as:
while IFS= read -r result
do
    # do things with "$result"
done < <(find ...)
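If filenames may contain newlines or other odd characters, a null-delimited variant of the same loop (bash-specific; just a sketch) is safer:
while IFS= read -r -d '' result
do
    cp -- "$result" Destination_Directory/   # -- guards against names starting with -
done < <(find . -iname "*.SomeExt" -print0)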
You can use | like below:
find . -iname "*.SomeExt" | while read line
do
    cp "$line" DestDir/
done
Answering your questions:
| can be used to solve this issue, but as seen above it involves more code, and | creates two processes: one for find and another for cp.
Using find's -exec instead handles everything from within the single find command.
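For example (a sketch assuming GNU cp for -t; the + terminator batches many files into each cp invocation):
find . -iname "*.SomeExt" -exec cp -t Destination_Directory {} +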
Try this:
find . -iname "*.SomeExt" -print0 | xargs -0 cp -t Directory
# ........................^^^^^^^..........^^
In case there is whitespace in filenames.
I like the spirit of the response from @fedorqui, but it needed a tweak to work in my bash terminal.
In this version...
find . -iname "*.SomeExt" | xargs cp Destination_Directory/
The cp command incorrectly takes Destination_Directory/ as the first argument. I needed to add a replacement string in order to get xargs to insert the argument in the right position for cp. I used a percent symbol for the replacement string, but you can use anything that doesn't conflict with the input from the pipe. This version works for me.
find . -iname "*.SomeExt" | xargs -I % cp % Destination_Directory/
This SOLVED my problem.
find . -type f | grep '\.pdf' | while read line
do
    cp "$line" REPLACE_WITH_TARGET_DIRECTORY
done
If there are spaces in the filenames, try:
find . -iname "*.ext" > list.txt
cat list.txt | awk 'BEGIN {a="'"'"'"}{print "cp "a$0a" Directory"}' > script.sh
sh script.sh
You can inspect list.txt and script.sh before sh script.sh. Remember to delete the list.txt and script.sh afterwards.
I had some files with parenthesis and wanted a progress bar, so replaced the cat line with:
cat list.txt | awk -v X='"' '{print "rsync -Pa "X$0X" /Volumes/Untitled/"}' > script.sh

Linux command: How to 'find' only text files?

After a few searches from Google, what I come up with is:
find my_folder -type f -exec grep -l "needle text" {} \; -exec file {} \; | grep text
which is very unhandy and outputs unneeded text such as mime type information. Any better solutions? I have lots of images and other binary files in the same folder, with a lot of text files that I need to search through.
I know this is an old thread, but I stumbled across it and thought I'd share my method which I have found to be a very fast way to use find to find only non-binary files:
find . -type f -exec grep -Iq . {} \; -print
The -I option to grep tells it to immediately ignore binary files, and the . pattern along with -q makes it match text files immediately, so it goes very fast. You can change the -print to a -print0 for piping into an xargs -0 or something if you are concerned about spaces (thanks for the tip, @lucas.werkmeister!)
Also the first dot is only necessary for certain BSD versions of find such as on OS X, but it doesn't hurt anything just having it there all the time if you want to put this in an alias or something.
EDIT: As @ruslan correctly pointed out, the -and can be omitted since it is implied.
Based on this SO question:
grep -rIl "needle text" my_folder
Why is it unhandy? If you need to use it often, and don't want to type it every time, just define a bash function for it:
function findTextInAsciiFiles {
    # usage: findTextInAsciiFiles DIRECTORY NEEDLE_TEXT
    find "$1" -type f -exec grep -l "$2" {} \; -exec file {} \; | grep text
}
put it in your .bashrc and then just run:
findTextInAsciiFiles your_folder "needle text"
whenever you want.
EDIT to reflect OP's edit:
if you want to cut out the mime information you could just add a further stage to the pipeline that filters it out. This should do the trick, taking only what comes before the colon: cut -d':' -f1:
function findTextInAsciiFiles {
    # usage: findTextInAsciiFiles DIRECTORY NEEDLE_TEXT
    find "$1" -type f -exec grep -l "$2" {} \; -exec file {} \; | grep text | cut -d ':' -f1
}
find . -type f -print0 | xargs -0 file | grep -P text | cut -d: -f1 | xargs grep -Pil "search"
This is unfortunately not space-safe. Putting it into a bash script makes it a bit easier.
This is space-safe:
#!/bin/bash
if [ ! "$1" ] ; then
    echo "Usage: $0 <search>";
    exit
fi
find . -type f -print0 \
| xargs -0 file \
| grep -P text \
| cut -d: -f1 \
| xargs -i% grep -Pil "$1" "%"
Another way of doing this:
# find . | xargs file | grep "ASCII text"
If you want empty files too:
# find . | xargs file | egrep "ASCII text|empty"
How about this:
$ grep -rl "needle text" my_folder | tr '\n' '\0' | xargs -r -0 file | grep -e ':[^:]*text[^:]*$' | grep -v -e 'executable'
If you want the filenames without the file types, just add a final sed filter.
$ grep -rl "needle text" my_folder | tr '\n' '\0' | xargs -r -0 file | grep -e ':[^:]*text[^:]*$' | grep -v -e 'executable' | sed 's|:[^:]*$||'
You can filter-out unneeded file types by adding more -e 'type' options to the last grep command.
EDIT:
If your xargs version supports the -d option, the commands above become simpler:
$ grep -rl "needle text" my_folder | xargs -d '\n' -r file | grep -e ':[^:]*text[^:]*$' | grep -v -e 'executable' | sed 's|:[^:]*$||'
Here's how I've done it ...
1. Make a small script to test whether a file is plain text
istext:
#!/bin/bash
[[ "$(file -bi "$1")" == *"text"* ]]
2. Use find as before
find . -type f -exec istext {} \; -exec grep -nHi mystring {} \;
Here's a simplified version with extended explanation for beginners like me who are trying to learn how to put more than one command in one line.
If you were to write out the problem in steps, it would look like this:
// For every file in this directory
// Check the filetype
// If it's an ASCII file, then print out the filename
To achieve this, we can use three UNIX commands: find, file, and grep.
find will check every file in the directory.
file will give us the filetype. In our case, we're looking for a return of 'ASCII text'
grep will look for the keyword 'ASCII' in the output from file
So how can we string these together in a single line? There are multiple ways to do it, but I find that doing it in order of our pseudo-code makes the most sense (especially to a beginner like me).
find ./ -exec file {} ";" | grep 'ASCII'
Looks complicated, but not bad when we break it down:
find ./ = look through every file in this directory. The find command prints out the filename of any file that matches the 'expression', or whatever comes after the path, which in our case is the current directory or ./
The most important thing to understand is that everything after that first bit is going to be evaluated as either True or False. If True, the file name will get printed out. If not, then the command moves on.
-exec = this flag is an option within the find command that allows us to use the result of some other command as the search expression. It's like calling a function within a function.
file {} = the command being called inside of find. The file command returns a string that tells you the filetype of a file. Regularly, it would look like this: file mytextfile.txt. In our case, we want it to use whatever file is being looked at by the find command, so we put in the curly braces {} to act as an empty variable, or parameter. In other words, we're just asking for the system to output a string for every file in the directory.
";" = this is required by find and is the punctuation mark at the end of our -exec command. See the manual for 'find' for more explanation if you need it by running man find.
| grep 'ASCII' = | is a pipe. A pipe takes the output of whatever is on its left and uses it as input to whatever is on its right. It takes the output of the find command (a string that is the filetype of a single file) and tests whether it contains the string 'ASCII'. If it does, it returns true.
NOW, the expression to the right of find ./ will return true when the grep command returns true. Voila.
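For instance, a session might look like this (notes.txt is a hypothetical file, shown purely for illustration):
$ file notes.txt
notes.txt: ASCII text
$ find ./ -exec file {} ";" | grep 'ASCII'
./notes.txt: ASCII text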
I have two issues with histumness' answer:
It only lists text files; it does not actually search them as requested. To actually search, use
find . -type f -exec grep -Iq . {} \; -and -print0 | xargs -0 grep "needle text"
It spawns a grep process for every file, which is very slow. A better solution is then
find . -type f -print0 | xargs -0 grep -IZl . | xargs -0 grep "needle text"
or simply
find . -type f -print0 | xargs -0 grep -I "needle text"
This only takes 0.2s compared to 4s for solution above (2.5GB data / 7700 files), i.e. 20x faster.
Also, nobody cited ag (the Silver Searcher) or ack-grep as alternatives. If one of these is available, it is a much better alternative:
ag -t "needle text" # Much faster than ack
ack -t "needle text" # or ack-grep
As a last note, beware of false positives (binary files taken as text files). I have already had false positives using grep/ag/ack, so better to list the matched files first before editing them.
Although it is an old question, I think the info below will add to the quality of the answers here.
When ignoring files with the executable bit set, I just use this command:
find . ! -perm -111
To keep it from recursively entering other directories:
find . -maxdepth 1 ! -perm -111
No need for pipes to mix lots of commands, just the powerful plain find command.
Disclaimer: it is not exactly what OP asked, because it doesn't check if the file is binary or not. It will, for example, filter out bash script files, that are text themselves but have the executable bit set.
That said, I hope this is useful to anyone.
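For example, combining it with a search (just one sketch of how it might be used):
find . -maxdepth 1 -type f ! -perm -111 -exec grep -l "needle text" {} +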
I do it this way:
1) Since there are too many files (~30k) to search through, I generate the text-file list daily for use via crontab, using the command below:
find /to/src/folder -type f -exec file {} \; | grep text | cut -d: -f1 > ~/.src_list &
2) Create a function in .bashrc:
findex() {
    cat ~/.src_list | xargs grep "$*" 2>/dev/null
}
Then I can use the command below to do the search:
findex "needle text"
HTH:)
I prefer xargs
find . -type f | xargs grep -I "needle text"
if your filenames contain spaces or other odd characters, use the -print0/-0 options:
find . -type f -print0 | xargs -0 grep -I "needle text"
Bash example to search for the text "eth0" in all text/ASCII files under /etc:
grep eth0 $(find /etc/ -type f -exec file {} \; | egrep -i "text|ascii" | cut -d ':' -f1)
If you are interested in finding any file type by its magic bytes, using the awesome file utility combined with the power of find, this can come in handy:
$ # Let's make some test files
$ mkdir ASCII-finder
$ cd ASCII-finder
$ dd if=/dev/urandom of=binary.file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.009023 s, 116 MB/s
$ file binary.file
binary.file: data
$ echo 123 > text.txt
$ # Let the magic begin
$ find -type f -print0 | \
xargs -0 -I @@ bash -c 'file "$@" | grep ASCII &>/dev/null && echo "file is ASCII: $@"' -- @@
Output:
file is ASCII: ./text.txt
Legend: $ is the interactive shell prompt where we enter our commands
You can modify the part after && to call some other script or do some other stuff inline as well, i.e. if that file contains a given string, cat the entire file or look for a secondary string in it.
Explanation:
find items that are files.
Make xargs feed each item as a line into a one-liner bash command/script.
file checks the type of file by magic bytes; grep checks if ASCII exists, and if so, then after && your next command executes.
find prints results null-separated; this is good for escaping filenames with spaces and meta-characters.
xargs, using the -0 option, reads them null-separated, and -I @@ takes each record and uses it as a positional parameter/arg to the bash script.
-- for bash ensures whatever comes after it is an argument, even if it starts with - like -c, which could otherwise be interpreted as a bash option.
If you need to find types other than ASCII, simply replace grep ASCII with other type, like grep "PDF document, version 1.4"
find . -type f | xargs file | grep "ASCII text" | awk -F: '{print $1}'
Use the find command to list all files, use the file command to verify they are text (not tar, key), and finally use the awk command to filter and print the result.
How about this:
find . -type f | xargs grep "needle text"

How to list specific type of files in recursive directories in shell?

How can we find specific types of files, i.e. doc/pdf files, present in nested directories?
command I tried:
$ ls -R | grep .doc
but if there is a file named something like alok.doc.txt, the command will display that too, which is obviously not what I want. What command should I use instead?
If you are more comfortable with ls and grep, you can do what you want using a regular expression in the grep command (the ending '$' character indicates that .doc must be at the end of the line; that will exclude "file.doc.txt"):
ls -R | grep "\.doc$"
More information about using grep with regular expressions is in the man page.
ls command output is mainly intended for reading by humans. For advanced querying and automated processing, you should use the more powerful find command:
find /path -type f \( -iname "*.doc" -o -iname "*.pdf" \)
And if you have bash 4.0+:
#!/bin/bash
shopt -s globstar
shopt -s nullglob
for file in **/*.{pdf,doc}
do
    echo "$file"
done
find . | grep "\.doc$"
This will show the path as well.
Some of the other methods that can be used:
echo *.{pdf,docx,jpeg}
stat -c %n * | grep 'pdf\|docx\|jpeg'
We had a similar question. We wanted a list - with paths - of all the config files in the etc directory. This worked:
find /etc -type f \( -iname "*.conf" \)
It gives a nice list of all the .conf file with their path. Output looks like:
/etc/conf/server.conf
But, we wanted to DO something with ALL those files, like grep those files to find a word, or setting, in all the files. So we use
find /etc -type f \( -iname "*.conf" \) -print0 | xargs -0 grep -Hi "ServerName"
to find via grep ALL the config files in /etc that contain a setting like "ServerName" Output looks like:
/etc/conf/server.conf: ServerName "default-118_11_170_172"
Hope you find it useful.
Sid
Similarly, if you prefer using the wildcard character * (not quite like the regex suggestions), you can just use ls with both the -l flag, to list one file per line (like grep), and the -R flag like you had. Then you can specify the files you want to search for with *.doc.
I.e. either
ls -l -R *.doc
or if you want it to list the files on fewer lines.
ls -R *.doc
If you have files with extensions that don't match the file type, you could use the file utility.
find $PWD -type f -exec file -N \{\} \; | grep "PDF document" | awk -F: '{print $1}'
Instead of $PWD you can use the directory you want to start the search in. file even prints out the PDF version.

How do I recursively grep all directories and subdirectories?

How do I recursively grep all directories and subdirectories?
find . | xargs grep "texthere" *
grep -r "texthere" .
The first parameter represents the regular expression to search for, while the second one represents the directory that should be searched. In this case, . means the current directory.
Note: This works for GNU grep, and on some platforms like Solaris you must specifically use GNU grep as opposed to legacy implementation. For Solaris this is the ggrep command.
If you know the extension or pattern of the file you would like, another method is to use --include option:
grep -r --include "*.txt" texthere .
You can also mention files to exclude with --exclude.
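For example (a sketch; the patterns and directory name are just placeholders, and --exclude-dir needs GNU grep):
grep -r --include="*.txt" --exclude="*.min.txt" --exclude-dir=.git "texthere" .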
Ag
If you frequently search through code, Ag (The Silver Searcher) is a much faster alternative to grep, that's customized for searching code. For instance, it's recursive by default and automatically ignores files and directories listed in .gitignore, so you don't have to keep passing the same cumbersome exclude options to grep or find.
I now always use (even on Windows with GoW -- Gnu on Windows):
grep --include="*.xxx" -nRHI "my Text to grep" *
(As noted by kronen in the comments, you can add 2>/dev/null to suppress "Permission denied" output)
That includes the following options:
--include=PATTERN
Recurse in directories only searching file matching PATTERN.
-n, --line-number
Prefix each line of output with the line number within its input file.
(Note: phuclv adds in the comments that -n decreases performance a lot, so you might want to skip that option)
-R, -r, --recursive
Read all files under each directory, recursively; this is equivalent to the -d recurse option.
-H, --with-filename
Print the filename for each match.
-I
Process a binary file as if it did not contain matching data;
this is equivalent to the --binary-files=without-match option.
And I can add 'i' (-nRHIi), if I want case-insensitive results.
I can get:
/home/vonc/gitpoc/passenger/gitlist/github# grep --include="*.php" -nRHI "hidden" *
src/GitList/Application.php:43: 'git.hidden' => $config->get('git', 'hidden') ? $config->get('git', 'hidden') : array(),
src/GitList/Provider/GitServiceProvider.php:21: $options['hidden'] = $app['git.hidden'];
tests/InterfaceTest.php:32: $options['hidden'] = array(self::$tmpdir . '/hiddenrepo');
vendor/klaussilveira/gitter/lib/Gitter/Client.php:20: protected $hidden;
vendor/klaussilveira/gitter/lib/Gitter/Client.php:170: * Get hidden repository list
vendor/klaussilveira/gitter/lib/Gitter/Client.php:176: return $this->hidden;
...
Also:
find ./ -type f -print0 | xargs -0 grep "foo"
but grep -r is a better answer.
globbing **
Using grep -r works, but it may be overkill, especially in large folders.
For more practical usage, here is the syntax which uses globbing syntax (**):
grep "texthere" **/*.txt
which greps only the files matching the selected pattern. It works in shells that support it, such as Bash 4+ or zsh.
To activate this feature, run: shopt -s globstar.
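A minimal session might look like this (bash 4+):
shopt -s globstar            # enable ** recursive globbing
grep -n "texthere" **/*.txt  # searches *.txt files at any depth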
See also: How do I find all files containing specific text on Linux?
git grep
For projects under Git version control, use:
git grep "pattern"
which is much quicker.
ripgrep
For larger projects, the quickest grepping tool is ripgrep which greps files recursively by default:
rg "pattern" .
It's built on top of Rust's regex engine which uses finite automata, SIMD and aggressive literal optimizations to make searching very fast. Check the detailed analysis here.
On POSIX systems, you won't find a -r parameter for grep, and your grep -rn "stuff" . won't run; but if you use the find command, it will:
find . -type f -exec grep -n "stuff" {} \; -print
This holds on Solaris and HP-UX.
If you only want to follow actual directories, and not symbolic links,
grep -r "thingToBeFound" directory
If you want to follow symbolic links as well as actual directories (be careful of infinite recursion),
grep -R "thing to be found" directory
Since you're trying to grep recursively, the following options may also be useful to you:
-H: outputs the filename with the line
-n: outputs the line number in the file
So if you want to find all files containing Darth Vader in the current directory or any subdirectories and capture the filename and line number, but do not want the recursion to follow symbolic links, the command would be
grep -rnH "Darth Vader" .
If you want to find all mentions of the word cat in the directory
/home/adam/Desktop/TomAndJerry
and you're currently in the directory
/home/adam/Desktop/WorldDominationPlot
and you want to capture the filename but not the line number of any instance of the string "cats", and you want the recursion to follow symbolic links if it finds them, you could run either of the following
grep -RH "cats" ../TomAndJerry #relative directory
grep -RH "cats" /home/adam/Desktop/TomAndJerry #absolute directory
Source:
running "grep --help"
A short introduction to symbolic links, for anyone reading this answer and confused by my reference to them:
https://www.nixtutor.com/freebsd/understanding-symbolic-links/
To recursively find the names (with path) of files containing a particular string, use the command below
for UNIX:
find . | xargs grep "searched-string"
for Linux:
grep -r "searched-string" .
find a file on UNIX server
find . -type f -name file_name
find a file on LINUX server
find . -name file_name
just the filenames can be useful too
grep -r -l "foo" .
another syntax to grep a string in all files on a Linux system recursively
grep -irn "string"
the -r indicates a recursive search, looking for the specified string in the given directory and its subdirectories
-i makes the search case-insensitive
-n prints the line number of each match
NB: this prints massive results to the console, so you might need to filter the output by piping it through something like less; it also searches binary files, so you may want to filter some of the results
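For example, a filtered variant might look like this (a sketch: -I skips binary files, 2>/dev/null hides permission errors):
grep -rnI "string" . 2>/dev/null | less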
ag is my favorite way to do this now: github.com/ggreer/the_silver_searcher. It's basically the same thing as ack but with a few more optimizations.
Here's a short benchmark. I clear the cache before each test (cf https://askubuntu.com/questions/155768/how-do-i-clean-or-disable-the-memory-cache )
ryan@3G08$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
3
ryan@3G08$ time grep -r "hey ya" .
real 0m9.458s
user 0m0.368s
sys 0m3.788s
ryan@3G08$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
3
ryan@3G08$ time ack-grep "hey ya" .
real 0m6.296s
user 0m0.716s
sys 0m1.056s
ryan@3G08$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
3
ryan@3G08$ time ag "hey ya" .
real 0m5.641s
user 0m0.356s
sys 0m3.444s
ryan@3G08$ time ag "hey ya" . #test without first clearing cache
real 0m0.154s
user 0m0.224s
sys 0m0.172s
This should work:
grep -R "texthere" *
If you are looking for a specific content in all files from a directory structure, you may use find since it is more clear what you are doing:
find -type f -exec grep -l "texthere" {} +
Note that -l (lowercase L) shows the name of the file that contains the text. Remove it if you instead want to print the match itself. Or use -H to get the file together with the match. All together, other alternatives are:
find -type f -exec grep -Hn "texthere" {} +
Where -n prints the line number.
This is the one that worked for my case on my current machine (git bash on windows 7):
find ./ -type f -iname "*.cs" -print0 | xargs -0 grep "content pattern"
I always forget the -print0 and -0 for paths with spaces.
EDIT: My preferred tool is now instead ripgrep: https://github.com/BurntSushi/ripgrep/releases . It's really fast and has better defaults (like recursive by default). Same example as my original answer but using ripgrep: rg -g "*.cs" "content pattern"
grep -r "texthere" . (notice period at the end)
(^credit: https://stackoverflow.com/a/1987928/1438029)
Clarification:
grep -r "texthere" / (recursively grep all directories and subdirectories)
grep -r "texthere" . (recursively grep these directories and subdirectories)
grep recursive
grep [options] PATTERN [FILE...]
[options]
-R, -r, --recursive
Read all files under each directory, recursively.
This is equivalent to the -d recurse or --directories=recurse option.
http://linuxcommand.org/man_pages/grep1.html
grep help
$ grep --help
$ grep --help |grep recursive
-r, --recursive like --directories=recurse
-R, --dereference-recursive
Alternatives
ack (http://beyondgrep.com/)
ag (http://github.com/ggreer/the_silver_searcher)
Throwing my two cents here. As others already mentioned grep -r doesn't work on every platform. This may sound silly but I always use git.
git grep "texthere"
Even if the directory is not staged, I just stage it and use git grep.
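If you'd rather not stage anything, git grep also has a --no-index mode that searches untracked files and even works outside a repository:
git grep --no-index "texthere"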
Below are the commands to search for a string recursively in Unix and Linux environments.
For UNIX, the command is:
find . -type f -exec grep "string to be searched" {} \;
For Linux, the command is:
grep -r "string to be searched" .
In 2018, you want to use ripgrep or the-silver-searcher because they are way faster than the alternatives.
Here is a directory with 336 first-level subdirectories:
% find . -maxdepth 1 -type d | wc -l
336
% time rg -w aggs -g '*.py'
...
rg -w aggs -g '*.py' 1.24s user 2.23s system 283% cpu 1.222 total
% time ag -w aggs -G '.*py$'
...
ag -w aggs -G '.*py$' 2.71s user 1.55s system 116% cpu 3.651 total
% time find ./ -type f -name '*.py' | xargs grep -w aggs
...
find ./ -type f -name '*.py' 1.34s user 5.68s system 32% cpu 21.329 total
xargs grep -w aggs 6.65s user 0.49s system 32% cpu 22.164 total
On OSX, this installs ripgrep: brew install ripgrep. This installs silver-searcher: brew install the_silver_searcher.
On my IBM AIX server (OS version: AIX 5.2), use:
find ./ -type f -print -exec grep -n -i "stringYouWannaFind" {} \;
This will print out the path/file name and the relative line number in the file, like:
./inc/xxxx_x.h
2865: /** Description : stringYouWannaFind */
Anyway, it works for me :)
For a list of available flags:
grep --help
Returns all matches for the regexp texthere in the current directory, with the corresponding line number:
grep -rn "texthere" .
Returns all matches for texthere, starting at the root directory, with the corresponding line number and ignoring case:
grep -rni "texthere" /
flags used here:
-r recursive
-n print line number with output
-i ignore case
Note that solutions like grep whatever $(find .), which expand the file list on the command line, will run into "Argument list too long" errors when too many files match; find . -type f | xargs grep whatever avoids this, because xargs batches the arguments.
The best bet is grep -r but if that isn't available, use find . -type f -exec grep -H whatever {} \; instead.
I guess this is what you're trying to write
grep myText $(find .)
and this may be something else helpful if you want to find the files grep hit
grep myText $(find .) | cut -d : -f 1 | sort | uniq
For .gz files, to recursively scan all files and directories (change the file type as needed, or put * to match all files):
find . -name \*.gz -print0 | xargs -0 zgrep "STRING"
Just for fun, a quick and dirty search of *.txt files, if the @christangrant answer is too much to type :-)
grep -r texthere . | grep .txt
Here's a recursive (tested lightly with bash and sh) function that traverses all subfolders of a given folder ($1) and using grep searches for given string ($3) in given files ($2):
$ cat script.sh
#!/bin/sh
cd "$1"
loop () {
    for i in *
    do
        if [ -d "$i" ]
        then
            # echo entering "$i"
            cd "$i"
            loop "$1" "$2"
        fi
    done
    if [ -f "$1" ]
    then
        grep -l "$2" "$PWD/$1"
    fi
    cd ..
}
loop "$2" "$3"
Running it and an example output:
$ sh script start_folder filename search_string
/home/james/start_folder/dir2/filename
Get the files matched by a first grep command, then use them as the input files for a second grep: the first command lists the files that contain FIRSTWORD, and the second searches those files for SECONDWORD (use -L instead to get the files that don't contain it).
grep -l -r --include "*.js" "FIRSTWORD" * | xargs grep "SECONDWORD"
grep -l -r --include "*.js" "FIRSTWORD" * | xargs grep -L "SECONDWORD"
grep -l -r --include "*.js" "SEARCHWORD" * | awk -F'/' '{print $NF}' | xargs -I{} sh -c 'echo {}; grep -l -r --include "*.html" -w --include=*.js -e {} *; echo '''
