Recursive search for a predefined file name in different folders - Linux

So I have this file structure:
/var/www/html/website1/app/config/database.php
/var/www/html/website2/app/config/database.php
...
What I need is to find a certain string in every database.php file. I've tried:
grep -nrw "string" /var/www/html/*/app/config/*
but it doesn't seem to recognize the path.
I'm just wondering if there is a way to achieve what I'm after.

find /var -maxdepth 5 -mindepth 5 -name database.php -type f | xargs grep -nrw "string"
If you expect newlines or other types of whitespace in the filenames:
find /var -maxdepth 5 -mindepth 5 -name database.php -type f -print0 | xargs -0 grep -nrw "string"
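To see why that matters, here is a minimal illustration (with a made-up file name): a space in the name makes plain xargs split it into two bogus arguments, while the -print0/-0 pair keeps it intact:
$ touch 'data base.php'
$ find . -name '*.php' | xargs grep -nw "string"
grep: ./data: No such file or directory
grep: base.php: No such file or directory
$ find . -name '*.php' -print0 | xargs -0 grep -nw "string"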

If all the files you'll be searching through follow the pattern of the ones in your question, then you don't need a recursive search; you can do this with globbing alone. Your * can exist (and be expanded) anywhere in the path, not just at the end.
grep -n -F -w "string" /var/www/html/*/app/config/database.php
This has the advantage of ONLY looking for the paths that you've specified as important.
Note that since you've said you're searching for a string rather than a regular expression, the -F option may be appropriate.
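For example (an illustrative pattern, not one from the question): with a search string like db.host, plain grep treats the dot as a regex metacharacter, while -F matches the literal characters only:
grep -n 'db.host' database.php     # the . matches any character, so dbXhost would match too
grep -n -F 'db.host' database.php  # matches the literal string db.host only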
If this doesn't find files which you believe should exist, you might try something like this:
find /var/www/html/*/app -name database.php -print
to verify that you're actually looking in the right place for this file.
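With the layout shown in the question, that check should print something along these lines (assuming just the two sites):
/var/www/html/website1/app/config/database.php
/var/www/html/website2/app/config/database.php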

Related

How do I find the number of all .txt files in a directory and all subdirectories using specifically the find command and the wc command?

So far I have this:
find -name "*.txt"
I'm not quite sure how to use wc to find out the exact number of files. When using the command above, all the .txt files show up, but I need the exact number of files with the .txt extension. Please don't suggest using other commands as I'd like to specifically use find and wc. Thanks
Try:
find . -name '*.txt' | wc -l
The -l option to wc tells it to return just the number of lines.
Improvement (requires GNU find)
The above will give the wrong number if any .txt file name contains a newline character. This will work correctly with any file names:
find . -name '*.txt' -printf '1\n' | wc -l
-printf '1\n' tells find to print just the line 1 for each file name found. This avoids problems with file names containing difficult characters.
Example
Let's create two .txt files, one with a newline in its name:
$ mkdir -p dir1/dir2
$ touch dir1/dir2/a.txt $'dir1/dir2/b\nc.txt'
Now, let's run the find command:
$ find . -name '*.txt'
./dir1/dir2/b?c.txt
./dir1/dir2/a.txt
To count the files:
$ find . -name '*.txt' | wc -l
3
As you can see, the answer is off by one. The improved version, however, works correctly:
$ find . -name '*.txt' -printf '1\n' | wc -l
2
find -type f -name "*.h" -mtime +10 -print | wc -l
This worked out.
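If that count also needs to be newline-safe, the -printf trick from above combines cleanly with the extra tests (a sketch; requires GNU find):
find . -type f -name '*.h' -mtime +10 -printf '1\n' | wc -l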

How to grep contents from list of files from Linux ls or find command

I am running the find . -name '*.txt' command and getting a list of files.
I am getting the output below:
./bsd/contrib/amd/ldap-id.txt
./bsd/contrib/expat/tests/benchmark/README.txt
./bsd/contrib/expat/tests/README.txt
./bsd/lib/libc/softfloat/README.txt
and so on,
Out of these files, how can I run grep to read the contents and keep only those files which contain a certain keyword, e.g. "version"?
xargs is a great way to accomplish this, and it's already been covered.
The -exec option of find is also useful for this. It will perform a command over all files returned from find.
To invoke grep as few times as possible, passing multiple filenames to each call:
find . -name '*.txt' -exec grep -H 'foo' {} +
Alternately, to invoke grep exactly once for each file found:
find . -name '*.txt' -exec grep -H 'foo' {} ';'
In either case, {} is like a placeholder for the values from find; if your shell is zsh, it may be necessary to escape it, as in '{}'.
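If you want to see how the two forms differ without actually running grep, prefixing the command with echo prints what would be executed (a quick illustration):
$ find . -name '*.txt' -exec echo grep -H foo {} +    # one long command line, many files
$ find . -name '*.txt' -exec echo grep -H foo {} ';'  # one command line per file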
There are several ways to accomplish this.
If there are non-.txt files which might usefully contain the keyword:
grep -r KEYWORD *
This uses the recursive directory search option of grep.
To search only .txt files:
find . -name '*.txt' -exec grep KEYWORD {} \;
or
find . -name '*.txt' -exec grep KEYWORD {} +
or
find . -name '*.txt' -execdir grep KEYWORD {} +
The first runs grep for each matching file. The second runs grep far fewer times, accumulating many matched files before invoking grep. The third form runs grep once in every directory.
There is usually a function built into find for that, but to be portable across platforms, I typically use xargs. Say you want to find all the xml files in or below the current directory and get a list of each occurrence of 'foo'; you can do this:
find ./ -type f -name '*.xml' -print0 | xargs -0 -n 1 grep -H foo
It should be self-explanatory except for the -print0, which separates filenames with NULs rather than newlines, and the -0, which tells xargs to use those NULs rather than interpreting spaces and quotes as syntax (which can confuse it if filenames contain either).

How to find total size of all files under the ownership of a user?

I'm trying to find out the total size of all files owned by a given user.
I've tried this:
find $myfolder -user $myuser -type f -exec du -ch {} +
But this gives me an error:
find: missing argument to `-exec'
and I don't know how to fix it. Can somebody can help me with this?
You just need to terminate the -exec with \; or +. If you want the totals for each directory, -type d may be what you need:
find $myfolder -user $myuser -type d -exec du -ch {} \;
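If you do want per-file sizes as in the original command, the minimal fix is just adding the missing terminator; + is the better choice here, since with \; du would print a separate total for every single file (a sketch):
find "$myfolder" -user "$myuser" -type f -exec du -ch {} +
Note that for very long file lists find may still run du more than once, giving one total per batch.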
Use:
find $myfolder -user gisi -type f -print0 | xargs -0 du -sh
where user gisi is my cat ;)
Note the option -s for summarize.
Further note that I'm using find ... -print0, which separates filenames with 0 bytes (NUL is one of the few characters not allowed in filenames), and xargs -0, which uses that 0 byte as its delimiter. This makes sure that even exotic filenames won't be a problem.
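If what you actually want is one grand total rather than a size per file, GNU du can read the NUL-separated list directly via --files0-from, which also avoids xargs possibly splitting a long list into several du runs, each with its own total (a sketch, reusing $myfolder and $myuser from the question):
find "$myfolder" -user "$myuser" -type f -print0 | du -ch --files0-from=- | tail -n 1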
Some versions of the find command do not accept + as the terminator of -exec;
use \; instead of +.

Want to find any reference in any file to a certain string in linux [duplicate]

This question already has answers here:
how to find files containing a string using egrep
(7 answers)
Closed 8 years ago.
I am trying to search all .php files and all .sh files for any reference that contains:
'into tbl_free_minutes_mar'
I have command line access to the server but the files may be scattered in different directories.
For all directories everywhere,
find / -type f \( -name '*.php' -o -name '*.sh' \) \
-exec fgrep 'into tbl_free_minutes_mar' {} \+
For fewer directories elsewhere, just give a list of paths instead of /. To just list the matching files, try fgrep -l. If your file names might not always match the wildcards in the -name conditions, maybe scan all files.
find / -type f \( -name \*.php -o -name \*.sh \) -exec grep 'into tbl_free_minutes_mar' {} /dev/null \;
The extra /dev/null argument makes grep print the file name even though it is invoked on one file at a time.
Change find / ... to something less all-encompassing if you know the general area that you want to look in, e.g. find /home ...
Provided /base/path is the path where you want to start looking, this will get you a list of files:
find /base/path -type f -iregex '.*\.\(php\|sh\)$' -exec grep -l 'into tbl_free_minutes_mar' '{}' \;

pipe specific list of gziped log files into zgrep

I'm having trouble getting a list of the lines in a bunch of gzipped Apache access log files. What I want is to get a list of the log files numbered 1 and 2 only, then grep through them and extract the lines with specific matching text.
I originally got this to work just for access log archives numbered 1. The "/pathname" text was the text I was looking for:
zgrep /pathname/ access_*.log.1.gz
Since ls does not support regex, I came up with the following to get a listing from the current directory of the files I want:
find . -maxdepth 1 -type f -regex '\./access.+\.log\.[1|2]\.gz' -printf '%P\n'
find . -maxdepth 1 -type f -regex '\./access.+\.log\.[1|2]\.gz' | sed "s|^\./||"
My problem now is taking that file list output and zgrepping through the files to return lines within those files that match my text. Am I barking up the wrong tree here?
Try:
zgrep /pathname/ access_*.log.{1,2}.gz
Alternatively, use find -exec:
find . -maxdepth 1 -type f -regex '\./access.+\.log\.[1|2]\.gz' -exec zgrep /path/ {} \;
I don't have Apache logs, so I use a similar, but not identical, pattern:
ls /var/log/*.[12].gz
The shell doesn't support regex, but it does support globbing with character classes like [123] or [1-3], as well as brace expansion: {1,2,3} and {1..3}, or even {o..w} and {066..091}.
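You can preview any of these expansions by prefixing echo; brace expansion happens before the command runs, whether or not matching files exist (a quick illustration):
$ echo access_{1..3}.log.gz
access_1.log.gz access_2.log.gz access_3.log.gz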
