I'm trying to grep folders and make a variable of the result for further need - linux

GET_DIR=$(find ${FIND_ROOT} -type d 2>/dev/null | grep -Eiv ${EX_PATTERN} | grep -Eio ${FIND_PATTERN})
but somehow when I try to print the result, it's empty.
Yet when I run the same grep pipeline directly on the command line, outside the script, I get results.

You could avoid the pipe | and grep by using -name or -iname (case-insensitive) within find, for example:
find /tmp -type d -iname "*foo*"
This will find directories (-type d) that match the pattern *foo*, ignoring case (-iname), in /tmp.
To save the output in a variable you could use:
FOO=$(find /tmp -type d -iname "*foo*")
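To use the variable afterwards, quote it when printing, and read it line by line if each directory needs separate handling. A minimal sketch, assuming one result per line (filenames containing newlines would still break this and call for find -print0 instead):
FOO=$(find /tmp -type d -iname "*foo*")
echo "$FOO"                  # quoting preserves the newlines between results
while IFS= read -r dir; do   # process one directory per line
    echo "found: $dir"
done <<< "$FOO"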
From the find man page:
-name pattern
True if the last component of the pathname being examined matches pattern. Special shell pattern matching
characters ("[", "]", "*", and "?") may be used as part of pattern. These characters may be matched
explicitly by escaping them with a backslash ("\").

Consider using xargs:
GET_DIR=$(find ${FIND_ROOT} -type d 2>/dev/null | xargs grep -Eiv ${EX_PATTERN} | grep -Eio ${FIND_PATTERN})

Related

Count all occurrences of a tag in lots of log files using grep

I need to count the occurrences of a tag, for example "103=16", across lots of files, but list only the files that have one or more occurrences.
I'm using:
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_*oe*.log" -type f -exec grep -F 103=16 -c {} /dev/null \;
which finds the files where the tag is and shows the number of matches, but it also shows the files with 0 occurrences.
It returns:
file1.log:0
file2.log:0
file3.log:6
file4.log:0
Using grep -v :0 to exclude the zeroes hasn't worked for me; it gets the result:
grep: :0: No such file or directory
How can I get only the files where the count is more than 0?
Have you tried piping into grep to filter out the ones with zeroes after the find/-exec?
E.g., this works for me:
find . -type f -iname "TRADX_*oe*.log" -exec grep -cFH "103=16" {} \; | grep -v ":0"
Using awk to do everything in one place:
find . -type f -iname "TRADX_*oe*.log" -exec awk '/103=16/{c++} END { if(c) print FILENAME, c }' {} \;
That is simply how the -c option of grep works:
-c, --count
Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match
option (see below), count non-matching lines.
So it will print the 0 counts as well; the only options are to remove them with another grep -v, or to use awk:
awk '/search_pattern/{f[FILENAME]+=1} END {for(i in f){print i":"f[i]}}' /path/to/files*
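For example, applied to the layout from the question (a sketch; the glob over /opt/FIXLOGS is an assumption about where the logs live). Note that files without a match never enter the array, so no ":0" lines are printed:
awk '/103=16/{f[FILENAME]+=1} END {for(i in f){print i":"f[i]}}' /opt/FIXLOGS/l51prdsrv*/TRADX_*oe*.log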
It worked when I piped into grep after the \;, excluding the zeroes with | grep -v ":0", ending up like this:
find . route -iname "TRADX_*oe*.log" -type f -exec grep -cHF "103=16" {} \; | grep -v ":0"

Find and show information from logs inside a folder in linux

I'm trying to create a little bash script on Linux that tells me whether the tag 103=16 appears anywhere inside a log.
I have multiple folders named, for example, l51prdsrv-api1.nebex.local, l51prdsrv-oe1.nebex.local, etc. Inside those folders are .log files like TRADX_gsoe3.log, TRADX_gseuoe2.log, and so on.
I need to find out whether the tag 103=16 is inside any of those logs.
I'm trying this command
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_" -type f | grep -e 103=16
But what it does is show just the log names; it never looks at the content to see if the tag 103=16 is there.
First of all, you are not searching files of the form TRADX_something.log, but only files which are just named TRADX_ (case-insensitively, so TradX_ would also be found).
Then you are feeding to grep the names of the files, but never look into the content of those files. From the grep man page, you see that the file content can be supplied either via stdin, or by specifying the file name on the command line. In your case, the latter is the way to go. Therefore you can either do a
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_*.log" -type f -exec grep -F 103=16 {} \;
if you are only interested in the matching lines, or a
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_*.log" -type f -exec grep -F 103=16 {} /dev/null \;
if you also want to see the file names where the pattern matches. The reason is that grep is printing the filename only if it sees more than 1 filename on the command line and the /dev/null provides a second dummy file. find replaces the {} by the filename.
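A minimal alternative sketch, assuming GNU or BSD grep: the -H option forces the file name to be printed even when grep sees only one file, so the /dev/null dummy is not needed:
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_*.log" -type f -exec grep -FH 103=16 {} \;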
BTW, I used -F for grep instead of your -e, because you don't seem to use any specific regular expression pattern anyway.
But you don't need find for this task. An alternative would be an explicit loop:
shopt -s nocaseglob # make globbing case-insensitive
shopt -s globstar # turn on ** globbing
for f in {.,/opt/FIXLOGS/l51prdsrv*}/**/tradx_*.log
do
[[ -f $f ]] && grep -F 103=16 "$f" /dev/null
done
While the loop looks more complicated at first glance, it is easier to extend the logic in case you want to do more with the files instead of just grepping the lines, for instance taking specific actions on those files which contain the pattern.
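For instance, a sketch of such an extension (the archive move is a hypothetical follow-up action, not part of the original task):
shopt -s nocaseglob   # make globbing case-insensitive
shopt -s globstar     # turn on ** globbing
for f in {.,/opt/FIXLOGS/l51prdsrv*}/**/tradx_*.log
do
    # grep -q only sets the exit status, which is all we need here
    if [[ -f $f ]] && grep -qF 103=16 "$f"; then
        echo "tag found in $f"
        # mv "$f" /some/archive/   # hypothetical follow-up action
    fi
done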
You are doing:
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_" -type f | grep -e 103=16
I propose you do:
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_" -type f -exec grep -e "103=16" {} /dev/null \;
What's the difference?
find ... -type f
=> gives you a list of files.
When you add | grep -e 103=16, then you perform that on the filenames.
When you add -exec grep ..., then you perform that on the files themselves.
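To see the difference concretely, a sketch with a deliberately misleading file name:
touch "./103=16.log"                                 # empty file whose NAME contains the tag
find . -type f | grep -e 103=16                      # prints ./103=16.log: the name matched
find . -type f -exec grep -e 103=16 {} /dev/null \;  # prints nothing: the contents are empty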

search a string in a file with case insensitive file name

I want to grep for a string in all the files whose names match a particular pattern, case-insensitively.
For example, if I have two files ABC.txt and aBc.txt, then I want something like
grep -i 'test' *ABC*
The above command should look in both files.
You can use find and then grep on the results of that:
find . -iname "*ABC*" -exec grep -i "test" {} \;
Note that this will run grep once on each file found. If you want to run grep once on all the files (in which case you risk running into the command line length limit), you can use a plus at the end:
find . -iname "*ABC*" -exec grep -i "test" {} \+
You can also use xargs to process a really large number of results more efficiently:
find . -iname "*ABC*" -print0 | xargs -0 grep -i test
The -print0 makes find output 0-terminated results, and the -0 makes xargs able to deal with this format, which means you don't need to worry about any special characters in the filenames. However, it is not totally portable, since it's a GNU extension.
If you don't have a find that supports -print0 (for example SVR4), you can still use -exec as above or just
find . -iname "*ABC*" | xargs grep -i test
But then you should be sure your filenames contain no whitespace or quote characters, since plain xargs splits its input on those and would treat pieces of a filename as separate arguments.
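If your xargs is GNU but your find lacks -print0, the -d option is a middle ground (also a GNU extension, so an assumption about your tools): it splits on newlines only, making spaces in filenames safe, though newlines in filenames still are not:
find . -iname "*ABC*" | xargs -d '\n' grep -i test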
You can use find to match the files and have it run grep, which supports regular expressions, to search for the string; for your question, the command would look like this:
find . -name "*ABC*" -exec grep \<test\> {} \;

How to count the number of files whose name contains a vowel

I was trying to code a script that counts the number of files in a directory whose names contain a vowel.
If I use
find $1 -type f | wc -l
I get the number of files in the directory $1, but I do not know how to use grep to count just the ones with a vowel. I was trying something like this:
find $1 -type f | grep -l '[a,e,i,o,u,A,E,I,O,U]' | wc -l
You can use this GNU find command to count all the files with at least one vowel in their name:
find . -maxdepth 1 -type f -iname '*[aeiou]*' -printf ".\n" | wc -l
The glob pattern -iname '*[aeiou]*' will match only filenames containing at least one of a, e, i, o, u (ignoring case).
Remove -maxdepth 1 if you want to count files recursively in subdirectories as well.
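Put together as a small script in the spirit of the question, a sketch (GNU find assumed because of -printf; the recursion matches the question's own find $1 usage):
#!/bin/bash
# count the files under the directory given as $1 whose names contain a vowel
dir=${1:-.}
find "$dir" -type f -iname '*[aeiou]*' -printf ".\n" | wc -l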
If you can accept counting directories:
ls -d *a* *e* *i* *o* *u* *y* *A* *E* *I* *O* *U* *Y* | wc -l
Otherwise:
find $1 -type f | grep -i '[aeiouy]' | wc -l
Your attempt fails for two reasons. First, -l does not make sense if grep is reading from a pipeline, since the purpose of -l is to print the name of the input file that matched, but here the only input is stdin. Second, your syntax is wrong. Try:
... | grep -i '[aeiou]' | ...
Please don't use commas in a bracket expression (the thing in [] brackets); they are matched literally, as ordinary characters.
The best way is to first do a find(1) to get the files you want to scan. Then you need just the base names, since vowels in the directory part of the path should not be counted. Finally, grep with [aeiouAEIOU] keeps only the lines with a vowel in them, and wc(1) counts the lines.
find ${DIRECTORY} -type f -print | sed -e 's#^.*/##' | grep '[aeiouAEIOU]' | wc -l
-type f allows you to select just files (not directories). The sed(1) command edits the output, line by line, eliminating the first part of the name up to the last / character. The grep filters names with at least one vowel and discards the others, and finally wc -l counts the lines.
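With GNU find (an assumption about your platform) you can skip the sed step, since -printf '%f\n' prints the base name directly:
find ${DIRECTORY} -type f -printf '%f\n' | grep '[aeiouAEIOU]' | wc -l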

Unix Command to List files containing string but *NOT* containing another string

How do I recursively view a list of files that has one string and specifically doesn't have another string? Also, I mean to evaluate the text of the files, not the filenames.
Conclusion:
As per comments, I ended up using:
find . -name "*.html" -exec grep -lR 'base\-maps' {} \; | xargs grep -L 'base\-maps\-bot'
This returned files with "base-maps" and not "base-maps-bot". Thank you!!
Try this:
grep -rl <string-to-match> | xargs grep -L <string-not-to-match>
Explanation: grep -lr makes grep recursively (r) output a list (l) of all files that contain <string-to-match>. xargs loops over these files, calling grep -L on each one of them. grep -L will only output the filename when the file does not contain <string-not-to-match>.
The use of xargs in the answers above is not necessary; you can achieve the same thing like this:
find . -type f -exec grep -q <string-to-match> {} \; -not -exec grep -q <string-not-to-match> {} \; -print
grep -q means run quietly but return an exit code indicating whether a match was found; find can then use that exit code to determine whether to keep executing the rest of its options. If -exec grep -q <string-to-match> {} \; returns 0, then it will go on to execute -not -exec grep -q <string-not-to-match> {} \;. If that also returns 0, it will go on to execute -print, which prints the name of the file.
As another answer has noted, using find in this way has major advantages over grep -Rl where you only want to search files of a certain type. If, on the other hand, you really want to search all files, grep -Rl is probably quicker, as it uses one grep process to perform the first filter for all files, instead of a separate grep process for each file.
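Applied to the strings from the question's own conclusion, that would look like this (a sketch):
find . -name "*.html" -exec grep -q 'base-maps' {} \; -not -exec grep -q 'base-maps-bot' {} \; -print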
These answers seem off, as they match files containing BOTH strings. The following command should work better:
grep -l <string-to-match> * | xargs grep -c <string-not-to-match> | grep '\:0'
Here is a more generic construction:
find . -name <nameFilter> -print0 | xargs -0 grep -Z -l <patternYes> | xargs -0 grep -L <patternNo>
This command outputs files whose name matches <nameFilter> (adjust find predicates as you need) which contain <patternYes>, but do not contain <patternNo>.
The enhancements are:
It works with filenames containing whitespace.
It lets you filter files by name.
If you don't need to filter by name (one often wants to consider all the files in current directory), you can strip find and add -R to the first grep:
grep -R -Z -l <patternYes> | xargs -0 grep -L <patternNo>
find . -maxdepth 1 -name "*.py" -exec grep -L "string-not-to-match" {} \;
This command will list all ".py" files in the current directory that don't contain "string-not-to-match".
To match string A while excluding lines that also contain string B or C, I use the following; the quotes allow a search string to contain a space:
grep -r <string A> | grep -v -e <string B> -e "<string C>" | awk -F ':' '{print $1}'
Explanation: grep -r recursively prints all matching lines in the output format
filename: line
Then grep -v excludes from those lines the ones that also contain either string B or string C (each given with -e). Finally, awk prints only the first field (the filename), using the colon as the field separator (-F ':').
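Since a filename is printed once per matching line, you may want to de-duplicate the output; piping through sort -u is one way (a sketch, with the same placeholder strings):
grep -r "string A" | grep -v -e "string B" -e "string C" | awk -F ':' '{print $1}' | sort -u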
