Bash command to find files that have the same name but not the same extension - linux

I want to find files that do not co-exist with another file extension, i.e. all the .c files that don't have a corresponding .o file.
I tried find $HOME \( -name '*.c' ! -a -name '*.o' \) but it doesn't work.

You can do the following:
Find names of all files
Strip trailing extension, if any (assuming dot is only used before extension)
Sort to group duplicates
List only duplicates
The remaining lines are the base names that occur with more than one extension
find yourdirectory -type f | sed 's#\..*##' | sort | uniq -d
If you are only interested in extensions .c and .o, then confine the find accordingly.
find yourdirectory -type f \( -name '*.c' -or -name '*.o' \) | sed 's#\..*##' | sort | uniq -d
As it turns out, you actually wanted to know (and that should have been your question in the very beginning): "How to find .c files that have no .o file"
find yourdir -name '*.c' | sed 's#..$##' | sort > c-files
find yourdir -name '*.o' | sed 's#..$##' | sort > o-files
diff c-files o-files | grep '^<'
The final grep filters the lines that occur only in the left file (c-files)
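If you'd rather skip the temporary files, the same comparison can be written with comm and process substitution (a sketch assuming bash; yourdir is the same placeholder as above):
comm -23 <(find yourdir -name '*.c' | sed 's#..$##' | sort) <(find yourdir -name '*.o' | sed 's#..$##' | sort)
comm -23 prints only the lines unique to the first input, i.e. the base names that have a .c file but no corresponding .o file.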


Bash - Find directory containing specific logs file

I've created a script to quickly analyze some logs and automatically provide advice to solve problems based on the errors found.
All works as expected.
However, it appears that the folder structure containing these logs can change (depending on the system configuration), and then my script no longer works.
I would like to find a way to locate the directory containing specific files, such as the logs or the appinfo.txt file.
Once obtained, I could use it as a variable and finally solve my problem.
Here is an example:
AppLogDirectory=$(Your_Special_Command_You_Will_HelpMe_To_Find)
grep -i "Error" $AppLogDirectory/esl*.log
Log format is: ESL.randomValue.log
Files analyzed: appinfo.txt, system.txt, etc.
As suggested in the comment section, I edited my original post with more detail to clarify the context; below is an example:
Log files (esl.xxx.tt.ss.log) can be in a random directory, like:
/var/log/ApplicationName/logs/
/opt/ApplicationName/logs/
/var/data/ApplicationName/Extended/logs/
Because of the random directory, I need to find a solution to print the directory names of the files that match the esl*.log pattern (without the esl filename)
Use find and pass the output to xargs with grep, like so; this runs grep on multiple files and prints each match together with the name of the file where the pattern was found:
find /path/to/files /another/path/to/other/files \( -name 'appinfo.txt' -o -name 'system.txt' -o -name 'esl*.log' \) -print0 | xargs -0 grep -i 'Error'
Or simply use -exec ... +, which has the same effect, without the need for xargs:
find /path/to/files /another/path/to/other/files \( -name 'appinfo.txt' -o -name 'system.txt' -o -name 'esl*.log' \) -exec grep -i 'Error' {} +
To find the directories which contain the files that contain the desired pattern, use grep -l to print file names only (not the lines that match), and pipe the results to xargs dirname to print the directory names. If you need the unique dir names, pipe it further to sort -u:
find /path/to/files /another/path/to/other/files \( -name 'appinfo.txt' -o -name 'system.txt' -o -name 'esl*.log' \) -exec grep -il 'Error' {} + | xargs dirname | sort -u
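As an aside, if your grep is GNU grep, its recursive mode can do the name filtering itself with --include; a sketch under that assumption, using the same paths and patterns:
grep -ril 'Error' --include='appinfo.txt' --include='system.txt' --include='esl*.log' /path/to/files /another/path/to/other/files | xargs dirname | sort -u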
SEE ALSO:
GNU find manual
To search for files based on their contents
xargs
Solution found thanks to you, thank you again!
#Ask for the extracted tar.gz folder
read -p "Where did you extract the tar.gz file? " r1
#Directory path where the esl files are located
logpath=$(find "$r1" -name "esl*.log" | xargs dirname | sort -u)
#Search for a value (here "Error") in all esl*.log files
grep 'Error' "$logpath"/esl*.log | awk '{print $8}'
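Note that the script above assumes find returns a single directory. If the logs can live in several places at once, letting find run grep directly avoids that limit (a sketch reusing the same r1 variable and awk field; grep prefixes each line with the file name when given several files, just like the glob version):
find "$r1" -name "esl*.log" -exec grep 'Error' {} + | awk '{print $8}'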

How do I find the number of all .txt files in a directory and all subdirectories, specifically using the find command and the wc command?

So far I have this:
find -name ".txt"
I'm not quite sure how to use wc to find out the exact number of files. When using the command above, all the .txt files show up, but I need the exact number of files with the .txt extension. Please don't suggest using other commands as I'd like to specifically use find and wc. Thanks
Try:
find . -name '*.txt' | wc -l
The -l option to wc tells it to return just the number of lines.
Improvement (requires GNU find)
The above will give the wrong number if any .txt file name contains a newline character. This will work correctly with any file names:
find . -iname '*.txt' -printf '1\n' | wc -l
-printf '1\n' tells find to print just the line 1 for each file name found. This avoids problems with file names having difficult characters.
Example
Let's create two .txt files, one with a newline in its name:
$ mkdir -p dir1/dir2
$ touch dir1/dir2/a.txt $'dir1/dir2/b\nc.txt'
Now, let's run the find command:
$ find . -name '*.txt'
./dir1/dir2/b?c.txt
./dir1/dir2/a.txt
To count the files:
$ find . -name '*.txt' | wc -l
3
As you can see, the answer is off by one. The improved version, however, works correctly:
$ find . -iname '*.txt' -printf '1\n' | wc -l
2
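If your find lacks -printf (it is a GNU extension), a portable sketch of the same idea is to print one empty line per match with -exec echo, since each match then yields exactly one output line regardless of its name:
$ find . -name '*.txt' -exec echo \; | wc -l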
find -type f -name "*.h" -mtime +10 -print | wc -l
This worked out.

How to use the find command to find the directory of a filename and remove duplicates?

I'm using find / -name "*.dbf" to find the directories of all .dbf files.
It gives me the directories and the filenames.
The output should be only the directories with no duplicates. I don't need to see the filenames.
You can pipe the result through dirname and then remove duplicates like this:
find / -name \*.dbf -print0 | xargs -0 -n1 dirname | sort | uniq
Another solution: find / -name "*.dbf" -exec dirname {} \; 2> /dev/null | sort -u
I can understand your question in two ways:
To find only the directories matching the <name_pattern> with no duplicates, you can use the -type option of find, piped into sort | uniq:
find / -name '<name_pattern>' -type d | sort | uniq
To find all the files, but only return the directories including the matching files with no duplicates:
find / -name '<name_pattern>' | perl -pe 's/(.*\/).*$/$1/' | sort | uniq
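If your find is GNU find, the -printf '%h' directive prints just the directory part of each match, so the extra dirname or perl process can be dropped entirely; a sketch under that assumption:
find / -name '*.dbf' -printf '%h\n' | sort -u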

Unix display info about files matching one of two patterns

I'm trying to display, on a Unix system, recursively all the files that start with an a or end with an a, together with some info about them: name, size and last modified.
I tried find . -name "*a" -o -name "a*" and it displays all the files okay, but when I add -printf "%p %s" it displays only one result.
If you want the same action to apply to both patterns, you need to group them with parentheses. Also, you should add a newline to printf, otherwise all of the output will be on one line:
find . \( -name "*a" -o -name "a*" \) -printf "%p %s\n"
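Since the question also asks for the last-modified time, the format string can be extended with GNU find's %T directives (the exact date layout below is just one choice):
find . \( -name "*a" -o -name "a*" \) -printf "%p %s %TY-%Tm-%Td %TH:%TM\n"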
find . -name "*.c" -o -name "*.hh" | xargs ls -l | awk '{print $9,$6,$7,$8,$5}'

Shell command for counting words in files

I want to run a command that will count the number of words in all files (from the selected set of files).
If I do something like find ABG-Development/ -name "*.php" | grep "<?" | wc -l, it searches only the filenames, not the file contents.
And I tried one more way, like
find ABG-Development/ -name "*.php" -exec grep "<?" {} \; | wc -l, but I got an error.
In the above example I need to know how many times "<?" occurs.
Please help.
Use xargs:
find ABG-Development/ -name "*.php" -print0 | xargs -0 grep "<?" | wc -l
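Note that wc -l here counts matching lines, so a line containing "<?" twice is counted once. If every occurrence should count, GNU grep's -o option prints each match on its own line (a sketch under that assumption):
find ABG-Development/ -name "*.php" -print0 | xargs -0 grep -o "<?" | wc -l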
