Linux: Redirecting output of a command to "find"

I have a list of file names as the output of a certain command.
I need to find each of these files in a given directory.
I tried the following command:
ls -R /home/ABC/testDir/ | grep "\.java" | xargs find /home/ABC/someAnotherDir -iname
But it gives me the following error:
find: paths must precede expression: XYZ.java
What would be the right way to do it?

ls -R /home/ABC/testDir/ | grep -F .java |
while read f; do find . -iname "$(basename $f)"; done
You can also use ${f##*/} instead of basename. Or;
find /home/ABC/testDir -iname '*.java*' |
while read f; do find . -iname "${f##*/}"; done
Note that, undoubtedly, many people will object to parsing the output of ls or find without using a null byte as the filename separator, claiming that whitespace in filenames will cause problems. Those people usually ignore newlines in filenames, and their objections can be safely ignored. (As long as you don't allow whitespace in your filenames, that is!)
A better option is:
find /home/ABC/testDir -iname '*.java' -exec find . -iname {} \;
The reason xargs doesn't work is that -iname takes exactly one pattern; xargs appends all the file names at once, so find sees the extra names after the expression and complains that paths must precede it.
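If you would rather keep xargs, a rough sketch (assuming GNU or POSIX xargs, whose -I option runs one command per input line, and file names without embedded newlines) is:
ls -R /home/ABC/testDir/ | grep "\.java" | xargs -I{} find /home/ABC/someAnotherDir -iname {}
Because -I substitutes each whole line into {}, every find invocation receives exactly one pattern, which avoids the "paths must precede expression" error.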

find /home/ABC/testDir -name "*.java"

Related

cp: invalid option -- 'D'

My goal is to find all .pdf files from multiple subfolder structures and then move them to another folder.
For this I have assembled the following.
find /mnt/user/Data/01_Persönliche_Dokumente/01_Firmen -iname \*.pdf -type f | xargs cp -t /mnt/user/Data/01_Persönliche_Dokumente/Paperless_input/
But I get the following error:
root@Tower:/mnt/user/Data/01_Persönliche_Dokumente/01_Firmen# find "/mnt/user/Data/01_Persönliche_Dokumente/01_Firmen" -iname \*.pdf -type f | xargs cp -t "/mnt/user/Data/01_Persönliche_Dokumente/Paperless_input"
cp: invalid option -- 'D'
Try 'cp --help' for more information.
I tried different options and got some help on the Unraid Discord. A friend of mine gave me a hint.
The correct command looks like this:
find "/mnt/user/Data/01_Persönliche_Dokumente/01_Firmen" -iname \*.pdf -type f -print0 | xargs -0 cp -t "/mnt/user/Data/01_Persönliche_Dokumente/Paperless_input"
For the find command, I added -print0, which means:
print the full file name on the standard output, followed by a null
character (instead of the newline character that -print uses). This
allows file names that contain newlines or other types of white space
to be correctly interpreted by programs that process the find output.
For the xargs command, I added -0, which means:
-0 : input items are terminated by a null character instead of whitespace.
In short, each file name reaches cp as a single intact argument, so spaces or dashes inside a name can no longer break how xargs splits the output of find.
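To see why the original pipeline fails, consider a file whose name contains a space followed by a dash (the name and the /tmp/dest target below are made up for illustration):
touch 'Rechnung -Duplikat.pdf'
find . -iname '*.pdf' -type f | xargs cp -t /tmp/dest        # xargs splits the name at the space; cp sees "-Duplikat.pdf" and rejects the unknown option -D
find . -iname '*.pdf' -type f -print0 | xargs -0 cp -t /tmp/dest   # the null-terminated name reaches cp as one intact argument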

How to grep the contents of files listed by the Linux ls or find command

I am running the find . -name '*.txt' command and getting a list of files.
I am getting the output below:
./bsd/contrib/amd/ldap-id.txt
./bsd/contrib/expat/tests/benchmark/README.txt
./bsd/contrib/expat/tests/README.txt
./bsd/lib/libc/softfloat/README.txt
and so on,
Out of these files, how can I run grep to read their contents and keep only the files that contain a certain keyword, e.g. "version"?
xargs is a great way to accomplish this, and it's already been covered.
The -exec option of find is also useful for this. It will perform a command over all files returned from find.
To invoke grep as few times as possible, passing multiple filenames to each call:
find . -name '*.txt' -exec grep -H 'foo' {} +
Alternately, to invoke grep exactly once for each file found:
find . -name '*.txt' -exec grep -H 'foo' {} ';'
In either case, {} is like a placeholder for the values from find; if your shell is zsh, it may be necessary to escape it, as in '{}'.
There are several ways to accomplish this.
If there are non-.txt files which might usefully contain the keyword:
grep -r KEYWORD *
This uses the recursive directory search option of grep.
To search only .txt files:
find . -name '*.txt' -exec grep KEYWORD {} \;
or
find . -name '*.txt' -exec grep KEYWORD {} +
or
find . -name '*.txt' -execdir grep KEYWORD {} +
The first runs grep once for each matching file. The second runs grep far fewer times, accumulating many matched files before each invocation. The third form runs grep roughly once per directory, executing it from inside the directory containing the matched files.
There is usually a function built into find for that, but to be portable across platforms, I typically use xargs. Say you want to find all the XML files in or below the current directory and get a list of each occurrence of 'foo'; you can do this:
find ./ -type f -name '*.xml' -print0 | xargs -0 -n 1 grep -H foo
It should be self-explanatory except for the -print0, which separates filenames with NULs rather than newlines, and the -0, which tells xargs to use those NULs rather than interpreting spaces and quotes as syntax (which can confuse it if filenames contain either).
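If the one-file-per-grep behaviour isn't needed, dropping -n 1 lets xargs batch many file names into each grep call, which is usually faster; -H still guarantees the file name is printed even when a batch happens to contain a single file:
find ./ -type f -name '*.xml' -print0 | xargs -0 grep -H foo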

find -exec doesn't recognize argument

I'm trying to count the total lines in the files within a directory. To do this I am trying to use a combination of find and wc. However, when I run find . -exec wc -l {}\;, I receive the error find: missing argument to -exec. I can't see any apparent issues, any ideas?
You simply need a space between {} and \;
find . -exec wc -l {} \;
Note that if there are any sub-directories below the current location, wc will generate an error message for each of them that looks something like this:
wc: ./subdir: Is a directory
To avoid that problem, you may want to tell find to restrict the search to regular files:
find . -type f -exec wc -l {} \;
Another note: using the -exec option is a good idea. Too often, people pipe commands together expecting the same result; here, for instance, that would be:
find . -type f | xargs wc -l
The problem with piping commands in such a manner is that it breaks if any file name has spaces in it. For instance, if a file were named "a b", wc would receive "a" and then "b" separately, and you would obviously get two error messages: a: no such file and b: no such file.
Unless you know for a fact that your file names never have any spaces in them (or non-printable characters), if you do need to pipe commands together, you need to tell all the tools you are piping together to use the NULL character (\0) as a separator instead of a space. So the previous command would become:
find . -type f -print0 | xargs -0 wc -l
With version 4.0 or later of bash, you don't need your find command at all:
shopt -s globstar
wc -l **/*
There's no simple way to skip directories (which, as Gui Rava pointed out, you might want to do) unless you can differentiate files and directories by name alone. For example, maybe directories never have . in their name, while all the files have at least one extension:
wc -l **/*.*
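If files can't be told apart from directories by name, a small loop that tests each match works too (a sketch, assuming bash 4+ with globstar; it prints one count per file rather than a grand total):
shopt -s globstar
for f in **/*; do
    [[ -f "$f" ]] && wc -l "$f"    # skip directories, count lines only in regular files
done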

How can I search for files in directories that contain spaces in names, using "find"?

How can I search for files in directories that contain spaces in names, using find?
I use this script:
#!/bin/bash
for i in `find "/tmp/1/" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
do
for j in `ls "$i" | grep sh | sed 's/\.txt//g'`
do
find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
done
done
But files and directories whose names contain spaces are not processed. Why?
This will grab all the files that have spaces in them
$ls
more space nospace stillnospace this is space
$find -type f -name "* *"
./this is space
./more space
I don't know how to achieve your goal. But given your actual solution, the problem is not really with find but with the for loops, since spaces are taken as delimiters between items.
find has a useful option for those cases:
from man find:
-print0
True; print the full file name on the standard output, followed by a null character
(instead of the newline character that -print uses). This allows file names
that contain newlines or other types of white space to be correctly interpreted
by programs that process the find output. This option corresponds to the -0
option of xargs.
As the man page says, this matches the -0 option of xargs. Several other standard tools have an equivalent option. You probably have to rewrite your pipeline around those tools in order to cleanly process file names containing spaces.
In addition, see bash "for in" looping on null delimited string variable to learn how to use a while loop with null-terminated arguments.
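The general pattern looks like this (a sketch; adapt the find expression to your own case):
find /tmp/1 -iname '*.txt' -print0 |
while IFS= read -r -d '' f; do
    printf 'found: %s\n' "$f"    # process "$f" here; spaces and even newlines in names are safe
done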
Do it like this:
find . -type f -name "* *"
Instead of ., you can specify the path where you want to find files matching your criteria.
Your first for loop is:
for i in `find "/tmp/1" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
If I understand it correctly, it is looking for all text files in the /tmp/1 directory, and then attempting to remove the file name with the sed command right? This would cause a single directory with multiple .txt files to be processed by the inner for loop more than once. Is that what you want?
Instead of using sed to get rid of the filename, you can use dirname instead. Also, later on, you use sed to get rid of the extension. You can use basename for that.
for i in `find "/tmp/1" -iname "*.txt"` ; do
    path=$(dirname "$i")
    for j in `ls "$path" | grep POD` ; do
        file=$(basename "$j" .txt)
        # Do whatever you want with the file
    done
done
This doesn't solve the problem of having a single directory processed multiple times, but if it is an issue for you, you can use the for loop above to store the file name in an array instead and then remove duplicates with sort and uniq.
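One way to avoid processing the same directory twice is to collect the unique parent directories first (a sketch, assuming GNU find for -printf and bash 4+ for mapfile; like the rest of this approach it breaks on newlines in names, and the array name is arbitrary):
mapfile -t dirs < <(find /tmp/1 -iname '*.txt' -printf '%h\n' | sort -u)
for path in "${dirs[@]}"; do
    echo "processing $path"    # run the inner loop once per directory here
done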
Use a while read loop with null-delimited pathname output from find:
#!/bin/bash
while IFS= read -rd '' i; do
    while IFS= read -rd '' j; do
        find "/tmp/2/" -iname "$j.sh" -exec echo cp '{}' "$i" \;
    done < <(exec find "$i" -maxdepth 1 -mindepth 1 -name '*POD*' -not -name '*.txt' -printf '%f\0')
done < <(exec find /tmp/1 -iname '*.txt' -not -iname '[0-9A-Za-z]*.txt' -print0)
Never use for i in $(find ...) or similar, as it'll fail for file names containing white space, as you saw.
Use find ... | while IFS= read -r i instead.
It's hard to say without sample input and expected output but something like this might be what you need:
find "/tmp/1/" -iname "*.txt" |
while IFS= read -r i
do
i="${i%%[0-9A-Za-z]*\.txt}"
for j in "$i"/*sh*
do
j="${j%%\.txt}"
find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
done
done
The above will still fail for file names that contain newlines. If you have that situation and can't fix the file names, then look into the -print0 option for find and piping it to xargs -0.

Piping find results into grep for fast directory exclusion

I am successfully using find to create a list of all files in the current subdirectory, excluding those in the subdirectory "cache." Here's my first bit of code:
find . -wholename './cach*' -prune -o -print
I now wish to pipe this into a grep command. It seems like that should be simple:
find . -wholename './cach*' -prune -o -print | xargs grep -r -R -i "samson"
... but this is returning results that are mostly from the cache directory. I've tried removing the xargs reference, but that does what you'd expect, running grep on the text of the file names rather than on the files themselves. My goal is to find "samson" in any files that aren't cached content.
I'll probably get around this issue by just using doubled greps in this instance, but I'm very curious about why this one-liner behaves this way. I'd love to hear thoughts on a way to modify it while still using these two commands (as there are speed advantages to doing it this way).
(This is in CentOS 5, btw.)
The wholename match may be the reason why it's still including "cache" files. If you're executing the find command in the directory that contains the "cache" folder, it should work. If not, try changing it to -name '*cache*' instead.
Also, you do not need the -r or -R for your grep, that tells it to recurse through directories - but you're testing individual files.
You can update your command using the piped version, or use a single command:
find . -name '*cache*' -prune -o -print0 | xargs -0 grep -il "samson"
or
find . -name '*cache*' -prune -o -exec grep -iq "samson" {} \; -print
Note, the -l in the first command tells grep to "list the file" and not the line(s) that match. The -q in the second does the same; it tells grep to respond quietly so find will then just print the filename.
You've told grep itself to recurse (twice; -r and -R both do that). Since one of the arguments you're passing is . (the top directory), grep is searching in every file (some of them twice, or even more if they're in subdirectories).
If you're going to use find and grep, do this:
find . -path './cach*' -prune -o -print0 | xargs -0 grep -i "samson"
Using -print0 and -0 makes your script work even with file names that contain spaces or punctuation characters.
However, you probably don't need to bother with find here, since GNU grep is capable of excluding directories:
grep -R --exclude-dir='cach*' -i "samson" .
(This also excludes ./deeply/nested/directory/cache. If you only want to exclude cache directories at the toplevel, use find as you did.)
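If you also want to restrict the search to certain file types without going back to find, GNU grep's --include option combines with --exclude-dir (a sketch; the *.php pattern is just an example):
grep -R --include='*.php' --exclude-dir='cach*' -i "samson" .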
Use the -exec option on find instead of piping them to another command. From there you can use grep "samson" {} \; to look for samson in each file listed.
For example:
find . -wholename './cach*' -prune -o -exec grep "samson" "{}" +
