find all files with certain extensions then execute - linux

Why does running this command give me the error message "No such file or directory"?
for i in `find ~/desktop -name '*.py'` ; do ./$i ; done

The complete error message makes it much more clear what the problem is:
bash: .//home/youruser/desktop/foo.py: No such file or directory
You can see that there is indeed no such file:
$ .//home/youruser/desktop/foo.py
bash: .//home/youruser/desktop/foo.py: No such file or directory
$ ls -l .//home/youruser/desktop/foo.py
ls: cannot access './/home/youruser/desktop/foo.py': No such file or directory
Here's how you can instead run the file /home/youruser/desktop/foo.py:
$ /home/youruser/desktop/foo.py
Hello World
So to run it in your loop, you can do:
for i in `find ~/desktop -name '*.py'` ; do $i ; done
Here's a better way of doing the same thing:
find ~/desktop -name '*.py' -exec {} \;
or with a shell loop:
find ~/desktop -name '*.py' -print0 | while IFS= read -d '' -r file; do "$file"; done
For an explanation of what ./ is and does, and why it makes no sense here, see this question.
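Note that invoking the scripts directly (as $i or "$file") works only if each .py file has the executable bit set and starts with a shebang line; a quick sketch for the example file (the path follows the error message above):
chmod +x /home/youruser/desktop/foo.py    # give the script the executable bit
head -n 1 /home/youruser/desktop/foo.py   # should show something like #!/usr/bin/env python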

Try find with the -exec option: http://man7.org/linux/man-pages/man1/find.1.html
-exec command ;
Execute command; true if 0 status is returned. All following
arguments to find are taken to be arguments to the command
until an argument consisting of `;' is encountered. The
string `{}' is replaced by the current file name being
processed everywhere it occurs in the arguments to the
command, not just in arguments where it is alone, as in some
versions of find. Both of these constructions might need to
be escaped (with a `\') or quoted to protect them from
expansion by the shell. See the EXAMPLES section for examples
of the use of the -exec option. The specified command is run
once for each matched file. The command is executed in the
starting directory. There are unavoidable security problems
surrounding use of the -exec action; you should use the
-execdir option instead.
-exec command {} +
This variant of the -exec action runs the specified command on
the selected files, but the command line is built by appending
each selected file name at the end; the total number of
invocations of the command will be much less than the number
of matched files. The command line is built in much the same
way that xargs builds its command lines. Only one instance of
`{}' is allowed within the command, and (when find is being
invoked from a shell) it should be quoted (for example, '{}')
to protect it from interpretation by shells. The command is
executed in the starting directory. If any invocation with
the `+' form returns a non-zero value as exit status, then
find returns a non-zero exit status. If find encounters an
error, this can sometimes cause an immediate exit, so some
pending commands may not be run at all. This variant of -exec
always returns true.
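To make the two forms concrete, here is a sketch using chmod as an illustrative command (the paths follow the question's example):
find ~/desktop -name '*.py' -exec chmod +x {} \;   # runs chmod once per matched file
find ~/desktop -name '*.py' -exec chmod +x {} +    # runs chmod once with many file names appended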

The paths returned by the find command are absolute paths, like /home/youruser/desktop/program.py. If you put ./ in front of them, you get paths like .//home/youruser/desktop/program.py, which are resolved relative to the current directory and don't exist.
Replace ./$i with "$i" (the quotes take care of file names with spaces etc.).

You should use $i and not ./$i
I was doing the same thing at this exact moment. I wanted a script to check whether there are any FLAC files in the directory and convert them to Opus.
Here is my solution:
if test -n "$(find ./ -maxdepth 1 -name '*.flac' -print -quit)"
then
    echo "found FLAC files"    # placeholder: do the conversion here
else
    :                          # placeholder: do nothing
fi
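For the conversion itself, the first branch could be a loop like the following sketch; opusenc (from opus-tools) is my assumption for the encoder, and any FLAC-to-Opus converter would do:
for f in ./*.flac; do
    opusenc "$f" "${f%.flac}.opus"   # assumed encoder: writes foo.opus next to foo.flac
done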

Related

Simple Bash Script that recursively searches in subdirs for a certain string

I recently started learning Linux because a CTF contest is coming up in the next few months. The problem I'm struggling with is that I'm trying to make a bash script that starts from a directory and checks whether each piece of content is a directory or another kind of file. If it is a file, image etc., apply strings $f | grep -i 'abcdef'; if it is a directory, cd to that directory and start over. I have C++ experience and I understand the logic, but I can't really make it work. I can't successfully implement the loop that goes through all the subdirectories. All help would be appreciated!
You do not need a loop for this. The find command can do what you are after.
For instance:
find /home -type f -exec sh -c " strings {} | grep abcd " \;
Explanation:
/home is your base directory; it can be anything
-type f: means a regular file
-exec from the man page:
"Execute command; true if 0 status is returned. All
following arguments to find are taken to be arguments to
the command until an argument consisting of ;' is encountered. The string {}' is replaced by the current
file name being processed everywhere it occurs in the
arguments to the command, not just in arguments where it
is alone, as in some versions of find. Both of these
constructions might need to be escaped (with a `') or
quoted to protect them from expansion by the shell. See
the EXAMPLES section for examples of the use of the -exec
option. The specified command is run once for each
matched file. The command is executed in the starting
directory. There are unavoidable security problems
surrounding use of the -exec action; you should use the
-execdir option instead."
If you just want to find the string in files, and you do not have to first find a directory and then a file and then search, you can simply find the text with grep.
Go to the parent directory and execute:
grep -iR "abcd"
Or from any place,
grep -iR "abcd" /var/log/mylogs/
You can also run grep on the results of a find filter:
grep "abcd" $(find . -type f)

No such file or directory when piping. Each command works separately, but not when piping

I have 2 folders: folder_a & folder_b. In each of these folders there are a bunch of files. I am trying to use sed to move all of these files out of these folders and into my current working directory.
My folder structure looks like this:
mytest:
  a:
    1.txt
    2.txt
    3.txt
  b:
    4.txt
    5.txt
The command I am trying to use is:
find . -type d ! -iname '*.*' # find all folders other than root
| sed -r 's/.*/&\/*/' # add '/*' to each of the arguments
| sed -r 'p;s/.*/./' # output: a/* . b/* .
| xargs -n 2 mv # should be creating two commands: 'mv a/* .' and 'mv b/* .'
Unfortunately I get an error:
mv: cannot stat './aaa/*': No such file or directory
I also get the same error when I try this other strategy (using ls instead of mv):
for dir in */; do
    ls $dir;
done;
Even if I use sed to replace the spaces in each directory name with '\ ', or surround the directory names with quotes, I get the same error.
I'm not sure whether these two examples stem from the same misunderstanding of bash, but they both seem to demonstrate my ignorance of how bash passes the output of one command to the input of another.
Can anyone shed some light on this?
Update: Completely rewritten.
As @EtanReisner and @melpomene have noted, mv */* . or, more specifically, mv a/* b/* . is the most straightforward solution, but you state that this is in part a learning exercise, so the remainder of the answer shows an efficient find-based solution and explains the problem with the original command.
An efficient find-based solution
Generally, if feasible, it's best and most efficient to let find itself do the work, without involving additional tools; find's -exec action is like a built-in xargs, with {} representing the path at hand (with terminator \;) / all paths (with +):
find . -type f -exec echo mv -t . {} +
To be safe, this will just print the mv commands that would be executed; remove the echo to actually execute them.
This will execute a single[1] mv command to which all matching files are passed, and -t . moves them all to the current dir.
[1] If the resulting command line is too long (which is unlikely), it is split up into multiple commands, just as with xargs.
Operating on files (-type f) bypasses the need for globbing, as find will then enumerate all files for you (it also bypasses the need to exclude . explicitly).
Note that this solution works on entire subtrees, not just (immediate) subdirectories.
It's tempting to consider turning on Bash 4's globstar option and using mv */** ., but that won't work, because it will attempt to move directories as well, not just the files in them.
A caveat re -exec with +: it only works if {} - the placeholder for all paths - is the token immediately before the +.
Since you're on Linux, we can satisfy this condition by specifying the target folder for mv with option -t before the {}; on BSD-based systems such as OSX, you could not do that, because mv doesn't support -t there, so you'd have to use terminator \;, which means that mv is called once for every path, which is obviously much slower.
Why your command didn't work:
As @EtanReisner points out in a comment, xargs invokes the command specified without (implicitly) involving a shell, so globbing won't work; you can verify this with the following command:
echo '*' | xargs echo # -> '*' - NO globbing
If we leave the globbing issue aside, additional work would have been necessary to make your xargs command work correctly with folder names with embedded spaces (or other shell metacharacters):
find . -mindepth 1 -type d |
sed -r "s/.*/'&'\/* ./" | # -> '<input-path>'/* . (including single-quotes)
xargs -n 2 echo mv # NOTE: still won't work due to lack of globbing
Note how the (combined) sed command now produces a single output line '<input-path>'/* ., with the input path enclosed in embedded single-quotes, which is required for xargs to recognize <input-path> as a single argument, even if it contains embedded spaces.
(If your filenames contain single-quotes, you'd have to do more work; also note that since now all arguments for a given dir. are on a single line, you could use xargs -L 1 ....)
Also note how -mindepth 1 (only process paths at the subdirectory level or below) is used to skip processing of . itself.
The only way to make globbing happen is to get the shell involved:
find . -mindepth 1 -type d |
sed -r "s/.*/'&'\/* ./" | # -> '<input-path>'/* . (including single-quotes)
xargs -I {} sh -c 'echo mv {}' # works, but is inefficient
Note the use of xargs' -I option to treat each input line as its own argument ({} is a self-chosen placeholder for the input).
sh -c invokes the (default) shell to execute the resulting command, at which globbing does happen.
However, overall, this is quite inefficient:
A pipeline with 3 segments is used.
A shell instance is invoked for every input path, which in turn calls the mv utility.
Compare this to the efficient find-only solution above, which (typically) creates only 2 processes in total.
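If you nonetheless want a find-to-xargs pipeline, a sketch that stays safe with unusual file names uses NUL delimiters and lets find enumerate the files, so no globbing is needed (-mindepth 2 is my assumption for "only files inside subdirectories"):
find . -mindepth 2 -type f -print0 | xargs -0 mv -t .    # GNU mv -t, as discussed above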

Find the name of subdirectories and process files in each

Let's say /tmp has subdirectories /test1, /test2, /test3 and so on,
and each has multiple files inside.
I have to run a while loop or for loop to find the name of the directories (in this case /test1, /test2, ...)
and run a command that processes all the files inside of each directory.
So, for example,
I have to get the directory names under /tmp which will be test1, test2, ...
For each subdirectory, I have to process the files inside of it.
How can I do this?
Clarification:
This is the command that I want to run:
find /PROD/140725_D0/ -name "*.json" -exec /tmp/test.py {} \;
where 140725_D0 is an example of one subdirectory to process - there are multiples, with different names.
So, by using a for or while loop, I want to find all subdirectories and run a command on the files in each.
The for or while loop should iteratively replace the hard-coded name 140725_D0 in the find command above.
You should be able to do this with a single find command with an embedded shell command:
find /PROD -type d -execdir sh -c 'for f in *.json; do /tmp/test.py "$f"; done' \;
Note: -execdir is not POSIX-compliant, but the BSD (OSX) and GNU (Linux) versions of find support it; see below for a POSIX alternative.
The approach is to let find match directories, and then, in each matched directory, execute a shell with a file-processing loop (sh -c '<shellCmd>').
If not all subdirectories are guaranteed to have *.json files, change the shell command to for f in *.json; do [ -f "$f" ] && /tmp/test.py "$f"; done
Update: Two more considerations; tip of the hat to kenorb's answer:
By default, find processes the entire subtree of the input directory. To limit matching to immediate subdirectories, use -maxdepth 1[1]:
find /PROD -maxdepth 1 -type d ...
As stated, -execdir - which runs the command passed to it in the directory currently being processed - is not POSIX compliant; you can work around this by using -exec instead and by including a cd command with the directory path at hand ({}) in the shell command:
find /PROD -type d -exec sh -c 'cd "{}" && for f in *.json; do /tmp/test.py "$f"; done' \;
[1] Strictly speaking, you can place the -maxdepth option anywhere after the input file paths on the find command line - as an option, it is not positional. However, GNU find will issue a warning unless you place it before tests (such as -type) and actions (such as -exec).
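As a further hardening (my own tweak, not part of the answer above), you can pass the directory to sh as a positional parameter instead of splicing {} into the command string, which avoids problems with directory names containing quotes or $:
find /PROD -type d -exec sh -c 'cd "$1" && for f in *.json; do /tmp/test.py "$f"; done' sh {} \;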
Try the following usage of find:
find . -type d -exec sh -c 'cd "{}" && echo Do some stuff for {}, files are: $(ls *.*)' ';'
Use -maxdepth if you'd like to limit your directory levels.
You can do this using bash's subshell feature like so
for i in /tmp/test*; do
    # don't do anything if there's no /test directory in /tmp
    [ "$i" != "/tmp/test*" ] || continue
    for j in "$i"/*.json; do
        # don't do anything if there's nothing to run
        [ "$j" != "$i/*.json" ] || continue
        (cd "$i" && ./file_to_run)
    done
done
When you wrap a command in ( and ), it starts a subshell to run the command. A subshell is much like starting another instance of bash, except that it is slightly cheaper to start.
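A quick illustration of the isolation a subshell gives you (directory names are just examples):
pwd                         # e.g. /tmp
( cd /tmp/test1 && pwd )    # prints /tmp/test1
pwd                         # still /tmp: the cd only affected the subshell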
You can also simply ask the shell to expand the directories/files you need, e.g. using the xargs command:
echo /PROD/*/*.json | xargs -n 1 /tmp/test.py
or even using your original find command:
find /PROD/* -name "*.json" -exec /tmp/test.py {} \;
Both commands will process all JSON files contained in any subdirectory of /PROD.
Another solution is to slightly change the Python code inside your script so that it accepts and processes multiple files.
For example, if your script contains something like:
def process(fname):
    print 'Processing file', fname

if __name__ == '__main__':
    import sys
    process(sys.argv[1])
you could replace the last line with:
for fname in sys.argv[1:]:
    process(fname)
After this simple modification, you can call your script this way:
/tmp/test.py /PROD/*/*.json
and have it process all the desired JSON files.

Linux command output as a parameter of another command

I would like to pass the list of results of one command as parameters to another command. I have found some other pages:
How to display the output of a Linux command on stdout and also pipe it to another command?
Use output of bash command (with pipe) as a parameter for another command
but they seem to be more complex.
I just would like to copy a file to every result of a call to the Linux find command.
What is wrong here?
find . -name myFile 2>&1 | cp /home/myuser/myFile $1
Thanks
This is what you want:
find . -name myFile -exec cp /home/myuser/myFile {} ';'
A breakdown / explanation of this:
find: invoking the find command
.: start search from current working directory.
Since no depth flags are specified, this will search recursively for all subfolders
-name myFile: find files with the explicit name myFile
-exec: for the search results, perform additional commands with them
cp /home/myuser/myFile {}: copies /home/myuser/myFile over each result returned by find; think of {} as where each search result goes.
';': terminates the -exec command; it is quoted so the shell doesn't interpret it
There are a couple of ways to solve this, depending on whether you need to worry about files with spaces or other special characters in their names.
If none of the filenames have spaces or special characters (they consist only of letters, numbers, dashes, and underscores), then the following is a simple solution that will work. You can use $(command) to execute a command and substitute the results into the arguments of another command. The shell will split the result on spaces, tabs, or newlines, and the for loop will assign each value to $f in turn and run the command on each value.
for f in $(find . -name myFile)
do
    cp something $f
done
If you do have spaces or tabs, you could use find's -exec option. You pass -exec command args, putting {} where you want the filename to be substituted, and ending the arguments with a ;. You need to quote the {} and ; so that the shell doesn't interpret them.
find . -name myFile -exec cp something "{}" \;
Sometimes -exec is not sufficient. For example, in this question, they wanted to use Bash parameter expansion to compute the filename. In order to do that, you need to pass -exec bash -c 'your command', but then you will run into quoting problems with the {} substitution. To solve this, you can use -print0 from find to print the results delimited with null characters (which are invalid in filenames), and pipe it to a while read loop that splits parameters on nulls:
find . -name myFile -print0 | (while read -r -d $'\0' f; do
    cp something "$f"
done)
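In bash you can also keep the while loop in the current shell by feeding it with process substitution instead of a pipe (a bash-specific sketch with the same behaviour):
while IFS= read -r -d '' f; do
    cp something "$f"
done < <(find . -name myFile -print0)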
The pipe sends the output of one program to the input of another. cp does not read from its standard input; it only uses the arguments on the command line.
You want to either use xargs with the pipe or find's -exec action instead of a pipe.
find . -name myFile 2>&1 | xargs -I {} cp /home/myuser/myFile {}
Note: option -I {} defines {} as the placeholder; you could alternatively use some other placeholder if it conflicts with the command to be executed.
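If the file names may contain whitespace, a safer variant of the same pipeline (a sketch) uses NUL delimiters:
find . -name myFile -print0 | xargs -0 -I {} cp /home/myuser/myFile {}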

renaming with find

I managed to find several files with the find command.
The files are of the form file_sakfksanf.txt, file_afsjnanfs.pdf, file_afsnjnjans.cpp.
Now I want to rename them, using rename and the -exec action, to
mywish_sakfksanf.txt, mywish_afsjnanfs.pdf, mywish_afsnjnjans.cpp
so that only the prefix is changed. I have been trying for some time, so don't blame me for being stupid.
If you read through the -exec section of the man pages for find you will come across the {} string that allows you to use the matches as arguments within -exec. This will allow you to use rename on your find matches in the following way:
find . -name 'file_*' -exec rename 's/file_/mywish_/' {} \;
From the manual:
-exec command ;
Execute command; true if 0 status is returned. All following
arguments to find are taken to be arguments to the command until an
argument consisting of `;' is encountered. The string `{}' is replaced
by the current file name being processed everywhere it occurs in the
arguments to the command, not just in arguments where it is alone, as
in some versions of find. Both of these constructions might need to
be escaped (with a `\') or quoted to protect them from expansion by
the shell. See the EXAMPLES section for examples of the use of the
-exec option. The specified command is run once for each matched file. The command is executed in the starting directory. There are
unavoidable security problems surrounding use of the -exec action;
you should use the -execdir option instead.
Although you asked for a find/exec solution, as Mark Reed suggested, you might want to consider piping your results to xargs. If you do, make sure to use the -print0 option with find and either the -0 or --null option with xargs to avoid unexpected behaviour resulting from whitespace or shell metacharacters appearing in your file names. Also, consider using the + version of -exec (also in the manual): it is in the POSIX spec for find and should therefore be more portable if you want to run your command elsewhere (not always true); it also builds its command line in a way similar to xargs, which should result in fewer invocations of rename.
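For example, a sketch of both alternatives mentioned above, reusing the rename expression from the first answer:
find . -name 'file_*' -print0 | xargs -0 rename 's/file_/mywish_/'    # xargs form
find . -name 'file_*' -exec rename 's/file_/mywish_/' {} +            # POSIX + form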
I don't think there's a way you can do this with just find; you'll need to create a script:
#!/bin/bash
NEW=$(echo "$1" | sed -e 's/file_/mywish_/')
mv "$1" "${NEW}"
Then you can:
find ./ -name 'file_*' -exec my_script {} \;
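Remember to make the script executable and to give find a path it can execute (my_script is the answer's placeholder name):
chmod +x my_script
find ./ -name 'file_*' -exec ./my_script {} \;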
