How do I loop through a directory? I know there is `for f in /var/files; do echo $f; done`. The problem with that is it spits out all the files inside the directory at once. I want to go one by one and be able to do something with the $f variable. I think a while loop would be best suited for that, but I cannot figure out how to actually write one.
Any help would be appreciated.
A simple for loop should work:
for file in /var/*
do
    # whatever you need, with "$file"
done
See bash filename expansion
To write it with a while loop you can do:
ls -f /var | while read -r file; do cmd "$file"; done
The primary disadvantage of this is that cmd is run in
a subshell, which causes some difficulty if you are trying
to set variables. The main advantages are that the shell
does not need to load all of the filenames into memory, and
there is no globbing. When you have a lot of files in
the directory, those advantages are important (that's
why -f is given to ls here: it disables sorting, and in a
large directory ls itself can take tens of seconds to run,
so -f speeds it up appreciably. In such cases `for file in /var/*`
will likely fail with a glob error.)
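If you do need variables set inside the loop to survive it, here is a sketch (assuming bash): process substitution replaces the pipe, so the while body runs in the current shell instead of a subshell.
count=0
while IFS= read -r file; do
    cmd "$file"
    count=$((count + 1))    # unlike in the piped version, this survives the loop
done < <(ls -f /var)
echo "processed $count entries"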
You can go without the loop:
find /path/to/dir -type f -exec /your/first/command {} \; -exec /your/second/command {} \;
HTH
So script2 is:
find /some_directory -type f -not -iname "*.pdf" -exec bash -c './script "$1"' _ {} \; -print > temp_file
while IFS= read -r line
do
    mkdir -p result"$(dirname "$line")"
    cp "$line" result"$(dirname "$line")/$(basename "$line" .txt).pdf"
done < temp_file
rm temp_file
The ./script is `file "$1" | grep -q PDF`
These two combined should find .txt files that are actually PDF files, copy them to a result directory, and rename them with a .pdf extension. The files should end up in result/their/original/directories/file.pdf (if the original was some_directory/their/original/directories/file.txt; also, some directories have spaces in them, hence all the quoting).
It is done, it works, but the question is: how to do it without mkdir?
I've tried many things but none seem to work, and every post I read says it can be done just by using mkdir. But because my professor demands it be done without mkdir, I'm quite sure it can be done. (I don't HAVE to do it, but I'd like to know how it can be done.)
Maybe it can be done in find with piping? (Some argument that would print directories first, so they could be copied one by one, and then the files?) I've spent a lot of time on this and it would be a shame to quit without the correct answer.
You can copy a file while creating the leading target directories with install:
install -D src/file dest/src/file
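Applied to the loop above, install can replace the mkdir/cp pair entirely. A sketch under the same assumptions as script2 (-m 644 is added because install defaults to executable permission bits):
while IFS= read -r line
do
    install -D -m 644 "$line" result"$(dirname "$line")/$(basename "$line" .txt).pdf"
done < temp_file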
So I am trying to write a shell script that will recursively loop through a directory, all its files, and its sub-directories, looking for certain files, and then print the location of these files to a text file.
I know that this is possible using BASH commands such as find, locate, exec, and >.
This is what I have so far:
find <top-directory> -name '*.class' -exec locate {} > location.txt \;
This does not work, though. Can any bash or shell scripting experts help me out, please?
Thank you for reading this.
The default behavior of find (if you don't specify any other action) is to print the filename. So you can simply do:
find <top-directory> -name '*.class' > location.txt
Or if you want to be explicit about it:
find <top-directory> -name '*.class' -print > location.txt
You can avoid the redirection by using find's -fprint option:
find <top-directory> -name '*.class' -fprint location.txt
From the man page:
-fprint file
[...] print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated.
A less preferred way to do it is to use ls:
ls -d $PWD/**/* | grep class
Let's break it down:
ls -d # lists the directory (returns `.`)
ls -d $PWD # lists the directory - but this time $PWD provides the full path
ls -d $PWD/** # lists the directory with the full path and every file directly under it (not recursively) - an effect of the `/**` part
ls -d $PWD/**/* # same as the previous one, only now it recurses into the folders below (achieved by adding `/*` at the end)
A better way of doing it:
After reading this on the recommendation of Charles Duffy, it appears to be a bad idea to use both ls and find this way (the article also says: "find is just as bad as ls in this context"). The reason it's a bad idea is that you can't control the output of ls: for example, you can't configure ls to terminate filenames with NUL. This is problematic because Unix allows all kinds of weird characters in a filename (newline, pipe, etc.), which will "break" ls in ways you can't anticipate.
Better to use a shell script for the task, and it's a pretty simple task too:
Create a file my_script.sh and edit it to contain:
#!/bin/bash
shopt -s globstar    # make ** recurse into subdirectories (bash 4+)
for i in **/*; do
    echo "$PWD/$i"
done
Give it execute permissions (by running: chmod +x my_script.sh).
Run it from the same directory with:
./my_script.sh
and you're good to go!
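A NUL-safe variant of the same listing, as discussed above (a sketch, assuming bash and a find that supports -print0, which GNU and BSD both do): -print0 ends each name with a NUL byte and read -d '' consumes them, so newlines and other odd characters in filenames cannot break the loop.
find . -name '*.class' -print0 | while IFS= read -r -d '' f; do
    printf '%s\n' "$PWD/${f#./}"
done > location.txt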
Hi all, I have some problems with my script. I've read that changing the current directory from within a script is a bit of an issue. Basically, I am looking for each PHP file within a project folder and any of its sub-folders, and I want to change the directory to where that file is and perform a command there. So far, no luck.
function findPHP(){
declare -a FILES
FILES=$(find ./ -name \*.php)
for file in "${FILES[#]}"
do
DIR=`dirname file`
( cd $DIR && doSomethingInThisDir &(...))
done
}
Any help would be greatly appreciated.
You are trying to iterate over FILES as an array, but it only has one element. In order to make the result of your subshell into an array, you can:
FILES=($(find ./ -name \*.php))
Note that it splits file names on spaces, so even though you properly quote below, it won't help. Alternatively, you could just let it split below (i.e. using your existing FILES) and use instead:
for file in $FILES
If you are using bash 4, you may want to have a look at recursive globbing... this would make it a bit easier:
for file in **/*.php
Note that you have to have the globstar shell option set, which you could enable with shopt -s globstar. This way is simpler and won't break on whitespace.
Also, you probably want $file here:
DIR=`dirname "$file"`
Or just use parameter expansion:
DIR=${file%/*}
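Putting those pieces together, a sketch assuming bash 4+ (doSomethingInThisDir is the placeholder from the question):
shopt -s globstar nullglob
for file in **/*.php; do
    dir=$(dirname "$file")                   # also handles files in the top directory
    ( cd "$dir" && doSomethingInThisDir )    # subshell, so the caller's cwd is untouched
done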
There is no reason to use an array, or to store the file list in any way. If your find supports -execdir (e.g. GNU find 4.2.27), then use it. Otherwise, cd in a subshell as you have done:
#!/bin/bash
doSomethingInThisDir() ( cd "$(dirname "$1")"; ... )
export -f doSomethingInThisDir
find . -type f -exec bash -c 'doSomethingInThisDir "$1"' _ {} \;
I have defined the function using () instead of {}, but that is not necessary in this case. Normally, using () causes the function to run in a subshell, but that happens here anyway because find runs a separate process for each file.
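A tiny demo of that () behaviour (showTmp is a made-up name):
showTmp() ( cd /tmp && pwd )
showTmp     # prints /tmp
pwd         # still the original directory; the cd happened in a subshell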
I need to traverse a directory, starting in one directory and going deeper into different sub-directories. However, I also need to have access to each individual file in order to modify it. Is there already a command to do this, or will I have to write a script? Could someone provide some code to help me with this task? Thanks.
The find command is just the tool for that. Its -exec flag or -print0 in combination with xargs -0 allows fine-grained control over what to do with each file.
Example: replace every foo with a bar in all files in /tmp and its subdirectories.
find /tmp -type f -exec sed -i -e 's/foo/bar/g' '{}' ';'
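The -print0 / xargs -0 pairing mentioned above does the same job while keeping filenames with spaces or newlines intact; a sketch:
find /tmp -type f -print0 | xargs -0 sed -i -e 's/foo/bar/g'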
for i in `find` ; do
    if [ -d "$i" ] ; then : ; fi   # do something with a directory
    if [ -f "$i" ] ; then : ; fi   # do something with a file, etc.
done
This will return the whole tree (recursively) under the current directory as a list that the loop steps through (beware: names containing whitespace will be word-split by the backticks).
This can easily be achieved by combining find, xargs and sed (or another file-modifying command).
For example:
$ find /path/to/base/dir -type f -name '*.properties' -print0 | xargs -0 sed -i -e '/^#/d'
This selects all files with the .properties extension. The xargs command feeds the file paths generated by the find command into the sed command, and the sed command deletes every line that starts with # in those files.
Combining commands this way is very flexible.
For example, find has many tests, so you can filter by user name, file size, file path (e.g. only under a /test/ subfolder), or file modification time, as sketched below.
Another dimension of flexibility is how and what to change in your files. For example, the sed command lets you make changes by applying substitutions (specified via regular expressions). Similarly, you could use gzip to compress the files. And so on...
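A few sketches of those filters (the paths, user name and sizes are illustrative, not from the question):
# .properties files under /test owned by alice, over 1 MiB, modified within 7 days
find /test -type f -user alice -size +1M -mtime -7 -name '*.properties'
# a different kind of change: compress every matching file with gzip
find /test -type f -name '*.log' -exec gzip {} +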
You would usually use the find command. On Linux, you have the GNU version, of course; it has many extra (and useful) options beyond POSIX. Either will allow you to execute a command (e.g. a shell script) on the files as they are found.
The exact details of how to make the changes depend on the change you want to make. That is probably best scripted, with find running the script:
POSIX or GNU:
find . -type f -exec your_script '{}' +
This will run your script once per group of files, with their names provided as arguments. If you want it run for one file at a time, replace the + with ';' (or \;).
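For illustration, a minimal sketch of what your_script might look like (the real one would do your actual modification):
#!/bin/sh
# with -exec ... + find passes a batch of names; with \; exactly one
for f in "$@"; do
    printf 'would modify: %s\n' "$f"
done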
I am assuming SearchMe is the name of the example directory you need to traverse completely.
I am also assuming, since it was not specified, that the files you want to modify are all text files. Is this correct?
In such scenario I would suggest using the command:
find SearchMe -type f -exec vi {} \;
If you are not familiar with the vi editor, just use another one (nano, emacs, kate, kwrite, gedit, etc.) and it should work as well.
Bash 4+
shopt -s globstar
for file in **
do
if [ -f "$file" ];then
# do some processing to your file here
# where the find command can't do conveniently
fi
done
I have a folder with more than 5000 images, all with the JPG extension.
What I want to do is recursively add the "thumb_" prefix to all of the images.
I found a similar question: Rename Files and Directories (Add Prefix), but I only want to add the prefix to files with the JPG extension.
One possible solution:
find . -name '*.jpg' -printf "'%p' '%h/thumb_%f'\n" | xargs -n2 echo mv
Principle: find all the needed files, and prepare the arguments for the standard mv command.
Notes:
The arguments for mv are surrounded by ' to allow spaces in filenames.
The drawback is that this will not work with filenames that contain the ' apostrophe itself, like many mp3 files. If you need to move stranger filenames, check below.
The above command is a dry run (it only shows the mv commands with their arguments). For real work, remove the echo preceding mv.
Renaming ANY filename: in the shell you need a delimiter. The problem is that the filename (stored in a shell variable) can itself contain the delimiter, so:
mv $file $newfile     # will fail if a filename contains a space, TAB or newline
mv "$file" "$newfile" # will fail if either filename contains a "
The correct solutions are either:
prepare the filename with proper escaping, or
use a scripting language that easily understands ANY filename.
Preparing the correct escaping in bash is possible with its internal printf and the %q formatting directive (= print quoted). But this solution is long and boring.
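For reference, a sketch of that %q route (bash only; note that a newline inside a filename would still defeat the line-based read here):
find . -name '*.jpg' | while IFS= read -r f; do
    printf 'mv %q %q\n' "$f" "$(dirname "$f")/thumb_$(basename "$f")"
done
# inspect the output, then pipe it to sh to actually perform the renames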
IMHO, the easiest way is to use perl and the NUL-delimited -print0, like this:
find . -name \*.jpg -print0 | perl -MFile::Basename -0nle 'rename $_, dirname($_)."/thumb_".basename($_)'
The above uses perl's power to mangle the filenames and finally renames the files.
Beware of filenames with spaces in (the for ... in ... expression trips over those), and be aware that the result of a find . ... will always start with ./ (and hence a naive prefix gives you names like thumb_./file.JPG, which isn't quite correct).
This is therefore not a trivial thing to get right under all circumstances. The expression I've found to work correctly (with spaces, subdirs and all that) is:
find . -iname \*.JPG -exec bash -c 'mv "$1" "`echo $1 | sed \"s/\(.*\)\//\1\/thumb_/\"`"' -- '{}' \;
Even that can fall foul of certain names (with quotes in) ...
In OS X 10.8.5, find does not have the -printf option. The port that contained rename seemed to depend upon a WebkitGTK development package that was taking hours to install.
This one line, recursive file rename script worked for me:
find . -iname "*.jpg" -print | while read name; do cur_dir=$(dirname "$name"); cur_file=$(basename "$name"); mv "$name" "$cur_dir/thumb_$cur_file"; done
I was actually renaming CakePHP view files with an 'admin_' prefix, to move them all to an admin section.
You can use that same answer, just use *.jpg, instead of just *.
for file in *.JPG; do mv "$file" "thumb_$file"; done
if it's multiple directory levels under the current one:
for file in $(find . -name '*.JPG'); do mv "$file" "$(dirname "$file")/thumb_$(basename "$file")"; done
proof:
jcomeau@intrepid:/tmp$ mkdir test test/a test/a/b test/a/b/c
jcomeau@intrepid:/tmp$ touch test/a/A.JPG test/a/b/B.JPG test/a/b/c/C.JPG
jcomeau@intrepid:/tmp$ cd test
jcomeau@intrepid:/tmp/test$ for file in $(find . -name '*.JPG'); do mv "$file" "$(dirname "$file")/thumb_$(basename "$file")"; done
jcomeau@intrepid:/tmp/test$ find .
.
./a
./a/b
./a/b/thumb_B.JPG
./a/b/c
./a/b/c/thumb_C.JPG
./a/thumb_A.JPG
jcomeau@intrepid:/tmp/test$
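If your find has -execdir (GNU find does), the loop can be avoided entirely. A sketch: the command runs from inside each file's own directory, and ${1#./} strips the ./ prefix that -execdir puts on {}.
find . -name '*.JPG' -execdir sh -c 'mv "$1" "thumb_${1#./}"' _ {} \;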
Use rename for this:
find . -type f -name '*.JPG' -exec rename 's/(\w+)\.JPG$/thumb_$1.JPG/' {} +
For only the jpg files in the current folder:
for f in *.jpg ; do mv "$f" "PRE_$f" ; done