Hi all, I have some problems with my script. I've read that changing the current directory from within a script is a bit of an issue. Basically, I am looking for PHP files within a project folder and any sub-folders in it, and I want to change the directory to where each file is and perform a command there. So far no luck.
function findPHP(){
    declare -a FILES
    FILES=$(find ./ -name \*.php)
    for file in "${FILES[@]}"
    do
        DIR=`dirname file`
        ( cd $DIR && doSomethingInThisDir &(...))
    done
}
Any help would be greatly appreciated.
You are trying to iterate over FILES as an array, but it only has one element. In order to make the result of your subshell into an array, you can:
FILES=($(find ./ -name \*.php))
Note that this splits file names on whitespace, so even though you properly quote "${FILES[@]}" below, file names containing spaces will already have been broken apart. Alternatively, you could just let the splitting happen below (i.e. keeping your existing FILES) and use this instead:
for file in $FILES
If you are using bash 4, you may want to have a look at recursive globbing... this would make it a bit easier:
for file in **/*.php
Note that you have to have the globstar shell option set, which you could enable with shopt -s globstar. This way is simpler and won't break on whitespace.
Also, you probably want $file here:
DIR=`dirname $file`
Or just use parameter expansion:
DIR=${file%/*}
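Putting the pieces together, a minimal sketch of the corrected loop (doSomethingInThisDir stands in for whatever you actually want to run):

#!/bin/bash
shopt -s globstar                 # bash 4+: let ** match across directories
for file in **/*.php; do
    DIR=$(dirname "$file")
    ( cd "$DIR" && doSomethingInThisDir )   # subshell, so the cd does not leak out
done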
There is no reason to use an array, or to store the file list in any way. If your find supports -execdir (e.g. GNU find 4.2.27), then use it. Otherwise, cd in a subshell as you have done:
#!/bin/bash
doSomethingInThisDir() ( cd "$(dirname "$1")"; ... )
export -f doSomethingInThisDir
find . -type f -exec bash -c 'doSomethingInThisDir "$1"' _ {} \;
I have defined the function using () instead of {}, but that is not necessary in this case. Normally, using () causes the function to run in a subshell, but that happens here anyway because find runs a separate process for each file.
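For comparison, the -execdir route mentioned above would look something like this (a sketch; it needs a find with -execdir, e.g. GNU findutils 4.2.27 or later, and reuses the exported function):

# find itself changes into each file's directory before running the command
find . -type f -execdir bash -c 'doSomethingInThisDir "$1"' _ {} \;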
Related
I have a script called summarize.sh which produces a summary of the files/dirs inside a directory. I would like to have it run recursively down the whole tree from the top. What's a good way to do this?
I have tried to loop it with a for loop with
for dir in */; do
    cd $dir
    ./summarize.sh
    cd ..
done
however it returns ./summarize.sh: No such file or directory
Is it because I am not moving the script as I run it? I am not very familiar with unix directories.
You can recursively list files using find . -type f and make your script take the file it should operate on as its first argument, so you can do find . -type f -exec myScript.sh {} \;
If you want directories only, use find . -type d instead, or if you want both use just find . without restriction.
An additional option is filtering by name, e.g. find . -name '*.py'
Finally, if you do not want to recurse down the directory structure, i.e. only summarize the top level, you can use the -maxdepth 1 option (placed before the other tests, since it is a global option), so something like find . -maxdepth 1 -type d -exec myScript.sh {} \;.
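That assumes summarize.sh has been adapted to take its target as the first argument; a minimal sketch of that change (the summary logic itself is elided):

#!/bin/bash
# Hypothetical summarize.sh that accepts its target directory as $1
dir=${1:-.}              # default to the current directory
cd -- "$dir" || exit 1
# ... produce the summary for "$dir" here ...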
The issue is that you are changing to a different directory with the cd command while your summarize.sh script is not located in these directories. One possible solution is to use an absolute path instead of a relative one. For example, change:
./summarize.sh
to something like:
/path/to/file/summarize.sh
Alternatively, with the example code as given, you can also use a relative path pointing to the parent directory like this:
../summarize.sh
Try this code if you are running Bash 4.0 or later:
#! /bin/bash -p
shopt -s nullglob # Globs expand to nothing when they match nothing
shopt -s globstar # Enable ** to expand over the directory hierarchy
summarizer_path=$PWD/summarize.sh
for dir in **/ ; do
cd -- "$dir"
"$summarizer_path"
cd - >/dev/null
done
shopt -s nullglob avoids an error in case there are no directories under the current one.
The summarizer_path variable is set to an absolute path for the summarize.sh program. That is necessary to allow it to be run in directories other than the current one (./summarize.sh only works from the directory that contains the script).
Use cd -- ... to avoid problems if any directory name begins with '-'.
cd - >/dev/null to cd to the previous directory, and throw away its path when it is output by cd -.
Shellcheck issues several warnings about the code above, all to do with the use of cd. I'd fix them for "real" code.
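For reference, here is the loop with those cd warnings addressed, bailing out rather than continuing in the wrong directory:

for dir in **/ ; do
    cd -- "$dir" || exit 1
    "$summarizer_path"
    cd - >/dev/null || exit 1
done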
I want to copy executable files from one directory to another.
The source directory includes all sorts of files I don't care about (build artifacts). I want to grab only the executable files, using a bash script that works on both OS X and Linux.
By executable, I mean a file that has the executable permission, and would pass test -x $filename.
I know I can write some python script but then I would be introducing a dependency on python (just to copy files!) which is something I really want to avoid.
Note: I've seen a couple of similar questions, but the answers seem to only work on Linux (as the questions specifically asked about Linux). Please do not mark this as duplicate unless the duplicate question is indeed about cross-platform copying of executable files only.
Your own answer is conceptually elegant, but slow, because it creates at least one child process for every input file (test), plus an additional one for each matching file (cp).
Here's a more efficient bash alternative that:
builds up an array of matching input files in shell code,
and then copies them using a single invocation of cp.
#!/usr/bin/env bash
exeFiles=()
for f in "$src_dir"/*; do [[ -x $f && -f $f ]] && exeFiles+=( "$f" ); done
cp "${exeFiles[#]}" "$dest_dir/"
exeFiles=() initializes the array in which to store the matching filenames.
for f in "$src_dir"/* loops over all files and directories located directly in $scr_dir; note how * must be unquoted for globbing (filename expansion) to occur.
[[ -x $f && -f $f ]] determines whether the item at hand is executable (-x) and a (regular) file -f; note that double-quoting variable references inside [[ ... ]] is (mostly) optional.
exeFiles+=( "$f" ) appends a new element to the array
"${exeFiles[#]}" refers to the resulting array as a whole and robustly expands to the array elements as individual arguments - see Bash arrays.
After some experimentation, this seems to work on both OS X and Ubuntu
find "$src_dir" -maxdepth 1 -type f -exec test -x {} \; -exec cp {} "$dest_dir/" \;
Note that the -maxdepth 1 is specific to my use case where I don't care about recursively going through all the directories.
-type f is necessary because directories also count as executables
I pass two -exec flags. -exec not only executes the command but also acts as a filter: if the command returns a non-zero exit code, the file is filtered out.
The way to use -exec is to write out whatever command you want, use {} to stand for the current file, and terminate with \;
The first -exec returns a success exit code only if the file is executable.
The second -exec performs the copy, but it's not executed if the first -exec fails.
Let's say you have a first.sh file in a directory: "/home/userbob/scripts/foo/". Basically I would like to know how to loop through specific directories, each time going back up to a higher level directory and repeating.
The .sh file has something like this pseudocode:
#!/bin/bash
curdi={$PWD} #where the first.sh file sits on the server
FOLDERS="$curdi/waffles/inner/
$curdi/pancakes/inner/
$curdi/bagels/inner/"
for f in $FOLDERS
do
cd $f
cp innerofinner/* .
cd $curdi
done
The idea is to somehow copy all the contents of /home/userbob/scripts/foo/waffles/inner/innerofinner to /home/userbob/scripts/foo/waffles/inner/
(and basically repeating just with the path having pancakes, bagels.etc.)
Can't do it for all directories (*) under /home/userbob/scripts/foo/ because there are some that I don't want to copy.
This should do it:
for name in waffles pancakes bagels
do
cp "$curdi/$name/inner/innferofinner/"* "$curdi/waffles/inner"
done
Walking file trees? Sounds like a job for find!
#!/usr/bin/env bash
# only environment variables should be all-caps
dirs=({bagels,pancakes}/inner)
find "${dirs[#]}" -type d -maxdepth 1 -mindepth 1 -name innerofinner -execdir bash -c 'cp "$1"/* .' -- {} \;
I did a partial path and assumed a working directory of /home/userbob/scripts/foo. An absolute path would work, too, and would look like
dirs=(/home/userbob/scripts/foo/{bagels,pancakes}/inner)
This finds all directories exactly one level below the listed directories that are named "innerofinner" and, in their parent directories, executes bash with a simple cp script.
If you're wondering how this works, read below.
The dirs=() syntax creates an empty array named dirs. dirs=(a b) creates an array with a at index 0 and b at index 1. Any whitespace-delimited string will work here. In a shell script, {a,b,c} expands to a b c, while A{a,b,c}B expands to AaB AbB AcB. So specifying {bagels,pancakes}/inner is just a way to say both bagels/inner and pancakes/inner without having to type as much.
A variable in bash can be expanded with $foo or with ${foo}; these are the same. An array in shell can be expanded to all of its elements with ${foo[@]}, delimited by spaces (if you know Perl or PHP this will make some sense), and quoting the expansion (always a good idea in shell!) prevents spaces inside the elements from being processed again by the shell. Thus, "${dirs[@]}" becomes bagels/inner pancakes/inner.
Knowing this, we see that the find command has become find bagels/inner pancakes/inner -mindepth 1 -maxdepth 1 -type d -name innerofinner, and if you execute this it will return exactly two lines: the full paths to both innerofinner directories. All we want now is to do something for each one, which -execdir does nicely.
Use a recursive function or invoke the script recursively.
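A minimal sketch of the recursive-function approach (the script path is a placeholder; point it at whatever you want run in each directory):

#!/bin/bash
script=/path/to/your-script.sh       # absolute path, so it works from any directory

walk() {
    local dir
    ( cd -- "$1" && "$script" )      # run the script at this level in a subshell
    for dir in "$1"/*/; do
        [ -d "$dir" ] && walk "$dir" # then recurse into each subdirectory
    done
}

walk .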
I am not sure if I understand your problem statement correctly. Your pseudocode seems good. But I see a problem with the following line.
curdi={$PWD}
It does not give you the directory where the script resides but gives the directory you are in. If your script directory is in the path and you are running the script from your home directory then $curdi would point to your home directory and not the directory where your script resides. This will lead to undesired results.
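If what you really want is the directory the script itself lives in, a common idiom (a sketch, assuming bash) derives it from BASH_SOURCE rather than from $PWD:

# Directory containing the running script, regardless of where it was invoked from
script_dir=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)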
Incidentally, if you really wanted to do it in the way that your pseudo-script attempts it, you'd do it like this
#!/usr/bin/env bash
for f in "$PWD"/{waffles,pancakes,bagels}/inner ; do
cd "$f"
cp innerofinner/* .
# if you know for sure that it's one level up
cd ..
done
Presuming that $PWD is a good enough indicator of "current" directory for you. Me, I'd pass it in to the script.
#!/usr/bin/env bash
base="${1-$PWD}"
for f in "$base"/{waffles,pancakes,bagels}/inner ; do
cd "$f"
cp innerofinner/* .
cd ..
done
and call it like
breakfast.sh /home/userbob/scripts/foo/
find . \( -iname '*waffles*innerofinner*' -o \
          -iname '*pancakes*innerofinner*' -o \
          -iname '*bagels*innerofinner*' \) \
       -type f \
       -exec sh -c 'cp "$1" "$(dirname "$(dirname "$1")")"' _ {} \;
Should do it. This finds every file in the desired subdirectories, then copies each one two levels up (into the corresponding inner directory), deriving the destination from its path.
HTH
I need to traverse a directory so starting in one directory and going deeper into difference sub directories. However I also need to be able to have access to each individual file to modify the file. Is there already a command to do this or will I have to write a script? Could someone provide some code to help me with this task? Thanks.
The find command is just the tool for that. Its -exec flag or -print0 in combination with xargs -0 allows fine-grained control over what to do with each file.
Example: Replace all foo's by bar's in all files in /tmp and subdirectories.
find /tmp -type f -exec sed -i -e 's/foo/bar/g' '{}' ';'
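The same edit using the -print0/xargs -0 pairing mentioned above, which is safe for file names containing whitespace:

find /tmp -type f -print0 | xargs -0 sed -i -e 's/foo/bar/g'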
for i in $(find .) ; do
    if [ -d "$i" ] ; then : ; fi # do something with a directory here
    if [ -f "$i" ] ; then : ; fi # do something with a file here
done
This will return the whole tree (recursively) in the current directory in a list that the loop will go through.
This can be easily achieved by mixing find, xargs, sed (or other file modification command).
For example:
$ find /path/to/base/dir -type f -name '*.properties' -print0 | xargs -0 sed -i -e '/^#/d'
This will filter all files with the .properties file extension.
The xargs command feeds the file paths generated by the find command into the sed command.
The sed command deletes all lines starting with # in those files (as fed to it by xargs).
Combining commands in this way is very flexible.
For example, the find command has many different parameters, so you can filter by user name, file size, file path (e.g. only under a /test/ subfolder), or file modification time.
Another dimension of flexibility is how and what to change in your files. For example, the sed command allows you to make changes to a file by applying substitutions (specified via regular expressions). Similarly, you could use gzip to compress the files. And so on ...
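For instance, a sketch combining several of those filters (the user name, size, and age are made-up values):

# Compress *.log files under /test that are owned by alice,
# are larger than 1 MiB, and were last modified more than 7 days ago
find /test -type f -user alice -size +1M -mtime +7 -name '*.log' -print0 |
    xargs -0 gzip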
You would usually use the find command. On Linux, you have the GNU version, of course; it has many extra (and useful) options over the POSIX one. Either version will allow you to execute a command (e.g. a shell script) on the files as they are found.
The exact details of how to make changes to the file depend on the change you want to make to the file. That is probably best scripted, with find running the script:
POSIX or GNU:
find . -type f -exec your_script '{}' +
This will run your script once for a group of files with those names provided as arguments. If you want to do it one file at a time, replace the + with ';' (or \;).
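A skeleton for such a script (purely illustrative), showing how it receives the grouped file names when find uses +:

#!/bin/sh
# Hypothetical your_script: find passes one or more file names as arguments
for f in "$@"; do
    printf 'modifying %s\n' "$f"
    # ... apply whatever change you need to "$f" here ...
done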
I am assuming SearchMe is the example directory name you need to traverse completely.
I am also assuming, since it was not specified, that the files you want to modify are all text files. Is this correct?
In such a scenario I would suggest using the command:
find SearchMe -type f -exec vi {} \;
If you are not familiar with vi editor, just use another one (nano, emacs, kate, kwrite, gedit, etc.) and it should work as well.
Bash 4+
shopt -s globstar
for file in **
do
if [ -f "$file" ];then
# do some processing to your file here
# where the find command can't do conveniently
fi
done
How do I loop through a directory? I know there is for f in /var/files; do echo $f; done. The problem with that is it will spit out all the files inside the directory all at once. I want to go one by one and be able to do something with the $f variable. I think a while loop would be best suited for that, but I cannot figure out how to actually write it.
Any help would be appreciated.
A simple loop should be working:
for file in /var/*
do
#whatever you need with "$file"
done
See bash filename expansion
To write it with a while loop you can do:
ls -f /var | while read -r file; do cmd "$file"; done
The primary disadvantage of this is that cmd is run in a subshell, which causes some difficulty if you are trying to set variables. The main advantages are that the shell does not need to load all of the filenames into memory and there is no globbing. When you have a lot of files in the directory, those advantages are important. (That's why I use -f on ls: in a large directory, ls itself can take several tens of seconds to run, and -f speeds that up appreciably. In such cases, for file in /var/* will likely fail with a glob error.)
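If you do need variables set inside the loop to survive it, bash's process substitution keeps the loop in the current shell (a sketch):

count=0
while read -r file; do
    count=$((count + 1))   # this assignment survives: no subshell is involved
done < <(ls -f /var)
echo "$count entries"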
You can go without the loop:
find /path/to/dir -type f -exec /your/first/command \{\} \; -exec /your/second/command \{\} \;
HTH