I want to copy executable files from one directory to another.
The source directory includes all sorts of files I don't care about (build artifacts). I want to grab only the executable files, using a bash script that works on both OS X and Linux.
By executable, I mean a file that has the executable permission, and would pass test -x "$filename".
I know I can write some python script but then I would be introducing a dependency on python (just to copy files!) which is something I really want to avoid.
Note: I've seen a couple of similar questions, but the answers seem to only work on Linux (as the questions specifically asked about Linux). Please do not mark this as a duplicate unless the duplicate question is indeed about cross-platform copying of executable files only.
Your own answer is conceptually elegant, but slow, because it creates at least one child process for every input file (test), plus an additional one for each matching file (cp).
Here's a more efficient bash alternative that:
builds up an array of matching input files in shell code,
and then copies them using a single invocation of cp.
#!/usr/bin/env bash
exeFiles=()
for f in "$src_dir"/*; do [[ -x $f && -f $f ]] && exeFiles+=( "$f" ); done
cp "${exeFiles[#]}" "$dest_dir/"
exeFiles=() initializes the array in which to store the matching filenames.
for f in "$src_dir"/* loops over all files and directories located directly in $src_dir; note how * must be unquoted for globbing (filename expansion) to occur.
[[ -x $f && -f $f ]] determines whether the item at hand is executable (-x) and a regular file (-f); note that double-quoting variable references inside [[ ... ]] is (mostly) optional.
exeFiles+=( "$f" ) appends a new element to the array.
"${exeFiles[@]}" refers to the resulting array as a whole and robustly expands to the array elements as individual arguments - see Bash arrays.
After some experimentation, this seems to work on both OS X and Ubuntu
find "$src_dir" -maxdepth 1 -type f -exec test -x {} \; -exec cp {} "$dest_dir/" \;
Note that the -maxdepth 1 is specific to my use case where I don't care about recursively going through all the directories.
-type f is necessary because directories also count as executables
I pass two -exec flags. The -exec flag not only executes the command but also acts as a filter, so that if the command returns a non-zero exit code, the file is filtered out.
The way to use -exec is to write out whatever command you want, use {} to supply the current file, and terminate with \;
The first -exec returns a success exit code only if the file is executable.
The second -exec performs the copy, but it's not executed if the first -exec fails.
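To see the filtering behavior in isolation, here's a variant (a sketch, same $src_dir assumption as above) that merely prints the matching executables instead of copying them:

find "$src_dir" -maxdepth 1 -type f -exec test -x {} \; -print  # -print only runs when test -x succeeded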
Related
I have the following script, which I normally use when I get a bunch of files that need to be renamed to the directory name which contains them.
The problem now is I need to rename the file to the directory two levels up. How can I get the grandparent directory to make this work?
With the following I get errors like this example:
"mv: cannot move ./48711/zoom/zoom.jpg to ./48711/zoom/./48711/zoom.jpg: No such file or directory". This is running on CentOS 5.6.
I want the final file to be named: 48711.jpg
#!/bin/bash
function dirnametofilename() {
for f in "$@"; do
bn=$(basename "$f")
ext="${bn##*.}"
filepath=$(dirname "$f")
dirname=$(basename "$filepath")
mv "$f" "$filepath/$dirname.$ext"
done
}
export -f dirnametofilename
find . -name "*.jpg" -exec bash -c 'dirnametofilename "{}"' \;
find .
Another method could be to use
(cd ../../; pwd)
If this were executed in any top-level paths such as /, /usr/, or /usr/share/, you would get a valid directory of /, but when you get one level deeper, you would start seeing results: /usr/share/man/ would return /usr, /my/super/deep/path/is/awesome/ would return /my/super/deep/path, and so on.
You could store this in a variable as well:
GRANDDADDY="$(cd ../../; pwd)"
and then use it for the rest of your script.
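For instance, a minimal (hypothetical) usage sketch:

GRANDDADDY="$(cd ../../; pwd)"
echo "two levels up from $PWD is $GRANDDADDY"  # e.g. in /usr/share/man this prints /usr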
Assuming filepath doesn't end in /, which it shouldn't if you use dirname, you can do
Parent="${filepath%/*}"
Grandparent="${filepath%/*/*}"
So do something like this
[[ "${filepath%/*/*}" == "" ]] && echo "Path isn't long enough" || echo "${filepath%/*/*}"
Also this likely won't work if you're using relative paths (like those produced by find .), in which case you will want to use
filepath=$(dirname "$f")
filepath=$(readlink -f "$filepath")
instead of
filepath=$(dirname "$f")
Also you're never stripping the extension, so there is no reason to get it from the file and then append it again.
Note:
* This answer solves the OP's specific problem, in whose context "grandparent directory" means: the parent directory of the directory containing a file (it is the grandparent path from the file's perspective).
* By contrast, given the question's generic title, other answers here focus (only) on getting a directory's grandparent directory; the succinct answer to the generic question is: grandParentDir=$(cd ../..; printf %s "$PWD") to get the full path, and grandParentDirName=$(cd ../..; basename -- "$PWD") to get the dir. name only.
Try the following:
find . -name '*.jpg' \
-execdir bash -c \
'old="$1"; new="$(cd ..; basename -- "$PWD").${old##*.}"; echo mv "$old" "$new"' - {} \;
Note: echo was prepended to mv to be safe - remove it to perform the actual renaming.
-execdir ... \; executes the specified command in the directory that contains each matching file and expands {} to the filename of each.
bash -c is used to execute a small ad-hoc script:
$(cd ..; basename -- "$PWD") determines the parent directory name of the directory containing the file, which is the grandparent path from the file's perspective.
${old##*.} is a Bash parameter expansion that returns the input filename's suffix (extension).
Note how {} - the filename at hand - is passed as the 2nd argument to the command in order to bind to $1, because bash -c uses the 1st one to set $0 (which is set to dummy value - here).
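You can verify that binding in isolation with a throwaway invocation (hypothetical filename):

bash -c 'echo "\$0 is $0, \$1 is $1"' - some.jpg   # prints: $0 is -, $1 is some.jpg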
Note that each file is merely renamed, i.e., it stays in its original directory.
Caveat:
Each directory with a matching file should only contain 1 matching file, otherwise multiple files will be renamed to the same target name in sequence - effectively, only the last file renamed will survive.
Can't you use realpath ../../ or readlink -f ../../ ? See this, readlink(1), realpath(3), canonicalize_file_name(3), and realpath(1). You may want to install the realpath package on Debian or Ubuntu. Probably CentOS has an equivalent package. (readlink should always be available, it is in GNU coreutils)
Hi all, I have some problems with my script. I've read that changing the current directory from within a script is a bit of an issue. Basically, I am looking for a single php file within a project folder and any sub-folders in it, and I want to change the directory to where that file is and perform a command for it. So far no luck.
function findPHP(){
declare -a FILES
FILES=$(find ./ -name \*.php)
for file in "${FILES[#]}"
do
DIR=`dirname file`
( cd $DIR && doSomethingInThisDir &(...))
done
}
Any help would be greatly appreciated.
You are trying to iterate over FILES as an array, but it only has one element. In order to make the result of your subshell into an array, you can:
FILES=($(find ./ -name \*.php))
Note that it splits file names on spaces, so even though you properly quote below, it won't help. Alternatively, you could just let it split below (i.e. using your existing FILES) and use instead:
for file in $FILES
If you are using bash 4, you may want to have a look at recursive globbing... this would make it a bit easier:
for file in **/*.php
Note that you have to have the globstar shell option set, which you could enable with shopt -s globstar. This way is simpler and won't break on whitespace.
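Putting those two points together, a bash 4+ sketch (doSomethingInThisDir stands in for whatever command you want to run, as in the question):

shopt -s globstar
for file in **/*.php; do
  ( cd "$(dirname "$file")" && doSomethingInThisDir )  # subshell, so the cd doesn't persist
done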
Also, you probably want $file here:
DIR=`dirname "$file"`
Or just use parameter expansion:
DIR=${file%/*}
There is no reason to use an array, or store the file list in any way. If your find supports -execdir (eg gnufind 4.2.27), then use it. Otherwise, cd in a subshell as you have done:
#!/bin/bash
doSomethingInThisDir() ( cd "$(dirname "$1")"; ... )
export -f doSomethingInThisDir
find . -type f -exec bash -c 'doSomethingInThisDir "$1"' -- {} \;
I have defined the function using () instead of {}, but that is not necessary in this case. Normally, using () causes the function to run in a subshell, but that happens here anyway because find runs a separate process for each file.
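For completeness, a sketch of the -execdir variant mentioned above; here find itself changes into each file's directory, so the function needs no cd of its own (doSomethingHere is a hypothetical placeholder):

doSomethingHere() { pwd; }   # runs with the file's directory as the working directory
export -f doSomethingHere
find . -type f -execdir bash -c 'doSomethingHere "$1"' -- {} \;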
I would like to know whether rm can remove all files within a directory (but not the subfolders or files within the subfolders)?
I know some people use:
rm -f /direcname/*.*
but this assumes the filename has an extension which not all do (I want all files - with or without an extension to be removed).
Although find allows you to delete files using -exec rm {} \; you can use
find /direcname -maxdepth 1 -type f -delete
and it is faster. Using -delete implies the -depth option, which means process directory contents before directory.
find /direcname -maxdepth 1 -type f -exec rm {} \;
Explanation:
find searches for files and directories within /direcname
-maxdepth 1 restricts it to looking for files and directories that are direct children of /direcname
-type f restricts the search to files
-exec rm {} \; runs the command rm {} for each file (after substituting the file's path in place of {}).
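If your find supports it, terminating -exec with + instead of \; batches many paths into a single rm invocation, which is much faster than running one rm per file:

find /direcname -maxdepth 1 -type f -exec rm {} +  # one rm per batch of files instead of one per file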
I would like to know whether rm can remove all files within a directory (but not the subfolders or files within the subfolders)?
That's easy:
$ rm folder/*
Without the -r, the rm command won't touch sub-directories or the files they contain. This will only remove the files in folder and not the sub-directories or their files.
You will see errors telling you that folder/foo is a directory and cannot be removed, but that's actually okay with you. If you want to eliminate these messages, just redirect STDERR:
$ rm folder/* 2> /dev/null
By the way, the exit status of the rm command will likely be non-zero (because of the directories it refused to remove), so you can't use it to check for errors. If that's important, you'll have to loop:
$ for file in *
> do
> [[ -f $file ]] && rm "$file"
> [[ $? -ne 0 ]] && echo "Error in removing file '$file'"
> done
This should work in BASH even if the file names have spaces in them.
You can use
find /direcname -maxdepth 1 -type f -exec rm -f {} \;
A shell solution (without the non-standard find -maxdepth) would be
for file in .* *; do
test -f "$file" && rm "$file"
done
Some shells, notably zsh and perhaps bash version 4 (but not version 3), have a syntax to do that.
With zsh you might just type
rm /dir/path/*(.)
and if you would want to remove any file whose name starts with foo, recursively in subdirectories, you could do
rm /dir/path/**/foo*(.)
the double-star feature (along with, in my opinion, better interactive completion) is enough reason to switch to zsh for interactive shells. YMMV
The dot in parentheses is a glob qualifier suffix indicating that you want only regular files (not symlinks or directories) to be expanded by the zsh shell.
Unix isn't DOS. There is no special "extension" field in a file name. Any characters after a dot are just part of the name and are called the suffix. There can be more than one suffix, for example .tar.gz. The shell glob character * matches across the . character; it is oblivious to suffixes. So the MS-DOS *.* is just * in Unix.
Almost. * does not match files which start with a .. Objects named with a leading dot are, by convention, "hidden". They do not show up in ls either unless you specify -a.
(This means that the . and .. directory entries for "self" and "parent" are considered hidden.)
To match hidden entries also, use .*
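A quick demonstration in an empty scratch directory:

touch .hidden visible
echo *    # -> visible
echo .*   # -> .hidden (plus . and .. in shells that don't skip them)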
The rm command does not remove directories (when not operated recursively with -r).
Try rm <directory> and see. Even if the directory is empty, it will refuse.
So, the way you remove all (non-hidden) files, pipes, devices, sockets and symbolic links from a directory (but leave the subdirectories alone) is in fact:
rm /path/to/directory/*
to also remove the hidden ones which start with .:
rm /path/to/directory/{*,.*}
This syntax is brace expansion. Brace expansion is not pattern matching; it is just a short-hand for generating multiple arguments, in this case:
rm /path/to/directory/* /path/to/directory/.*
this expansion takes place first, and then globbing takes place to generate the names to be deleted. (Note that .* also matches the . and .. entries; rm refuses to remove directories, so those just produce harmless error messages.)
Note that various solutions posted here have various issues:
find /path/to/directory -type f -delete
# -delete is not Unix standard; GNU find extension
# without -maxdepth 1 this will recurse over all files
# -maxdepth is also a GNU extension
# -type f finds only files; so this neglects to delete symlinks, fifos, etc.
The GNU find solutions have the benefit that they work even if the number of directory entries to be deleted is huge: too large to pass in a single call to rm. Another benefit is that the built-in -delete does not have issues with passing funny path names to an external command.
The portable workaround for the problem of too many directory entries is to list the entries with ls and pipe to xargs:
( cd /path/to/directory ; ls -a | xargs rm -- )
The parentheses mean "do these commands in a sub-process"; this way the effect of the cd is forgotten, which is useful in scripting. ls -a includes the hidden files. (Beware that this pipeline still breaks on file names containing whitespace or quote characters, which xargs treats specially, and ls -a also emits . and .., which rm will simply refuse to remove.)
We now include a -- after rm which means "this is the last option; everything else is a non-option argument". This guards us against directory entries whose names are indistinguishable from options. What if a file is called -rf and ends up the first argument? Then you have rm -rf ... which will blow off subdirectories.
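You can demonstrate the point with a throwaway file:

touch -- -rf   # creates a file literally named -rf
rm -- -rf      # removes it; without --, rm would parse -rf as options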
The easiest way to do this is to use:
rm *
In order to remove directories, you must specify the option -r
rm -r
so your directories and anything contained in them will not be removed by using
rm *
per the man page for rm, its purpose is to remove files, which is why this works
Let's say you have a first.sh file in a directory: "/home/userbob/scripts/foo/". Basically I would like to know how to loop through specific directories, each time going back up to a higher level directory and repeating.
The .sh file has something like this pseudocode:
#!/bin/bash
curdi={$PATH} #where the first.sh file sits on the server
FOLDERS="$curdi/waffles/inner/
$curdi/pancakes/inner/
$curdi/bagels/inner/"
for f in $FOLDERS
do
cd $f
cp innerofinner/* .
cd $curdi
done
The idea is to somehow copy all the contents of /home/userbob/scripts/foo/waffles/inner/innerofinner to /home/userbob/scripts/foo/waffles/inner/
(and basically repeating just with the path having pancakes, bagels.etc.)
Can't do it for all directories (*) under /home/userbob/scripts/foo/ because there are some that I don't want to copy.
This should do it:
for name in waffles pancakes bagels
do
cp "$curdi/$name/inner/innferofinner/"* "$curdi/waffles/inner"
done
Walking file trees? Sounds like a job for find!
#!/usr/bin/env bash
# only environment variables should be all-caps
dirs=({bagels,pancakes}/inner)
find "${dirs[#]}" -type d -maxdepth 1 -mindepth 1 -name innerofinner -execdir bash -c 'cp "$1"/* .' -- {} \;
I did a partial path and assumed a working directory of /home/userbob/scripts/foo. An absolute path would work, too, and would look like
dirs=(/home/userbob/scripts/foo/{bagels,pancakes}/inner)
This finds all directories exactly one level below the listed directories that are named "innerofinner" and, in their parent directories, executes bash and a simple cp script.
If you're wondering how this works, read below.
The dirs=() syntax creates an empty array named dirs. dirs=(a b) creates an array with a at index 0 and b at index 1. Any whitespace-delimited string will work here. In a shell script {a,b,c} expands to a b c but A{a,b,c}B expands to AaB AbB AcB. So specifying {bagels,pancakes}/inner is just a way to say both bagels/inner and pancakes/inner without having to type as much.
A variable in bash can be expanded with $foo or with ${foo}; these are the same. An array in shell can be expanded to all of its elements with ${foo[@]} delimited by spaces (if you know perl or php this will make some sense), and quoting the expansion (always a good idea in shell!) prevents spaces inside the elements from being processed again by the shell. Thus, "${dirs[@]}" becomes bagels/inner pancakes/inner.
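A quick way to convince yourself of both expansions:

dirs=({bagels,pancakes}/inner)
printf '%s\n' "${dirs[@]}"   # prints bagels/inner, then pancakes/inner on separate lines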
Knowing this we see that the find command has become find bagels/inner pancakes/inner -maxdepth 1 -mindepth 1 -type d -name innerofinner and if you execute this it will return exactly two lines: both full paths to each innerofinner directory. All we want now is to do something for each one, which -execdir does nicely.
Use a recursive function or invoke the script recursively.
I am not sure if I understand your problem statement correctly. Your pseudo code seems good. But I see a problem with the following line.
curdi={$PWD}
It does not give you the directory where the script resides but gives the directory you are in. If your script directory is in the path and you are running the script from your home directory then $curdi would point to your home directory and not the directory where your script resides. This will lead to undesired results.
Incidentally, if you really wanted to do it in the way that your pseudo-script attempts it, you'd do it like this
#!/usr/bin/env bash
for f in "$PWD"/{waffles,pancakes,bagels}/inner ; do
cd "$f"
cp innerofinner/* .
# if you know for sure that it's one level up
cd ..
done
Presuming that $PWD is a good enough indicator of "current" directory for you. Me, I'd pass it in to the script.
#!/usr/bin/env bash
base="${1-$PWD}"
for f in "$base"/{waffles,pancakes,bagels}/inner ; do
cd "$f"
cp innerofinner/* .
cd ..
done
and call it like
breakfast.sh /home/userbob/scripts/foo/
find . \( -iname '*waffles*innerofinner*' -o \
          -iname '*pancakes*innerofinner*' -o \
          -iname '*bagels*innerofinner*' \) \
       -type f \
       -exec sh -c 'cp "$1" "$(dirname "$(dirname "$1")")"' -- {} \;
Should do. Finds every file in the desired subdirs, then copies each one two levels up: the sh -c script computes the file's grandparent directory with two nested dirname calls and uses it as the cp destination. (A backtick substitution won't work here, since the shell would expand it once before find runs, not once per file.)
HTH
I need to traverse a directory, starting in one directory and going deeper into different sub directories. However, I also need to be able to have access to each individual file to modify the file. Is there already a command to do this or will I have to write a script? Could someone provide some code to help me with this task? Thanks.
The find command is just the tool for that. Its -exec flag or -print0 in combination with xargs -0 allows fine-grained control over what to do with each file.
Example: Replace all foo's by bar's in all files in /tmp and subdirectories.
find /tmp -type f -exec sed -i -e 's/foo/bar/' '{}' ';'
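The -print0/xargs -0 combination mentioned above is equivalent but handles file names containing spaces or newlines (GNU tools assumed):

find /tmp -type f -print0 | xargs -0 sed -i -e 's/foo/bar/'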
for i in `find` ; do
if [ -d "$i" ] ; then do something with a directory ; fi
if [ -f "$i" ] ; then do something with a file etc. ; fi
done
This will return the whole tree (recursively) in the current directory in a list that the loop will go through.
This can be easily achieved by mixing find, xargs, sed (or other file modification command).
For example:
$ find /path/to/base/dir -type f -name '*.properties' | xargs sed -i -e '/^#/d'
This will select all files with the .properties extension.
The xargs command will feed the file paths generated by the find command into the sed command.
The sed command will delete all lines starting with # in those files.
Combining commands in this way is very flexible.
For example, the find command has different parameters, so you can filter by user name, file size, file path (eg: under the /test/ subfolder), or file modification time.
Another dimension of flexibility is how and what to change in your files. For example, the sed command lets you apply substitutions (specified via regular expressions). Similarly, you could use gzip to compress the files. And so on ...
You would usually use the find command. On Linux, you have the GNU version, of course; it has many extra (and useful) options. Both the POSIX and GNU versions will allow you to execute a command (eg a shell script) on the files as they are found.
The exact details of how to make changes to the file depend on the change you want to make to the file. That is probably best scripted, with find running the script:
POSIX or GNU:
find . -type f -exec your_script '{}' +
This will run your script once for a group of files with those names provided as arguments. If you want to do it one file at a time, replace the + with ';' (or \;).
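your_script here is whatever you write; a trivial sketch showing the shape it needs (each argument is a file path supplied by find):

#!/usr/bin/env bash
for f in "$@"; do
  printf 'processing %s\n' "$f"
  # ... make your modification to "$f" here ...
done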
I am assuming SearchMe is the example directory name you need to traverse completely.
I am also assuming, since it was not specified, that the files you want to modify are all text files. Is this correct?
In such scenario I would suggest using the command:
find SearchMe -type f -exec vi {} \;
If you are not familiar with vi editor, just use another one (nano, emacs, kate, kwrite, gedit, etc.) and it should work as well.
Bash 4+
shopt -s globstar
for file in **
do
if [ -f "$file" ];then
# do some processing to your file here
# where the find command can't do conveniently
fi
done