Bash test and execute if directory pattern exists - linux

How do you do an inline test for the existence of a directory pattern?
If a directory pattern exists, then I want to chmod that pattern.
e.g. I'm trying to do the following:
[ -d /usr/local/myproject/*/bin ] && chmod +x /usr/local/myproject/*/bin/*
but this gives me the error "-bash: [: too many arguments".

there's no need to test:
chmod +x /usr/local/myproject/*/bin/* 2>/dev/null

It doesn't work because the -d test takes exactly one argument, but your glob expands to more than one. A fix would be:
for dir in /usr/local/myproject/*/bin; do
[[ -d $dir ]] && chmod +x "$dir"/*
done

To salvage some usefulness out of my answer, suppose you had so many bin directories that you couldn't do it yi_H's way.
find /usr/local/myproject/ -maxdepth 2 -path '/usr/local/myproject/*/bin' -type d -exec chmod a+x {} + 2>/dev/null
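Another option, if you want to keep an explicit existence test, is bash's nullglob option: with it set, an unmatched glob expands to an empty list instead of the literal pattern, so you can check whether anything matched before acting. A minimal sketch, using a throwaway mktemp tree in place of /usr/local/myproject:

```shell
#!/bin/bash
# With nullglob, an unmatched glob expands to nothing, so the
# array is empty when no */bin directory exists.
shopt -s nullglob
root=$(mktemp -d)                 # stand-in for /usr/local/myproject
mkdir -p "$root/app1/bin"
touch "$root/app1/bin/tool"

dirs=("$root"/*/bin)              # empty array if nothing matches
if ((${#dirs[@]})); then
    for d in "${dirs[@]}"; do
        chmod +x "$d"/*
    done
fi
```

This avoids both the "[: too many arguments" error and the need to discard chmod's stderr.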

Related

Optimizing bash script (for loops, permissions, ...)

I made this script (very quickly ... :)) ages ago and use it very often, but now I'm interested how bash experts would optimize it.
What it does is it goes through files and directories in the current directory and sets the correct permissions:
#!/bin/bash
echo "Repairing chowns."
for item in "$@"; do
sudo chown -R ziga:ziga "$item"
done
echo "Setting chmods of directories to 755."
for item in $@; do
sudo find "$item" -type d -exec chmod 755 {} \;
done
echo "Setting chmods of files to 644."
for item in $@; do
sudo find "$item" -type f -exec chmod 644 {} \;
done
echo "Setting chmods of scripts to 744."
for item in $@; do
sudo find "$item" -type f -name "*.sh" -exec chmod 744 {} \;
sudo find "$item" -type f -name "*.pl" -exec chmod 744 {} \;
sudo find "$item" -type f -name "*.py" -exec chmod 744 {} \;
done
What I'd like to do is
Reduce the number of for-loops
Reduce the number of find statements (I know the last three can be combined into one, but I wonder if it can be reduced even further)
Make script accept parameters other than the current directory and possibly accept multiple parameters (right now it only works if I cd into a desired directory and then call bash /home/ziga/scripts/repairPermissions.sh .). NOTE: the parameters might have spaces in the path.
a) You are looping through "$@" in all cases; you only need a single loop.
a1) But find can do this for you, you don't need a bash loop.
a2) And chown can take multiple directories as arguments.
b) Per chw21, remove the sudo for files you own.
c) One exec per found file/directory is expensive, use xargs.
d) Per chw21, combine the last three finds into one.
#!/bin/bash
echo "Repairing permissions."
sudo chown -R ziga:ziga "$@"
find "$@" -type d -print0 | xargs -0 --no-run-if-empty chmod 755
find "$@" -type f -print0 | xargs -0 --no-run-if-empty chmod 644
find "$@" -type f \
\( -name '*.sh' -o -name '*.pl' -o -name '*.py' \) \
-print0 | xargs -0 --no-run-if-empty chmod 744
This is 11 execs (sudo, chown, 3 * find/xargs/chmod) of other processes (if the argument list is very long, xargs will exec chmod multiple times).
This does however read the directory tree four times. The OS's filesystem caching should help though.
Edit: Explanation for why xargs is used in answer to chepner's comment:
Imagine that there is a folder with 100 files in it.
a find . -type f -exec chmod 644 {} \; will execute chmod 100 times.
Using find . -type f -print0 | xargs -0 chmod 644 executes xargs once and chmod once (or more times if the argument list is very long).
This is three processes started compared to 101 processes started. The resources and time (and energy) needed to execute three processes is far less.
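Note that find itself can batch arguments the same way when -exec is terminated with + instead of \;, which gives the xargs effect without a pipe. A quick sketch on a hypothetical throwaway tree:

```shell
#!/bin/bash
# "-exec chmod 644 {} +" packs as many paths as fit onto one
# chmod command line, so chmod runs once (or a few times) rather
# than once per file, exactly like the xargs pipeline.
root=$(mktemp -d)
touch "$root/a.txt" "$root/b.txt"
find "$root" -type f -exec chmod 644 {} +
stat -c '%a' "$root/a.txt"   # prints 644
```

The xargs form is still handy when you need its extra options (such as --no-run-if-empty or -P for parallelism).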
Edit 2:
Added --no-run-if-empty as an option to xargs. Note that this may not be portable to all systems.
I assume that you are ziga. This means that after the first chown command in there, you own every file and directory, and I don't see why you'd need any sudo after that.
You can combine the three last finds quite easily:
find "$item" -type f \( -name "*.sh" -o -name "*.py" -o -name "*.pl" \) -exec chmod 744 {} \;
Apart from that, I wonder what kind of broken permissions you tend to find. For example, chmod does know the +X modifier, which sets x only on directories and on files where at least one of user, group, or other already has an x.
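The +X behaviour is easy to demonstrate with a small experiment (temporary files, illustrative names):

```shell
#!/bin/bash
# Capital X adds execute only to directories and to files that
# already have an execute bit for someone; plain data files keep
# their mode untouched.
root=$(mktemp -d)
touch "$root/data.txt" "$root/tool.sh"
chmod 644 "$root/data.txt"    # no execute bits anywhere
chmod 744 "$root/tool.sh"     # owner already has x
chmod -R a+X "$root"
stat -c '%a %n' "$root"/*     # data.txt stays 644, tool.sh becomes 755
```

This is why a single chmod -R a+X pass can often replace the separate directory/file find invocations when the goal is just "directories searchable, scripts executable".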
You can simplify this:
for item in "$@"; do
To this:
for item; do
That's right, the default values for a for loop are taken from "$@".
This won't work well if some of the directories contain spaces:
for item in $@; do
Again, replace with for item; do. Same for all the other loops.
As the other answer pointed out, if you are running this script as ziga, then you can drop all the sudo except in the first loop.
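The bare "for item" idiom is easy to verify; positional parameters containing spaces survive intact (the demo function here is hypothetical, just for illustration):

```shell
#!/bin/bash
# With no "in" list, "for" iterates over the positional
# parameters exactly as if 'in "$@"' had been written,
# preserving embedded spaces.
demo() {
    for item; do
        printf '<%s>\n' "$item"
    done
}
demo "a dir" "b dir"   # prints <a dir> then <b dir>
```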
You may use the idiom for item instead of for item in "$@".
We may reduce the use of find to just once (with a double loop).
I am assuming that ziga is your "$USER" name.
Bash 4.4 is required for the readarray with the -d option.
#!/bin/bash
user=$USER
for i
do
# Repairing chowns.
sudo chown -R "$user:$user" "$i"
readarray -d '' -t line < <(sudo find "$i" -print0)
for f in "${line[@]}"; do
# Setting chmods of directories to 755.
[[ -d $f ]] && sudo chmod 755 "$f"
# Setting chmods of files to 644.
[[ -f $f ]] && sudo chmod 644 "$f"
# Setting chmods of scripts to 744.
[[ $f == *.@(sh|pl|py) ]] && sudo chmod 744 "$f"
done
done
If you have an older bash (2.04+), change the readarray line to this:
while IFS='' read -r -d '' item; do line+=("$item"); done < <(sudo find "$i" -print0)
I am keeping the sudo assuming the "items" in "$@" might be repeated inside searched directories. If there is no repetition, sudo may be omitted.
There are two inevitable external commands (sudo chown) per loop.
Plus only one find per loop.
The other two external commands (sudo chmod) are reduced to only those needed to effect the change you need.
But the shell is always very slow at doing this kind of job.
So, the gain in speed depends on the type of files where the script is used.
Do some testing with time script to find out.
Subject to a few constraints, listed below, I would expect something similar to the following to be faster than anything mentioned thus far. I also use only Bourne shell constructs:
#!/bin/sh
set -e
per_file() {
chown ziga:ziga "$1"
test -d "$1" && chmod 755 "$1" || { test -f "$1" && chmod 644 "$1"; }
file "$1" | awk -F: '$2 !~ /script/ {exit 1}' && chmod 744 "$1"
}
if [ "$process_my_file" ]
then
per_file "$1"
exit
fi
export process_my_file=$0
find "$@" -exec "$0" {} +
The script calls find(1) on the command-line arguments, and invokes itself for each file found. It detects re-invocation by testing for the existence of an environment variable named process_my_file. Although there is a cost to invoking the shell each time, it should be dwarfed by not traversing the filesystem.
Notes:
set -e to exit on first error
+ in find to batch many files into each invocation
file(1) to detect a script
chown(1) not invoked recursively
symbolic links not followed
Calling file(1) for every file to determine if it's a script is definitely more expensive than just testing the name, but it's also more general. Note that awk is testing only the description text, not the filename. It would misfire if the filename contained a colon.
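To see what the awk test is keying on, run file(1) over a known script; its output is "name: description", and awk with -F: inspects only the description field. A sketch with a hypothetical temp file:

```shell
#!/bin/bash
# file(1) prints something like "name: POSIX shell script, ...".
# awk splits on ":" and exits nonzero unless field 2 (the
# description) contains "script", which drives the && above.
tmp=$(mktemp)
printf '#!/bin/sh\necho hi\n' > "$tmp"
file "$tmp" | awk -F: '$2 !~ /script/ {exit 1}' && echo "detected as script"
```

As noted, a colon in the filename would shift the fields and break the test; matching on $NF or using file -b would be more robust.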
You could lift chown back up out of the per_file() function, and use its -R option. That might be faster, as it's one execution. It would be an interesting experiment.
In terms of your requirements, there are no loops, and only one call to find(1). You also mentioned,
right now it only works if I cd into a desired directory
but that's not true. It also works if you mention the target directory on the command line,
$ fix_tree_perms my/favorite/directory
You could add, say, a -d option to change there first, but I don't see how that would be easier to use.

Find with variable directory (but no -name) throws find: paths must precede expression: find

I'm trying to set up a shortcut function on my server for fixing permissions on new site folders (maybe this is bad practice, but I still want to solve the following problem:)
function webmod { chown -R rafe:www-data $1; find '$1' -type d -exec chmod 775 '{}' \; find '$1' -type f -exec chmod 664 '{}' \; chmod g+s -R $1; }
When I use webmod directory/name/here it just throws
find: paths must precede expression: find
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
What am I doing wrong?
Thanks in advance!
Add a semicolon between \; find. Otherwise, the first find gets the second as arguments :-)
Single quoted strings are not interpolated by the shell, instead of:
find '$1' ...
try:
find "$1" ...
Also, don't forget to quote the parameters to chown/chmod:
chown -R rafe:www-data "$1"
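Putting both fixes together (the missing semicolon and the quoting), the function might look like the sketch below; the rafe:www-data owner comes from the question, and the chown will of course need suitable privileges:

```shell
# Corrected version of webmod: double quotes so $1 is expanded,
# and a ";" after each -exec ... \; so the two finds and the
# final chmod are separate commands.
function webmod {
    chown -R rafe:www-data "$1"
    find "$1" -type d -exec chmod 775 '{}' \;
    find "$1" -type f -exec chmod 664 '{}' \;
    chmod -R g+s "$1"
}
```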
I think it would be best to have a script called from find's -exec rather than complex-looking find statements. In that case the answer is already given by "perreal".
Or you can have a script as
script1.sh
if [ -d "$1" ] ; then
chmod 775 "$1"
elif [ -f "$1" ]; then
chmod 664 "$1"
fi
And find will now look like:
find "$1" -exec ./script1.sh {} \; ; chmod g+s -R "$1"
This way, if you want to do more with those files, you can extend the script easily.
But for this solution to be applicable you must have the luxury of keeping a script in your environment.

Bash recursively execute a command on each directory

I have a directory with many subdirectories inside; I want to execute a command on each of those subdirectories.
What I want to do is run 'svn up'.
This is what I have tried so far:
find . -type d -maxdepth 1 -exec svn "up '{}'" \;
and
for dir in * do cd $dir; svn up; cd ..;
Neither of them works so far (I have tried many things without luck).
You just need a trailing slash on the glob:
for d in */; do # only match directories
( cd "$d" && svn up ) # Use a subshell to avoid having to cd back to the root each time.
done
This works for me - the -d checks for a directory:
for f in *; do if [ -d "$f" ]; then cd "$f"; echo "$f"; cd ..; fi; done
echo "$f" can be substituted for whatever command you wish to run from inside each directory.
Note that this, and the trailing / solution, both match symbolic links to directories as well. If you want to avoid this behaviour (only enter real directories), you can do this:
for f in *; do if [ -d "$f" -a ! -L "$f" ]; then cd "$f"; echo "$f"; cd ..; fi; done
This seems to work:
find . -maxdepth 1 -type d -exec svn up "{}" \;
But it tried to update the current directory as well, which should be omitted. (Although it works for me because the current dir is not an svn working copy.)
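To leave out the starting directory itself, find's -mindepth option can be added. A sketch on a hypothetical temp tree, with svn up replaced by echo so it runs anywhere:

```shell
#!/bin/bash
# -mindepth 1 drops "." from the results, so the command runs
# only on the actual subdirectories; -maxdepth 1 keeps it to
# immediate children.
root=$(mktemp -d)
mkdir "$root/wc1" "$root/wc2"
cd "$root"
find . -mindepth 1 -maxdepth 1 -type d -exec echo updating {} \;
```

In the real case the echo would be svn up, e.g. -exec svn up {} \;.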

Using a while/for loop with the 'find' command to copy files and directories

I have the following problem: I want to make a script that backs up a certain directory completely to another directory. I may not use cp -r or any other recursive command, so I was thinking of using a while or for loop. The directory that needs to be backed up is given as a parameter. This is what I have so far:
OIFS="$IFS"
IFS=$'\n'
for file in `find $1`
do
cp $file $HOME/TestDirectory
done
IFS="$OIFS"
But when I execute it, this is what my terminal says: Script started, file is typescript
Try this:
find "$1" -type f -print0 | xargs -0 cp -t "$HOME/TestDirectory"
Don't run your script through script !
Add this (shebang) at top of file:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 cp -t "$HOME/TestDirectory"
Change the permissions of your script to add the executable flag:
chmod +x myScript
Run your script locally with arguments:
./myScript rootDirectoryWhereSearchForFiles

Find file then cd to that directory in Linux

In a shell script how would I find a file by a particular name and then navigate to that directory to do further operations on it?
From here I am going to copy the file across to another directory (but I can do that already just adding it in for context.)
You can use something like:
cd -- "$(dirname "$(find / -type f -name ls | head -1)")"
This will locate the first ls regular file then change to that directory.
In terms of what each bit does:
The find will start at / and search down, listing out all regular files (-type f) called ls (-name ls). There are other things you can add to find to further restrict the files you get.
The | head -1 will filter out all but the first line.
$() is a way to take the output of a command and put it on the command line for another command.
dirname can take a full file specification and give you the path bit.
cd just changes to that directory, the -- is used to prevent treating a directory name beginning with a hyphen from being treated as an option to cd.
If you execute each bit in sequence, you can see what happens:
pax[/home/pax]> find / -type f -name ls
/usr/bin/ls
pax[/home/pax]> find / -type f -name ls | head -1
/usr/bin/ls
pax[/home/pax]> dirname "$(find / -type f -name ls | head -1)"
/usr/bin
pax[/home/pax]> cd -- "$(dirname "$(find / -type f -name ls | head -1)")"
pax[/usr/bin]> _
The following should be more safe:
cd -- "$(find / -name ls -type f -printf '%h' -quit)"
Advantages:
The double dash prevents the interpretation of a directory name starting with a hyphen as an option (find doesn't produce such file names, but it's not harmful and might be required for similar constructs)
-name check before -type check because the latter sometimes requires a stat
No dirname required because the %h specifier already prints the directory name
-quit to stop the search after the first file found, thus no head required which would cause the script to fail on directory names containing newlines
No one suggesting locate (which is much quicker for huge trees)?
zsh:
cd $(locate zoo.txt|head -1)(:h)
cd ${$(locate zoo.txt)[1]:h}
cd ${$(locate -r "/zoo.txt$")[1]:h}
or could be slow
cd **/zoo.txt(:h)
bash:
cd $(dirname $(locate -l1 -r "/zoo.txt$"))
Based on this answer to a similar question, other useful choice could be having 2 commands, 1st to find the file and 2nd to navigate to its directory:
find ./ -name "champions.txt"
cd "$(dirname "$(!!)")"
Where !! is history expansion meaning 'the previous command'.
Expanding on answers already given, if you'd like to navigate iteratively to every file that find locates and perform operations in each directory:
for i in $(find /path/to/search/root -name filename -type f)
do (
cd "$(dirname "$(realpath "$i")")";
your_commands;
)
done
If you are just finding the file and then moving it elsewhere, just use find with -exec:
find /path -type f -iname "mytext.txt" -exec mv {} /destination \;
function fReturnFilepathOfContainingDirectory {
    #fReturnFilepathOfContainingDirectory_2012.0709.18:19
    #$1=File
    local vlFl
    local vlGwkdvlFl
    local vlItrtn
    local vlPrdct
    vlFl=$1
    vlGwkdvlFl=`echo $vlFl | gawk -F/ '{ $NF="" ; print $0 }'`
    for vlItrtn in `echo $vlGwkdvlFl` ; do
        vlPrdct=`echo $vlPrdct'/'$vlItrtn`
    done
    echo $vlPrdct
}
Simply this way, isn't this elegant?
cdf yourfile.py
Of course you need to set it up first, but you need to do this only once:
Add following line into your .bashrc or .zshrc, whatever you use as your shell initialization script.
source ~/bin/cdf.sh
And add this code into ~/bin/cdf.sh file that you need to create from scratch.
#!/bin/bash
function cdf() {
THEFILE=$1
echo "cd into directory of ${THEFILE}"
# For Mac, replace find with mdfind to get it a lot faster (it also does not need the ". -name" arguments).
THEDIR=$(find . -name "${THEFILE}" | head -1 | grep -Eo "/[ /._A-Za-z0-9\-]+/")
cd "${THEDIR}"
}
If it's a program in your PATH, you can do:
cd "$(dirname "$(which ls)")"
or in Bash:
cd "$(dirname "$(type -P ls)")"
which uses one less external executable.
This uses no externals:
dest=$(type -P ls); cd "${dest%/*}"
If your file is only in one location you could try the following:
cd "$(find ~/ -name [filename] -exec dirname {} \;)" && ...
You can use -exec to invoke dirname with the path that find returns (which goes where the {} placeholder is). That will change directories. You can also add double ampersands ( && ) to execute the next command after the shell has changed directory.
For example:
cd "$(find ~/ -name need_to_find_this.rb -exec dirname {} \;)" && ruby need_to_find_this.rb
It will look for that ruby file, change to the directory, then run it from within that folder. This example assumes the filename is unique and that for some reason the ruby script has to run from within its directory. If the filename is not unique you'll get many locations passed to cd, it will return an error then it won't change directories.
Try this. I created it for my own use.
cd ~
touch mycd
sudo chmod +x mycd
nano mycd
cd $( ./mycd search_directory target_directory )
if [ "$1" == '--help' ]
then
echo -e "usage: cd \$( ./mycd \$1 \$2 )"
echo -e "usage: cd \$( ./mycd search_directory target_directory )"
else
find "$1"/ -name "$2" -type d -exec echo {} \; -quit
fi
cd -- "$(sudo find / -type d -iname "dir name goes here" 2>/dev/null)"
Keep all the quotes. (All this does is send you to the directory you want; after that you can just put your commands after it.)
