I'm writing a shell script that looks like this:
for i in $ACTIONS_DIR/*
do
if [ -x $i ]; then
exec $i nap
fi
done
Now, what I'm trying to achieve is to go through every file in $ACTIONS_DIR and execute it. Each file under $ACTIONS_DIR is another shell script.
The problem is that after using exec the script stops and doesn't go on to the next file in line. Any ideas why this might be?
exec replaces the shell process. Remove it if you only want to call the command as a subprocess instead.
exec transfers control of the PID over to the program you're exec'ing. This is mainly used in scripts whose sole purpose is to set up options to that program. Once the exec is hit, nothing below it in the script is executed.
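A quick way to see this (a throwaway demonstration, not part of the original script): anything after exec in the same shell simply never runs:

```shell
# `exec true` replaces this shell with `true`,
# so the second echo is never reached.
bash -c 'echo before; exec true; echo after'
```

Only `before` is printed; the replaced shell never comes back to execute `echo after`.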
Also, you should try some quoting techniques:
for i in "$ACTIONS_DIR"/*
do
if [ -x "$i" ]; then
"./$i" nap
fi
done
You might also look into using find for this operation:
find "$ACTIONS_DIR" \
-maxdepth 1 \
-type f \
-perm /111 \
-exec {} nap \;
exec never returns to the caller. Just try
if [ -x "${i}" ]
then
"${i}" nap
fi
I have many .sh scripts in a single folder and would like to run them one after another. A single script can be executed as:
bash wget-some_long_number.sh -H
Assume my directory is /dat/dat1/files
How can I run bash wget-some_long_number.sh -H one after another?
I understand something in these lines should work:
for i in *.sh;...do ....; done
Use this:
for f in *.sh; do
bash "$f"
done
If you want to stop the whole execution when a script fails:
for f in *.sh; do
bash "$f" || break # execute successfully or break
# Or more explicitly: if this execution fails, then stop the `for`:
# if ! bash "$f"; then break; fi
done
If you want to run, e.g., x1.sh, x2.sh, ..., x10.sh:
for i in $(seq 1 10); do
bash "x$i.sh"
done
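In bash, brace expansion generates the same sequence without spawning seq at all (the echo here is an illustrative stand-in for running the real scripts):

```shell
# Brace expansion is done by the shell itself, no external command needed.
for i in {1..10}; do
    echo "would run x$i.sh"   # replace echo with: bash "x$i.sh"
done
```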
To abort on the first failure and propagate the failed script's exit code (responding to @VespaQQ):
#!/bin/bash
set -e
for f in *.sh; do
bash "$f"
done
There is a much simpler way: you can use the run-parts command, which will execute all scripts in the folder:
run-parts /path/to/folder
Note that on Debian-based systems, run-parts skips file names containing dots (so *.sh files are silently ignored) unless you relax the name check with --regex. I ran into this problem where I couldn't use loops, and run-parts also works with cron.
foo () {
bash -H "$1"
#echo "$1"
#cat "$1"
}
cd /dat/dat1/files #change directory
export -f foo #export foo
parallel foo ::: *.sh #equivalent to putting a & in between each script
This uses GNU parallel to execute every script in the directory, with the added benefit that they run concurrently, so the whole batch finishes much faster. And it isn't limited to script execution: you can put any command in the function and it will work.
I need to check if a given string in a bash script is a command.
In other words: I need to check if that String is a filename in the /bin directory (Only /bin).
I tried
echo "Write a bash command: "
read -r var2
if [[ -z (find /bin -name $var2) ]]
then echo "That's not a command" && exit 1
fi
But it didn't work.
Ideas?
EDIT: Solved. As amdixon suggested I changed (find /bin -name $var2) for $(find /bin -name $var2).
Thanks dude.
Depending on your actual requirements, it can be easier than that:
if ! [ -x /bin/"$var2" ]
then
echo "That's not a command" && exit 1
fi
[ is short for the test command and, with the -x argument, it will return 0 (true) if the given file is executable by you. Note that this excludes commands that are executable only by other users, since you have insufficient permissions on them.
If you use the -f argument instead, [ will test for any file in the /bin directory, whether it is executable or not (of course usually all of them are):
if ! [ -f /bin/"$var2" ]
then
echo "That's not a command" && exit 1
fi
If you need to make sure that the file is executable (even if it may not be executable by you), see this question for a solution using file.
Type help test on the command line to read more about possible arguments for [.
echo "Write a bash command: "
read -r var2
if [ ! -f /bin/"$var2" ]
then echo "That's not a command" && exit 1
fi
is the natural way to do this in bash. [ expr ] is shorthand for the builtin test command. Type man builtins and scroll until you find test for a complete description of what test can do for you. For instance, if you prefer to test simultaneously whether the file exists and is executable, you can replace:
[ ! -f /bin/"$var2" ]
by:
[ ! -x /bin/"$var2" ]
I need to check if a given string in a bash script is a command.
vs
I need to check if that String is a filename in the /bin directory (Only /bin).
This is by no means the same. I guess you refer to well-known "shell commands", and in this case there are three reasons you might be on the wrong path:
The root hierarchy is for everything needed to boot up the system. This might include what you consider "commands", but other stuff as well (like e.g. lvm or cryptsetup)
For the same reason, binaries you would consider "commands" might be missing from /bin, living in /usr/bin instead.
Shells have "builtin" commands and there is no guarantee you will find them as separate binaries at all.
Given all that, if you still want to look for executables in /bin, there is really no reason to use find at all. test (or its bracket shorthand) is enough, e.g. if [ -x /bin/"$var2" ]; then ...
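Building on that last point: if the real question is "can I run this string as a command?", the shell can answer directly via the POSIX `command -v`, which also covers builtins and anything on PATH (a sketch; is_command is just an illustrative name):

```shell
# is_command NAME — succeed if NAME resolves to anything runnable:
# a binary on PATH, a shell builtin, a function, or an alias.
is_command() {
    command -v -- "$1" >/dev/null 2>&1
}

is_command ls && echo "ls is a command"
is_command cd && echo "cd is a builtin, but still a command"
is_command no_such_thing_xyz || echo "no_such_thing_xyz is not"
```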
I wrote a zsh function to help me do some grepping at my job.
function rgrep (){
if [ -n "$1" ] && [ -n "$2" ]
then
exec grep -rnw $1 -r $2
elif [ -n "$1" ]
then
exec grep -rnw $1 -r "./"
else
echo "please enter one or two args"
fi
}
Works great; however, after grep finishes executing I don't get thrown back into the shell. It just hangs at [process complete]. Any ideas?
I have the function in my .zshrc
In addition to getting rid of the unnecessary exec, you can remove the if statement as well.
function rgrep (){
grep -rwn "${1:?please enter one or two args}" "${2:-./}"
}
If $1 is not set (or null valued), an error will be raised and the given message displayed. If $2 is not set, a default value of ./ will be used in its place.
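The two expansions are easy to try in isolation (f is a throwaway function just to show the behavior):

```shell
# ${1:?msg} aborts with msg if $1 is unset or empty;
# ${2:-./} substitutes ./ when $2 is unset or empty.
f() { echo "pattern=${1:?please enter one or two args} dir=${2:-./}"; }

f TODO          # prints: pattern=TODO dir=./
f TODO src/     # prints: pattern=TODO dir=src/
```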
Do not use exec, as it replaces the existing shell.
exec [-cl] [-a name] [command [arguments]]
If command is supplied, it replaces the shell without creating a new process. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what the login program does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to command. If no command is specified, redirections may be used to affect the current shell environment. If there are no redirection errors, the return status is zero; otherwise the return status is non-zero.
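The last sentence of that excerpt describes a useful idiom in its own right: exec with only redirections rewires the current shell instead of replacing it. A small sketch (the log file name here is illustrative):

```shell
log=$(mktemp)   # scratch file for the demo
# With no command, exec just applies the redirections to the current
# shell: everything printed afterwards lands in the log file.
bash -c 'exec >"$1" 2>&1; echo "this line goes to the log, not the terminal"' _ "$log"
cat "$log"
```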
Try this instead:
rgrep ()
{
if [ -n "$1" ] && [ -n "$2" ]
then
grep -rnw "$1" -r "$2"
elif [ -n "$1" ]
then
grep -rnw "$1" -r "./"
else
echo "please enter one or two args"
fi
}
As a completely different approach, I like to build command shortcuts like this as minimal shell scripts, rather than functions (or aliases):
% echo 'grep -rwn "$@"' >rgrep
% chmod +x rgrep
% ./rgrep
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
%
(This relies on a traditional behavior of Unix: executable text files without #! lines are considered shell scripts and are executed by /bin/sh. If that doesn't work on your system, or you need to run specifically under zsh, use an appropriate #! line.)
One of the main benefits of this approach is that shell scripts in a directory in your PATH are full citizens of the environment, not local to the current shell like functions and aliases. This means they can be used in situations where only executable files are viable commands, such as xargs, sudo, or remote invocation via ssh.
This doesn't provide the ability to give default arguments (or not easily, anyway), but IMAO the benefits outweigh the drawbacks. (And in the specific case of defaulting grep to search PWD recursively, the real solution is to install ack.)
How can I list the path of the output of this script?
This is my command:
(ls -d */ ); echo -n $i; ls -R $i | grep "wp-config.php" ;
This is my current output:
/wp-config.php
It seems you want to find the path to a file called "wp-config.php".
Does the following help?
find "$PWD" -name 'wp-config.php'
Your script is kind of confusing: why doesn't ls -d */ show any output? What's the value of $i? Your problem in fact seems to be that ls -R lists the contents of all subdirectories but doesn't give you full paths for their contents.
Well, find is the best tool for that, but you can simulate it in this case via a script like this:
#!/bin/bash
searchFor=wp-config.php
startDir=${1:-.}
lsSubDir() {
local actDir="$1"
for entry in "$actDir"/*; do
if [ -d "$entry" ]; then
lsSubDir "$entry"
else
[ "$(basename "$entry")" = "$searchFor" ] && echo "$entry"
fi
done
}
lsSubDir "$startDir"
Save it in a file like findSimulator, make it executable, and call it with the directory to start searching from as a parameter.
Be warned: this script is not very efficient and may fail on large directory trees because of the recursion. I would strongly recommend the solution using find.
I'm creating a bash script that will run a process in the background, which creates a socket file. The socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't being created before trying to chmod the file.
Example source:
#!/bin/bash
# first create folder that will hold socket file
mkdir /tmp/myproc
# now run process in background that generates the socket file
node ../main.js &
# finally chmod the thing
chmod /tmp/myproc/*.sock
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busywait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist; so just loop on ls until it returns 0, and when it does you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
( while [ ! $(ls /tmp/myproc/*.sock) ]; do
echo -n "."
sleep 2
done ) 2> /dev/null
echo ". Found"
If this is something you need to do more than once, wrap it in a function; otherwise, as is, it should do what you need.
EDIT:
As pointed out in the comments, using ls like this is inferior to -e in the test, so the rewritten script below is to be preferred. (I have also corrected the shell invocation, as -n is not supported on all platforms in sh emulation mode.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
echo -n "."
sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
sleep 1
done
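The same polling wait can be wrapped in a reusable function with a timeout, so the script cannot hang forever if the socket never appears (wait_for_glob is an illustrative name; compgen -G is a bash builtin that tests whether a glob matches anything):

```shell
# wait_for_glob PATTERN [TIMEOUT_SECONDS]
# Poll once a second until PATTERN matches at least one file;
# return 1 if TIMEOUT_SECONDS elapse first.
wait_for_glob() {
    local pattern=$1 timeout=${2:-30} waited=0
    while ! compgen -G "$pattern" >/dev/null; do
        (( waited >= timeout )) && return 1
        sleep 1
        (( waited += 1 ))
    done
}

# usage: wait_for_glob '/tmp/myproc/*.sock' 30 && chmod 660 /tmp/myproc/*.sock
```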
If you set your umask (try umask 0) you may not have to chmod at all. If you still don't get the right permissions check if node has options to change that.
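To illustrate the umask suggestion (assuming GNU stat; the exact mode you end up with depends on what the creating process asks for):

```shell
# File creation mode = requested mode & ~umask.
# touch requests 0666, so with umask 0 the file comes out 0666.
tmp=$(mktemp -d)
( umask 0; touch "$tmp/demo" )
stat -c '%a' "$tmp/demo"      # 666 on GNU stat
rm -rf "$tmp"
```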