I did a good bit of searching and testing on my own, but I can't seem to find the best way to achieve this. I would like a bash one-liner that finds a script on the machine, executes it, and lets me pass the switches or other information needed (in my case) to run it successfully.
To get a little more specific, I am in Kali Linux and I run the locate command like so:
locate pattern_create
which returns:
/usr/share/metasploit-framework/tools/pattern_create.rb
So I thought about piping this into xargs to run the script like so:
locate pattern_create | xargs ruby
but of course that gives me no way to specify the options needed to run the script successfully, which would be:
ruby /usr/share/metasploit-framework/tools/pattern_create.rb 2700
I came up with a workaround, but I feel it's somewhat sloppy and could be done more cleanly, and that's where I'm hoping for input/feedback.
I found out I can run:
pattern_create=$(locate pattern_create) && ruby $pattern_create 2700
to get exactly what I need, but then I'm dealing with shell variables, which I would not want a bunch of lingering around when doing this often. I was hoping to figure this out with xargs, or maybe an even cleaner way if possible. I know this can be done easily with find -exec, but that won't work in my case where I don't know where the script is stored.
Any help would be awesome, I appreciate everyone's time. Thank you.
You can do:
ruby $(locate pattern_create)
But be aware that if there are multiple lines returned by locate, then this may not do what you wanted.
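If you only ever want the first match, one hedged variant (assuming your locate supports a result limit, -l/--limit, as mlocate and plocate do) is:
ruby "$(locate -l 1 pattern_create)" 2700
Quoting the command substitution also keeps a path containing spaces in one piece.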
This is a dangerous thing to do as you do not know what locate will return and you could end up executing arbitrary scripts. I suggest that you use an intermediate script which will protect against the unexpected, such as finding no scripts or finding more than one.
#! /bin/sh
#
if [ $# -eq 0 ]
then
    echo >&2 "Usage: $0 script arguments"
    exit 1
fi
script=$(locate "$1")
numfound=$(locate "$1" | wc -l)
shift
if [ "$numfound" -eq 1 ]
then
    # Only run the script if exactly one match is found
    ruby "$script" "$@"
elif [ "$numfound" -eq 0 ]
then
    echo "No matching scripts found" >&2
    exit 1
else
    echo "Too many scripts found - $script" >&2
    exit 1
fi
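Saved as, say, runruby.sh (the name is just an example) and made executable, it could be used like:
chmod +x runruby.sh
./runruby.sh pattern_create 2700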
I have a folder of executable scripts, and some of them have Python shebangs, while others have Bash shebangs, etc. We have a cron job that runs this folder of scripts nightly, and the hope is that any error in any script will exit the job.
The scripts are run with something like: for FILE in $FILES; do ./$FILE; done
The scripts are provided by various people, and while the Python scripts always exit after an error, sometimes developers forget to add set -e in their Bash scripts.
I could have the for-loop use bash -e, but then I need to detect whether the current script is Bash/Python/etc.
I could set -e from the parent script, and then source scripts, but I still need to know which language each script is in, and I'd prefer them to run as subshells so script contributors don't have to worry about messing up the parent.
Grepping the shebangs is a small tweak, but knowing the flexibility of Bash, I'd be surprised if there weren't a way to "export" an option so that it affects all child scripts, in the same way you can export a variable. And there have been many cases in general where I've forgotten set -e, so it would be nice to know more options for fool-proofing things.
I see some options for inheriting -e for subshells involved in command substitution, but not in general.
Disclaimer: Never, ever do this! It's a huge disservice to everyone involved. You will introduce failures both in scripts with meticulous error handling, and in scripts without it.
Anyway, no one likes being told "don't do that" on Stack Overflow, so my suggestion would be to identify the scripts and invoke them with their shebang string plus -e:
for f in ./*
do
    # Determine if the script is a shell script
    if [[ $(file -i "$f") == *text/x-shellscript* ]]
    then
        # Read the first line
        read -r shebang < "$f"
        # The script shouldn't have been identified as a shell script without
        # a shebang, but check anyways
        if [[ $shebang != "#!"* ]]
        then
            echo "No idea what $f is" >&2
            continue
        fi
        # Strip off the #! and run it with -e and the file
        shebang=${shebang#??}
        $shebang -e "$f"
    else
        # It's some other kind of executable, just run it directly
        "$f"
    fi
done
Here's a script with correct error handling that now stops working: the forced -e makes it exit as soon as my-service start fails with status 127, before the legacy fallback can run.
#!/bin/bash
my-service start
ret=$?
if [ $ret -eq 127 ]
then
# Use legacy invocation instead
start-my-service
ret=$?
fi
exit "$ret"
Here's a script with no set -e of its own that now stops working: when grep finds no "ERROR" lines it returns a non-zero status, so the forced -e aborts the script before it can report that the run was successful.
#!/bin/sh
err=$(grep "ERROR" file.log)
if [ -z "$err" ]
then
echo "Run was successful"
exit 0
else
echo "Run failed: $err"
exit 1
fi
Two questions about the same thing I think...
Question one:
Is it possible to have a bash script run with default parameters/options? ...in the sense that if someone were to run the script:
./somescript.sh
it would actually run with ./somescript.sh | tee /tmp/build.txt?
Question two:
Would it also be possible to prepend the script with defaults? For example, if you were to run the script ./somescript.sh
it would actually run
script -q -c "./somescript.sh" /tmp/build.txt | aha > /tmp/build.html?
Any help or guidance is very much appreciated.
You need a wrapper script that handles all such scenarios for you.
For example, your wrapper script can take parameters that help you decide.
./wrapper_script.sh --input /tmp/build.txt --output /tmp/build.html
By default, --input and --output can fall back to whatever values you want when they are not given.
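A minimal sketch of such a wrapper, assuming the real script is ./somescript.sh and reusing the option names and default paths from above:
#!/bin/bash
# wrapper_script.sh - hypothetical wrapper with default input/output paths
input=/tmp/build.txt
output=/tmp/build.html

while [ $# -gt 0 ]; do
    case "$1" in
        --input)  input="$2";  shift ;;
        --output) output="$2"; shift ;;
    esac
    shift
done

# Record the run with script(1) and convert the captured log to HTML with aha
script -q -c "./somescript.sh" "$input" | aha > "$output"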
You can use the builtin $# to know how many arguments you have and take action based on that. If you want to do your second part, for example, you could do something like
if [[ $# -eq 0 ]]; then
    script -q -c "$0" /tmp/build.txt | aha > /tmp/build.html
    exit
fi
# do everything if you have at least one argument
Though this will still have problems if your script's path contains spaces (script re-parses the command string with a shell), so you're probably better off putting the real path to your script in the script command instead of $0.
You can also use exec instead of running the command and exiting, but make sure you have your quotes in the right place:
if [[ $# -eq 0 ]]; then
    # exec cannot replace the process with a whole pipeline, so hand the
    # pipeline to a shell and pass "$0" in as that shell's $0
    exec sh -c 'script -q -c "$0" /tmp/build.txt | aha > /tmp/build.html' "$0"
fi
# do everything when you have at least 1 argument
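One caveat with both versions: the re-invoked copy also starts with zero arguments, so without a guard it keeps wrapping itself forever. A minimal sketch that uses an environment variable as that guard (the name ALREADY_WRAPPED is made up):
if [[ $# -eq 0 && -z "$ALREADY_WRAPPED" ]]; then
    # script(1) passes the environment through, so the re-invoked copy
    # sees ALREADY_WRAPPED and skips this branch
    ALREADY_WRAPPED=1 script -q -c "$0" /tmp/build.txt | aha > /tmp/build.html
    exit
fi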
Hello, I'm writing a bash script which has some positional parameters, but what's the best approach to add an optional seconds parameter that will allow some function to run for x seconds?
This is what the code looks like:
doaction()
{
(run a process)
}
while [ $# -gt 0 ]; do
    case "$1" in
        --action|-a)
            doaction ;;
        --seconds|-s)
            ???????? $2
            shift ;;
    esac
    shift
done
After x seconds kill process.
Also, what happens when I run the script like
./script -s 10 -a
instead of
./script -a -s 10
Thanks
It looks like the timeout command is probably useful here. However, this only works on a script separate from the one that is currently running (as far as I can tell).
For your second question: the way you currently have things written, if you use ./script -a -s 10 the action runs before the timeout value has even been read. You can fix this by using a flag to indicate that the action should be executed, and ensuring the timeout (if any) is set before the action runs.
Here is my suggestion for a possible solution:
action=false
while [ $# -gt 0 ]; do
    case "$1" in
        --action|-a)
            action=true ;;
        --seconds|-s)
            time="$2"
            shift ;;
    esac
    shift
done

if [ "$action" = true ]; then
    # With GNU timeout, a duration of 0 disables the timeout, so -s stays optional
    timeout "${time:-0}" /path/to/action.sh
else
    : # do something else
fi
Where /path/to/action.sh is the location of the script that you want to run for a limited amount of time. You can test that it stops after the specified number of seconds by replacing the script with something that runs indefinitely, such as top.
You can use "getopts" to solve your problem. You might find the information on this link to be useful for your scenario.
Is there a way to execute a bash script when I click a program like NetBeans or Dropbox on Ubuntu, and to execute another bash script when I exit it?
My idea: create a bash script, started from a cron @reboot job, that checks every second whether the program appears in the current process list:
#!/bin/bash
NameOfprogram="NetBeans"
while true; do
    countOfprocess=$(ps -ef | grep "$NameOfprogram" | wc -l)
    if [[ $countOfprocess -gt 1 ]]; then
        : # execute bash script here
    fi
    sleep 1
done
But I think this idea is not the best. Is there a better way to achieve it?
A better approach is to wrap the executable in a script. That means you put a script with the same name as the program in your PATH (probably $HOME/bin, which usually comes before /usr/bin), and the shell will use that instead of the real executable.
Now you can execute the real program using:
/usr/bin/NetBeans "$@"
So to execute the real program, you just put the absolute path in front of its name. The odd-looking "$@" passes on any arguments someone might have given the script.
Put a loop around this:
while true; do
    /usr/bin/NetBeans "$@"
done
But there is a problem: You can't exit this program anymore. As soon as you try, it restarts. So if you just want a restart when it crashes:
while true; do
    /usr/bin/NetBeans "$@" && exit 0
done
As long as the program exits because of an error, it will be restarted. If you quit it, the script will stop.
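If the goal from the question is specifically to run one script when the program starts and another when it exits, the same wrapper idea could look something like this (the hook script names are just placeholders):
#!/bin/bash
# Hypothetical wrapper saved as ~/bin/NetBeans, assuming ~/bin precedes /usr/bin in PATH
"$HOME/bin/on-netbeans-start.sh"      # placeholder "on launch" script
/usr/bin/NetBeans "$@"
status=$?
"$HOME/bin/on-netbeans-exit.sh"       # placeholder "on exit" script
exit "$status"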
I need some help. I have a script, 'Script 1', which calls 'Script 2' to run in the background and check something periodically. But I want Script 2 to be started only once, even if Script 1 is called multiple times. Is there a way to do that?
It would be even more helpful if someone could suggest some commands to achieve this.
Thanks in advance
Sure, you can put something like this at the top of Script2:
if [[ -f /tmp/Script2HasRun ]] ; then
exit
fi
touch /tmp/Script2HasRun
That will stop Script2 from ever running again by using a sentinel file, unless the file is deleted of course, and it probably will be at some point since it's in /tmp.
So you probably want to put it somewhere else where it can be better protected.
If you don't want to stop it from ever running again, you need some mechanism to delete the sentinel file.
For example, if your intent is to only have one copy running at a time:
if [[ -f /tmp/Script2IsRunning ]] ; then
exit
fi
touch /tmp/Script2IsRunning
# Do whatever you have to do.
rm -f /tmp/Script2IsRunning
And keep in mind there's a race condition in there that could result in two copies running. There are ways to mitigate that as well, by checking the file's content in addition to its existence, something like:
if [[ -f /tmp/Script2IsRunning ]] ; then
exit
fi
echo $$ >/tmp/Script2IsRunning
sleep 1
if [[ "$(cat /tmp/Script2IsRunning 2>/dev/null)" != $$ ]] ; then
exit
fi
# Do whatever you have to do.
rm -f /tmp/Script2IsRunning
There are more levels of protection beyond that, but they get complex, and I usually find the above suffices for most things.
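For completeness, the heavier protection alluded to above usually ends up at a kernel-arbitrated lock. Here is a minimal sketch using flock(1) from util-linux, assuming it is installed (the lock file path is arbitrary):
#!/bin/bash
# Open (or create) the lock file on file descriptor 9
exec 9>/tmp/Script2.lock
# Try to take an exclusive lock without blocking; give up if another copy holds it
if ! flock -n 9; then
    exit 0
fi
# Do whatever you have to do; the lock is released automatically when the script exits.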