Add seconds option to positional parameters in bash - linux

Hello, I'm writing a bash script which has some positional parameters, but what's the best approach to add an optional seconds parameter which will allow some function to run for x seconds?
This is what the code looks like:
doaction()
{
    (run a process)
}

while [ $# -gt 0 ]; do
    case "$1" in
        --action|-a)
            doaction ;;
        --seconds|-s)
            ???????? $2
            shift ;;
    esac
    shift
done
After x seconds the process should be killed.
Also, what happens when I run the script like
./script -s 10 -a
instead of
./script -a -s 10
Thanks

It looks like the timeout command is probably useful here. However, this only works on a script separate from the one that is currently running (as far as I can tell).
For your second question: the way you currently have things written, ./script -a -s 10 would run the action before the timeout is set. You can fix this by using a flag to indicate that the action should be executed, and by making sure the timeout (if given at all) is read before the action runs.
Here is my suggestion for a possible solution:
# Default values: no action requested, and 0 disables GNU timeout's limit
action=false
time=0

while [ $# -gt 0 ]; do
    case "$1" in
        --action|-a)
            action=true ;;
        --seconds|-s)
            time="$2"
            shift ;;
    esac
    shift
done

if [ "$action" = true ]; then
    timeout "$time" /path/to/action.sh
else
    :   # do something else
fi
Where /path/to/action.sh is the location of the script that you want to run for a specific amount of time. You can test that the script exits after the specified number of seconds by replacing it with a command such as top, or anything else that runs indefinitely.
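If you would rather keep everything inside the one script (since, as noted, timeout wants a separate command to supervise), a rough sketch of doing the kill by hand might look like this; it reuses doaction from the question and the time variable from the loop above, and is only meant as a starting point:

doaction &                      # start the long-running work in the background
pid=$!                          # remember its process id

if [ -n "$time" ]; then
    sleep "$time"               # wait the requested number of seconds
    kill "$pid" 2>/dev/null     # then stop the process if it is still running
fi

wait "$pid" 2>/dev/null         # collect its exit status either way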

You can use "getopts" to solve your problem. You might find the information on this link to be useful for your scenario.

Related

Run bash script with defaults to piped commands set within the script

Two questions about the same thing I think...
Question one:
Is it possible to have a bash script run with default parameters/options? ...in the sense that if someone were to run the script:
./somescript.sh
it would actually run ./somescript.sh | tee /tmp/build.txt?
Question two:
Would it also be possible to prepend the script with defaults? For example, if you were to run the script ./somescript.sh
it would actually run
script -q -c "./somescript.sh" /tmp/build.txt | aha > /tmp/build.html?
Any help or guidance is very much appreciated.
You need a wrapper script that handles all such scenarios for you.
For example, your wrapper script can take parameters that help you decide:
./wrapper_script.sh --input /tmp/build.txt --output /tmp/build.html
By default, --input and --output can be set to the values you want when they are empty.
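A minimal sketch of such a wrapper, assuming the file names and the script/aha pipeline from the question (somescript.sh is assumed to sit next to the wrapper), might be:

#!/bin/bash

# Defaults used when --input / --output are not given
input=/tmp/build.txt
output=/tmp/build.html

while [ $# -gt 0 ]; do
    case "$1" in
        --input)  input="$2";  shift ;;
        --output) output="$2"; shift ;;
    esac
    shift
done

# Record somescript.sh's output with script, then convert it to HTML with aha
script -q -c "./somescript.sh" "$input" | aha > "$output"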
You can use the builtin $# to know how many arguments you have and take action based on that. If you want to do your second part, for example, you could do something like
if [[ $# -eq 0 ]]; then
    script -q -c "$0 /tmp/build.txt" | aha > /tmp/build.html
    exit
fi
# do everything if you have at least one argument
Though this will have problems if your script/path has spaces, so you're probably better off putting the real path to your script in the script command instead of $0.
You can also use exec instead of running the command and exiting, but make sure you have your quotes in the right place:
if [[ $# -eq 0 ]]; then
    exec script -q -c "$0 /tmp/build.txt | aha > /tmp/build.html"
fi
# do everything when you have at least 1 argument

How to run a bash script when a program opens in Linux

Is there a way to execute a bash script when I open a program like NetBeans or Dropbox on Ubuntu,
and execute another bash script when I exit it?
My idea is to create a bash script, started from a cron job with @reboot, that checks every second whether the program appears in the current processes:
#!/bin/bash
NameOfprogram="NetBeans"
while [[ true ]]; do
    countOfprocess=$(ps -ef | grep $NameOfprogram | wc -l)
    if [[ $countOfprocess -gt 1 ]]; then
        #execute bash
    fi
    sleep 1
done
But I don't think this idea is the best. Is there a better way to achieve it?
A better approach is to wrap the executable in a script. That means you put a script with the name of the program in a directory that comes earlier in your PATH (probably $HOME/bin), and Linux will use that instead of the real executable.
Now you can execute the real program using:
/usr/bin/NetBeans "$@"
So to execute the real executable, you just put the absolute path in front of the name. The odd-looking "$@" is there to pass on any arguments someone might have given the script.
Put a loop around this:
while [[ true ]]; do
    /usr/bin/NetBeans "$@"
done
But there is a problem: You can't exit this program anymore. As soon as you try, it restarts. So if you just want a restart when it crashes:
while [[ true ]]; do
    /usr/bin/NetBeans "$@" && exit 0
done
As long as the program exits because of an error, it will be restarted. If you quit it, the script will stop.
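To tie this back to the original question (run one script when the program is opened and another when it is closed), the wrapper can simply call those scripts around the real executable. Here on-open.sh and on-exit.sh are hypothetical names for your two hook scripts:

#!/bin/bash
# Wrapper named "NetBeans", placed in a PATH directory that precedes /usr/bin

"$HOME/bin/on-open.sh"          # hypothetical hook: runs when the program starts

/usr/bin/NetBeans "$@"          # run the real program, passing arguments through
status=$?

"$HOME/bin/on-exit.sh"          # hypothetical hook: runs after the program exits

exit "$status"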

check if a file exists and make an sh script wait until the file is found

I'm working on a Linux server which is sometimes very slow. So when I add some jobs to run for me, I have to wait a few hours just to run a simple calculation.
I was wondering if I am able to start the next analysis, but let it wait until the output of the previous analysis is there (the second analysis needs the first analysis's output).
I tried to get expect and other options working, but still no success (I found expect and other options in a previous question on Stack Overflow):
expect {
'output/analysis_file1.txt'
}
Any ideas/hints are appreciated and will help me a lot.
The only thing I want is to let the second script wait until the text file of the first script exists.
The 4 scripts: 1.
#!/bin/bash
#$ -cwd
./script1.sh
. ./script2.sh $repla
. ./script3.sh $replac
2:
repla=''
for i in 'abcdefghijklmnopqrst'
do
repla=`echo $i | sed 's/'abc'/'xyz'/g'`
#echo $repla
done
3:
replac=''
for j in $1
do
replac=`echo $j | sed 's/'xyz'/'san'/g'`
#echo $replac
done
4:
replace=''
for h in $1
do
replace=`echo $h | sed 's/'san'/'sander'/g'`
#echo $replace
done
You can use the code below with some modifications:
#!/bin/bash
while [ ! -f FILE_NAME ]
do
sleep SOME_SECONDS
done
echo "file found"
You can use wait if you know the PID of the process running in the background; wait also returns the exit code of the process it waits for.
firstProcess &   # Run in the background
firstPid=$!
otherProcess     # Runs concurrently with firstProcess
wait $firstPid   # Wait for firstProcess to finish
anotherProcess
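Applied to the scripts in your question, that might look like this (assuming script1.sh is the job that produces the output the later steps need):

./script1.sh &                  # run the first analysis in the background
firstPid=$!

# ... any work that does not depend on script1's output can go here ...

wait "$firstPid"                # block until the first analysis is done
. ./script2.sh "$repla"         # then continue, as in your master script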
Instead of executing multiple scripts independently you should create a master script runner like this:
#!/bin/bash
# sanity checks & parse arguments
./script1
ret=$?
# check for return value of script1 using $ret variable
./script2
ret=$?
# check for return value of script2 using $ret variable
./script3
ret=$?
# check for return value of script3 using $ret variable
...
# do cleanup and reporting
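If all you need is "stop at the first failure", a more compact variant of the same runner is to let bash abort as soon as any script returns a non-zero status:

#!/bin/bash
set -e          # exit immediately if any command below fails

./script1
./script2
./script3

# cleanup and reporting only run if all three scripts succeeded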

Running a command only once from a script, even if the script is executed multiple times

I need some help. I have a script, 'Script 1', which calls 'Script 2' to run in the background, where it periodically checks something. But I want Script 2 to be started only once, even if Script 1 is called multiple times. Is there a way to do it?
It would be even more helpful if someone could suggest some commands to achieve this.
Thanks in advance
Sure, you can put something like this at the top of Script2:
if [[ -f /tmp/Script2HasRun ]] ; then
    exit
fi
touch /tmp/Script2HasRun
That will stop Script2 from ever running again by using a sentinel file, unless the file is deleted of course, and it probably will be at some point since it's in /tmp.
So you probably want to put it somewhere else where it can be better protected.
If you don't want to stop it from ever running again, you need some mechanism to delete the sentinel file.
For example, if your intent is to only have one copy running at a time:
if [[ -f /tmp/Script2IsRunning ]] ; then
    exit
fi
touch /tmp/Script2IsRunning
# Do whatever you have to do.
rm -f /tmp/Script2IsRunning
And keep in mind there's a race condition in there that could result in two copies running. There are ways to mitigate that as well by using the content as well as the existence, something like:
if [[ -f /tmp/Script2IsRunning ]] ; then
    exit
fi
echo $$ >/tmp/Script2IsRunning
sleep 1
if [[ "$(cat /tmp/Script2IsRunning 2>/dev/null)" != $$ ]] ; then
    exit
fi
# Do whatever you have to do.
rm -f /tmp/Script2IsRunning
There are more levels of protection beyond that but they become complex, and I usually find that suffices for most things.
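For completeness, one of those stronger mechanisms is flock(1), which asks the kernel to hand an exclusive lock to exactly one process; a sketch reusing the same path:

exec 9> /tmp/Script2IsRunning           # open (and create) the lock file on fd 9
if ! flock -n 9; then                   # try to take an exclusive lock without waiting
    exit                                # another copy already holds the lock
fi

# Do whatever you have to do; the lock is released automatically when the script exits.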

Linux Single Instance Kill if running too long

I am using the following (taken from "How do I daemonize an arbitrary script in unix?") to keep a single instance of a script running on my server. I have a cron job that runs it every minute.
#!/bin/bash
if [[ $# < 1 ]]; then
    echo "Name of pid file not given."
    exit
fi

# Get the pid file's name.
PIDFILE=$1
shift

if [[ $# < 1 ]]; then
    echo "No command given."
    exit
fi

echo "Checking pid in file $PIDFILE."

# Check to see if process running.
PID=$(cat $PIDFILE 2>/dev/null)
if [[ $? = 0 ]]; then
    ps -p $PID >/dev/null 2>&1
    if [[ $? = 0 ]]; then
        echo "Command $1 already running."
        exit
    fi
fi

# Write our pid to file.
echo $$ >$PIDFILE

# Get command.
COMMAND=$1
shift

# Run command
$COMMAND "$*"
Now I found out that my script had hung for some reason and therefore it was stuck. I'd like a way to check if the $PIDFILE is "old" and if so, kill the process. I know that's possible (check the timestamp on the file) but I don't know the syntax or if this is even a good idea. Also, when this script is running, the CPU should be pretty heavily used. If it hangs (rare but it happened at least once so far), the CPU usage drops to 0%. It would be nice if I could check that the process is really hung/not active, but I don't know if there's an easy way to do that (and I don't want to have many false positives where it gets killed but it's running fine).
To answer the question in your title, which seems quite different from your problem, use timeout.
Now, for your problem, I don't see where it could hang, unless you gave it a FIFO (named pipe) as the pid file. To run and respawn the command, you can just run this script once, on startup:
#!/bin/bash
while /bin/true; do
    "$@"
    wait
done
Which brings up another bug in the code you got from the other question: "$*" will pass all the arguments to the script as a single argument; without the quotes it will split arguments on white space. "$@" will pass them individually and handle white space properly.
Call with /path/to/script command [argument]....
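If you still want the check your question actually asks for, killing the job when the pid file looks stale, a rough sketch (the 60-minute threshold is an arbitrary example) could be added to the original script before its "already running" exit:

# If the pid file is older than 60 minutes, assume the job is hung and kill it
if [[ -f $PIDFILE && -n $(find "$PIDFILE" -mmin +60) ]]; then
    OLDPID=$(cat "$PIDFILE")
    echo "Pid file $PIDFILE is stale; killing $OLDPID."
    kill "$OLDPID" 2>/dev/null
    rm -f "$PIDFILE"
fi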
