Why does timeout not work within a bash script? - linux

I am trying to kill a process if it runs for more than a few seconds.
The following works just fine when I run it in the terminal.
timeout 2 sleep 5
But when I have a script -
#!/bin/bash
timeout 2 sleep 5
it says
timeout: command not found
Why so? What is the workaround?
--EDIT--
On executing type timeout, it says -
timeout is a shell function

It seems your environment's $PATH variable does not include /usr/bin, or the timeout binary lives somewhere else.
So check the path of the timeout command using:
command -v timeout
and use the absolute path in your script.
Ex.
#!/bin/bash
/usr/bin/timeout 2 sleep 5
Update 1#
As per the update in your question, timeout is a function created in your interactive shell; functions are not inherited by a script unless you export them with export -f. You can use the absolute path of the binary in your script as mentioned in the example above.
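As a diagnostic sketch (my addition, not part of the original answer): in the interactive shell you can list everything named timeout that the shell knows about, and bypass the function if an external binary exists on $PATH:
type -a timeout            # shows the function and, if present, the external binary such as /usr/bin/timeout
command timeout 2 sleep 5  # "command" skips shell functions and runs the binary found on $PATH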
Update 2#
The timeout command was added to coreutils in version >= 8.12.197-032bb. If GNU timeout is not available you can use expect (Mac OS X, BSD, ... do not usually have GNU tools and utilities by default).
################################################################################
# Executes command with a timeout
# Params:
# $1 timeout in seconds
# $2 command
# Returns 1 if timed out, 0 otherwise
timeout() {
    time=$1
    # start the command in a subshell to avoid problems with pipes
    # (spawn accepts one command)
    command="/bin/sh -c \"$2\""
    expect -c "set echo \"-noecho\"; set timeout $time; spawn -noecho $command; expect timeout { exit 1 } eof { exit 0 }"
    if [ $? = 1 ] ; then
        echo "Timeout after ${time} seconds"
        return 1
    fi
}
Example:
timeout 10 "ls ${HOME}"
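A minimal usage sketch, assuming the function above (which returns 1 on timeout) is defined in the same script:
if ! timeout 10 "ls ${HOME}"; then
    echo "listing timed out" >&2
fi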

Related

Timeout in a shell script and report which inputs timed out

I would like to conduct an analysis using the program Arlsumstat_64bit with thousands of input files.
Arlsumstat_64bit reads an input file (.arp) and writes to a result file (sumstat.out).
Each input appends a new line to the result file (sumstat.out) based on the arguments "0 1".
Therefore, I wrote a shell script to process all the inputs (*.arp) in the same folder.
However, if an input file contains errors, the shell script gets stuck and nothing further is processed. Therefore, I found the "timeout" command to deal with this issue.
I made a shell script as follows:
#!/bin/bash
for sp in $(ls *.arp) ;
do
echo "process start: $sp"
timeout 10 arlsumstat_64bit ${sp}.arp sumstat.out 1 0
rm -r ${sp}.res
echo "process done: $sp"
done
However, I still need to know which input files failed.
How could I make a list telling me which input files timed out?
See the man page for the timeout command http://man7.org/linux/man-pages/man1/timeout.1.html
If the command times out, and --preserve-status is not set, then exit
with status 124. Otherwise, exit with the status of COMMAND. If no
signal is specified, send the TERM signal upon timeout. The TERM
signal kills any process that does not block or catch that signal.
It may be necessary to use the KILL (9) signal, since this signal
cannot be caught, in which case the exit status is 128+9 rather than
124.
You should find out which exit codes are possible for the program arlsumstat_64bit. I assume it exits with status 0 on success; otherwise the script below will not work. If you need to distinguish between a timeout and other errors, the program should not itself use exit status 124, which is what timeout uses to indicate a timeout. You can then check the exit status of your command to distinguish between success, error, and timeout as necessary.
To keep the script simple I assume you don't need to distinguish between timeouts and other errors.
I added some comments where I modified your script to improve it or to show alternatives.
#!/bin/bash
# don't parse the output of ls
for sp in *.arp
do
    echo "process start: $sp"
    # instead of using "if timeout 10 arlsumstat_64bit ..." you could also run
    # timeout 10 arlsumstat_64bit... and check the value of `$?` afterwards,
    # e.g. if you want to distinguish between error and timeout.
    # $sp will already contain .arp so ${sp}.arp is wrong
    # use quotes in case a file name contains spaces
    if timeout 10 arlsumstat_64bit "${sp}" sumstat.out 1 0
    then
        echo "process done: $sp"
    else
        echo "processing failed or timeout: $sp"
    fi
    # If the result for foo.arp is foo.res, the .arp must be removed
    # If it is foo.arp.res, rm -r "${sp}.res" would be correct
    # use quotes
    rm -r "${sp%.arp}.res"
done
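To directly answer the question of which inputs timed out, here is a sketch that checks exit status 124 explicitly and collects the names in a list file (timeouts.txt is just an example name, and this assumes arlsumstat_64bit never exits with 124 itself):
#!/bin/bash
> timeouts.txt   # start with an empty list of timed-out inputs
for sp in *.arp
do
    timeout 10 arlsumstat_64bit "$sp" sumstat.out 1 0
    status=$?
    if [ "$status" -eq 124 ]; then
        echo "$sp" >> timeouts.txt                 # hit the 10 second limit
    elif [ "$status" -ne 0 ]; then
        echo "failed (status $status): $sp" >&2    # some other error
    fi
done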
The code below should also work for you:
#!/bin/bash
for sp in *.arp
do
    echo "process start: $sp"
    timeout 10 arlsumstat_64bit "$sp" sumstat.out 1 0
    if [ $? -eq 0 ]
    then
        echo "process done successfully: $sp"
    else
        echo "process failed: $sp"
    fi
    echo "Deleting ${sp}.res"
    rm -r "${sp}.res"
done

How to see if the process was killed?

When you want to set a time limit for a process, you can simply use timeout before the process:
timeout 1.5s COMMAND
This will kill COMMAND if it has not finished after 1.5 seconds.
I use that command in some bash scripts; how can I know whether a process finished before the time limit, or whether it was killed because it exceeded the limit?
The GNU timeout command normally returns a status code of 124 if the timeout was exceeded. Otherwise, it returns the status code returned by the command itself. So you can test the status code by grabbing the value of $? immediately after executing timeout:
timeout 1.5s COMMAND
status=$?
if ((status == 124)); then
    : # command timed out
elif ((status != 0)); then
    : # command terminated in time, but it returned an error status
else
    : # command terminated in time and reported success
fi
If your command might itself return the status code 124, then you would have to use the --preserve-status option and check whether the command was terminated by the signal you tell timeout to send. See man timeout for details.
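A hedged sketch of that case: with --preserve-status, timeout exits with the command's own status, and a process killed by the default TERM signal normally reports 128+15 = 143 (this assumes COMMAND does not itself exit with 143):
timeout --preserve-status 1.5s COMMAND
status=$?
if ((status == 143)); then
    : # very likely killed by SIGTERM, i.e. the timeout fired
fi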
Add && echo >> time_limit.txt after COMMAND:
timeout 1.5s COMMAND && echo >> time_limit.txt
So, if you want to see if the COMMAND was killed, check the existence of file time_limit.txt. If that file exists, it means the command was NOT killed. Otherwise, the command was killed.
In a bash script, you can check the existence of that file as follows:
if [[ -r time_limit.txt ]]; then
    echo "The command was NOT killed"
else
    echo "The command was killed"
fi

Is it possible to set time out from bash script? [duplicate]

This question already has answers here:
How do I limit the running time of a BASH script
(5 answers)
Closed 7 years ago.
Sometimes my bash scripts hang without any clear reason, so they can effectively hang forever (the script process keeps running until I kill it).
Is it possible to build a timeout mechanism into the bash script so that it exits after, for example, half an hour?
This Bash-only approach encapsulates all the timeout code inside your script by running a function as a background job to enforce the timeout:
#!/bin/bash
Timeout=1800 # 30 minutes
function timeout_monitor() {
    sleep "$Timeout"
    kill "$1"
}
# start the timeout monitor in
# background and pass the PID:
timeout_monitor "$$" &
Timeout_monitor_pid=$!
# <your script here>
# kill timeout monitor when terminating:
kill "$Timeout_monitor_pid"
Note that the function will be executed in a separate process. Therefore the PID of the monitored process ($$) must be passed. I left out the usual parameter checking for the sake of brevity.
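If you do want the parameter checking, here is a small sketch of the same function with a guard (my addition, not part of the original snippet):
function timeout_monitor() {
    if [ $# -ne 1 ]; then
        echo "usage: timeout_monitor <pid>" >&2
        return 1
    fi
    sleep "$Timeout"
    kill "$1" 2>/dev/null   # the target may already have exited
}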
If you have GNU coreutils, you can use the timeout command:
timeout 1800s ./myscript
To check if the timeout occurred check the status code:
timeout 1800s ./myscript
if (($? == 124)); then
echo "./myscript timed out after 30 minutes" >>/path/to/logfile
exit 124
fi

How to re-run the "curl" command automatically when an error occurs

Sometimes when I execute a bash script with the curl command to upload some files to my ftp server, it will return some error like:
56 response reading failed
and I have to find the failed upload and re-run it manually, after which it works fine.
I'm wondering whether the command could be re-run automatically when the error occurs.
My script is like this:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do curl -T $files ftp.myserver.com --user ID:pw ;
done
But sometimes A, B and C are uploaded successfully and only D is left with an "error 56", so I have to rerun the curl command manually. Besides, as Will Bickford said, I prefer that no confirmation be required, because I'm always asleep at the time the script is running. :)
Here's a bash snippet I use to perform exponential back-off:
# Retries a command a configurable number of times with backoff.
#
# The retry count is given by ATTEMPTS (default 5), the initial backoff
# timeout is given by TIMEOUT in seconds (default 1.)
#
# Successive backoffs double the timeout.
function with_backoff {
    local max_attempts=${ATTEMPTS-5}
    local timeout=${TIMEOUT-1}
    local attempt=1
    local exitCode=0
    while (( $attempt < $max_attempts ))
    do
        if "$@"
        then
            return 0
        else
            exitCode=$?
        fi
        echo "Failure! Retrying in $timeout.." 1>&2
        sleep $timeout
        attempt=$(( attempt + 1 ))
        timeout=$(( timeout * 2 ))
    done
    if [[ $exitCode != 0 ]]
    then
        echo "You've failed me for the last time! ($@)" 1>&2
    fi
    return $exitCode
}
Then use it in conjunction with any command that properly sets a failing exit code:
with_backoff curl 'http://monkeyfeathers.example.com/'
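Since the function reads ATTEMPTS and TIMEOUT from the environment, you can override the defaults per call; the values here are arbitrary:
ATTEMPTS=8 TIMEOUT=2 with_backoff curl 'http://monkeyfeathers.example.com/'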
Perhaps this will help. It will try the command, and if it fails, it will tell you so and pause, giving you a chance to fix run-my-script.sh and try again:
COMMAND=./run-my-script.sh
until $COMMAND; do
    read -p "command failed, fix and hit enter to try again."
done
I have faced a similar problem where I needed to contact servers with curl while they were still starting up, or while the service was temporarily unavailable for whatever reason. The scripting was getting out of hand, so I made a dedicated retry tool that will retry a command until it succeeds:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do retry curl -f -T $files ftp.myserver.com --user ID:pw ;
done
The curl command has the -f (--fail) option, which makes curl fail with exit code 22 when the server reports an HTTP error, so that failures are visible in the exit status.
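As a small illustration of what the retry wrapper relies on ($file is just a placeholder here): a failed transfer leaves a non-zero exit status that can be inspected:
curl -f -T "$file" ftp.myserver.com --user ID:pw
status=$?
if [ $status -ne 0 ]; then
    echo "upload of $file failed with curl exit status $status" >&2
fi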
The retry tool will by default run the curl command over and over until it returns status zero, backing off for 10 seconds between retries. In addition, retry reads from stdin once and only once, writes to stdout once and only once, and writes all stdout to stderr if the command fails.
Retry is available from here: https://github.com/minfrin/retry

Scons command with time limit

I want to limit the execution time of a program I am running under Linux. I put in my scons script a line like:
Command("com​","",​"ulimit -t 1; myprogram")
and tested it with an infinite loop program: it did not work and the program ran forever.
Am I missing something?
ulimit -t 1 means that the limit is set to 1 second of CPU time. If your infinite loop program uses any sort of sleep in its inner loop then it will use practically no CPU time. This means it will not get killed in 1 second of real, on the clock time. In fact it may take minutes or hours to use up its 1 second allocation.
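A rough illustration of the difference (run these in throwaway subshells, since the first one deliberately pins a CPU):
( ulimit -t 1; while :; do :; done )        # busy loop: killed after roughly 1 second of CPU time
( ulimit -t 1; while :; do sleep 1; done )  # sleeping loop: uses almost no CPU, so it can keep running far longer than 1 second of wall-clock time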
What happens if you run the command outside of SCons? Perhaps you don't have permission to change the limit at all...
ulimit -t 1; ./myprogram
For example, it may say the following if the limit is already set to 0:
bash: ulimit: cpu time: cannot modify limit: Operation not permitted
Edit: it seems that the -t option is broken on Ubuntu 9.04. A fix has been committed 05 June 2009, but it may take a while to trickle into the updates - it may not be fixed until 9.10.
As an historical note, this problem no longer exists in Ubuntu 10.04.
You can also use this script:
(taken from http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.mac.system/2005-12/msg00247.html)
#!/bin/sh
# timeout script
#
usage()
{
    echo "usage: timeout seconds command args ..."
    exit 1
}
[ $# -lt 2 ] && usage
seconds=$1; shift
timeout()
{
    sleep $seconds
    kill -9 $pid >/dev/null 2>/dev/null
}
eval "$@" &
pid=$!
timeout &
wait $pid
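A hedged usage sketch, assuming the script above is saved as an executable file named timeout.sh (the program name and flag are placeholders):
./timeout.sh 5 ./myprogram --some-flag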
