How to see if the process was killed? - Linux

When you want to set a time limit for a process, you can simply use timeout before the process:
timeout 1.5s COMMAND
This will kill COMMAND if it has not finished after 1.5 seconds.
I used that command in some bash scripts. How can I know whether a process finished before the time limit, or whether it was killed because it exceeded the time limit?

The GNU timeout command normally returns a status code of 124 if the timeout was exceeded. Otherwise, it returns the status code returned by the command itself. So you can test the status code by grabbing the value of $? immediately after executing timeout:
timeout 1.5s COMMAND
status=$?
if ((status == 124)); then
    : # command timed out
elif ((status != 0)); then
    : # command terminated in time, but it returned an error status
else
    : # command terminated in time and reported success
fi
If your command might itself return the status code 124, then you would have to use the --preserve-status option and check whether the command was terminated by the signal you tell timeout to send. See man timeout for details.
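For example, a minimal sketch of the --preserve-status route, assuming COMMAND (a placeholder) does not block or catch SIGTERM:
timeout --preserve-status 1.5s COMMAND
status=$?
# with --preserve-status, a process killed by the default TERM signal
# exits with 128+15 = 143 rather than 124
if ((status == 143)); then
    echo "COMMAND was killed by SIGTERM, i.e. it timed out"
fi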

Add && echo >> time_limit.txt after COMMAND:
timeout 1.5s COMMAND && echo >> time_limit.txt
So, if you want to see whether COMMAND was killed, check for the existence of the file time_limit.txt. If that file exists, the command was NOT killed. Otherwise, the command was killed.
In a bash script, you can check for the existence of that file as follows:
if [[ -r time_limit.txt ]]; then
    echo "The command was NOT killed"
else
    echo "The command was killed"
fi
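Two caveats with this marker-file approach: the file has to be removed before each run, and echo only fires when COMMAND exits with status 0, so a command that fails in time is also counted as killed. A fuller sketch (COMMAND is a placeholder):
rm -f time_limit.txt
timeout 1.5s COMMAND && echo >> time_limit.txt
if [[ -r time_limit.txt ]]; then
    echo "The command finished in time and succeeded"
else
    echo "The command was killed (or failed within the time limit)"
fi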

Related

timeout in a shell script and reporting which inputs timed out

I would like to conduct an analysis using the program Arlsumstat_64bit with thousands of input files.
Arlsumstat_64bit reads input files (.arp) and writes a result file (sumstat.out).
Each input appends a new line to the result file (sumstat.out) based on the arguments "0 1".
Therefore, I wrote a shell script to process all the input files (*.arp) in the same folder.
However, if an input file contains an error, the shell script gets stuck without any subsequent processing. Therefore, I found the "timeout" command to deal with my issue.
I made a shell script as follows:
#!/bin/bash
for sp in $(ls *.arp) ;
do
    echo "process start: $sp"
    timeout 10 arlsumstat_64bit ${sp}.arp sumstat.out 1 0
    rm -r ${sp}.res
    echo "process done: $sp"
done
However, I still need to know which input files failed.
How can I make a list telling me which input files timed out?
See the man page for the timeout command: http://man7.org/linux/man-pages/man1/timeout.1.html
If the command times out, and --preserve-status is not set, then exit
with status 124. Otherwise, exit with the status of COMMAND. If no
signal is specified, send the TERM signal upon timeout. The TERM
signal kills any process that does not block or catch that signal.
It may be necessary to use the KILL (9) signal, since this signal
cannot be caught, in which case the exit status is 128+9 rather than
124.
You should find out which exit codes are possible for the program arlsumstat_64bit. I assume it exits with status 0 on success; otherwise the script below will not work. If you need to distinguish between a timeout and other errors, it must not use exit status 124, which is used by timeout to indicate a timeout. You can then check the exit status of your command to distinguish between success, error, or timeout as necessary.
To keep the script simple I assume you don't need to distinguish between a timeout and other errors.
I added some comments where I modified your script to improve it or to show alternatives.
#!/bin/bash
# don't parse the output of ls
for sp in *.arp
do
    echo "process start: $sp"
    # instead of using "if timeout 10 arlsumstat_64bit ..." you could also run
    # timeout 10 arlsumstat_64bit... and check the value of `$?` afterwards,
    # e.g. if you want to distinguish between error and timeout.
    # $sp will already contain .arp so ${sp}.arp is wrong
    # use quotes in case a file name contains spaces
    if timeout 10 arlsumstat_64bit "${sp}" sumstat.out 1 0
    then
        echo "process done: $sp"
    else
        echo "processing failed or timeout: $sp"
    fi
    # If the result for foo.arp is foo.res, the .arp must be removed
    # If it is foo.arp.res, rm -r "${sp}.res" would be correct
    # use quotes
    rm -r "${sp%.arp}.res"
done
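If you do need that distinction, and want the list of timed-out inputs the question asks for, here is a sketch along those lines (timeout.list is an assumed file name, and arlsumstat_64bit is assumed never to exit with 124 itself):
#!/bin/bash
> timeout.list            # start with an empty list of timed-out inputs
for sp in *.arp
do
    echo "process start: $sp"
    timeout 10 arlsumstat_64bit "$sp" sumstat.out 1 0
    status=$?
    if (( status == 124 )); then
        echo "$sp" >> timeout.list          # killed by timeout
    elif (( status != 0 )); then
        echo "processing failed: $sp" >&2   # failed within the time limit
    else
        echo "process done: $sp"
    fi
    rm -r "${sp%.arp}.res"
done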
The code below should work for you:
#!/bin/bash
for sp in *.arp
do
    echo "process start: $sp"
    timeout 10 arlsumstat_64bit "$sp" sumstat.out 1 0
    if [ $? -eq 0 ]
    then
        echo "process done successfully: $sp"
    else
        echo "process failed: $sp"
    fi
    echo "Deleting ${sp%.arp}.res"
    rm -r "${sp%.arp}.res"
done

The Linux timeout command and exit codes

In a Linux shell script I would like to use the timeout command to end another command if some time limit is reached. In general:
timeout -s SIGTERM 100 command
But I also want my shell script to exit when the command fails for some reason. If the command fails early enough, the time limit will not be reached, and timeout will exit with exit code 0. Thus the error cannot be trapped with trap or set -e; at least, I have tried it and it did not work. How can I achieve what I want to do?
Your situation isn't very clear because you haven't included your code in the post.
timeout does exit with the exit code of the command if it finishes before the timeout value.
For example:
timeout 5 ls -l non_existent_file
# prints: ls: cannot access non_existent_file: No such file or directory
echo $?
# outputs 2 (which is the exit code of ls)
From man timeout:
If the command times out, and --preserve-status is not set, then
exit with status 124. Otherwise, exit with the status of COMMAND. If
no signal is specified, send the TERM signal upon timeout. The TERM
signal kills any process that does not block or catch that signal.
It may be necessary to use the KILL (9) signal, since this signal
cannot be caught, in which case the exit status is 128+9 rather than
124.
See BashFAQ 105 (https://mywiki.wooledge.org/BashFAQ/105) to understand the pitfalls of set -e.
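Given that, instead of relying on set -e you can check timeout's exit status explicitly and exit on either failure or timeout; a minimal sketch (command stands in for your real command):
timeout -s SIGTERM 100 command
status=$?
if ((status == 124)); then
    echo "command timed out" >&2
    exit 1
elif ((status != 0)); then
    echo "command failed with status $status" >&2
    exit "$status"
fi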

Write a bash script to check if process is responding in x seconds?

How can I write a script to check whether a process takes more than some number of seconds to respond, and kill it if it exceeds that number?
I've tried the timeout command, but the problem is that it is a Source dedicated server, and when I edit its bash script:
HL=./srcds_linux
echo "Using default binary: $HL"
and change it to timeout 25 ./srcds_linux and run it as root, it won't run the server:
ERROR: Source Engine binary '' not found, exiting
So, assuming that I can't edit the server's bash script, is there a way to create a script that can check whether any program, not executed with the script, is timing out in x seconds?
It sounds like the problem is that you're modifying the script incorrectly.
If you're looking at this script, the logic basically goes:
HL=./srcds_linux
if ! test -f "$HL"
then
    echo "Command not found"
fi
$HL
It sounds like you're trying to set HL="timeout 25 ./srcds_linux". This will cause the file check to fail.
The somewhat more correct way is to change the invocation, not the file to invoke:
HL=./srcds_linux
if ! test -f "$HL"
then
    echo "Command not found"
fi
timeout 25 $HL
timeout kills the program if it takes too long, though. It doesn't care whether the program is responding to anything, just that it takes longer than 25 seconds doing it.
If the program appears to hang, you could e.g. check whether it stops outputting data for 25 seconds:
your_command_to_start_your_server | while read -t 25 foo; do echo "$foo"; done
echo "The command hasn't said anything for 25 seconds, killing it!"
pkill srcds_linux
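Note that the echo and pkill above also run when the server simply exits and the pipe hits EOF. If you need to tell those cases apart, bash's read returns a status greater than 128 when -t expires; a sketch under that assumption (srcds_linux taken from the question):
./srcds_linux | while true; do
    read -t 25 line
    status=$?
    if (( status == 0 )); then
        echo "$line"                  # the server is still talking
    elif (( status > 128 )); then
        echo "No output for 25 seconds, killing the server!" >&2
        pkill srcds_linux             # read timed out
        break
    else
        break                         # EOF: the server exited on its own
    fi
done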

Why does timeout not work within a bash script?

I am trying to kill a process if it runs for more than a few seconds.
The following works just fine when I run it in the terminal.
timeout 2 sleep 5
But when I have a script -
#!/bin/bash
timeout 2 sleep 5
it says
timeout: command not found
Why so? What is the workaround?
--EDIT--
On executing type timeout, it says -
timeout is a shell function
It seems your environment's $PATH variable does not include /usr/bin, or the timeout binary exists somewhere else.
So check the path of the timeout command using:
command -v timeout
and use the absolute path in your script.
Ex.
#!/bin/bash
/usr/bin/timeout 2 sleep 5
Update 1#
As per your update in the question, it is a function created in your shell. You can use the absolute path in your script as mentioned in the example above.
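Alternatively, since a shell function is shadowing the real binary here, you can bypass the function instead of hard-coding a path; a short sketch of two options:
# drop the shadowing shell function so PATH lookup finds the binary again
unset -f timeout
timeout 2 sleep 5
# or force execution of the external binary through env
env timeout 2 sleep 5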
Update 2#
The timeout command was added in coreutils version >= 8.12.197-032bb. If GNU timeout is not available, you can use expect instead (Mac OS X, BSD, etc. do not usually have GNU tools and utilities by default).
################################################################################
# Executes command with a timeout
# Params:
# $1 timeout in seconds
# $2 command
# Returns 1 if timed out, 0 otherwise
timeout() {
    local time=$1
    # start the command in a subshell to avoid problems with pipes
    # (spawn accepts one command)
    local command="/bin/sh -c \"$2\""
    expect -c "set echo \"-noecho\"; set timeout $time; spawn -noecho $command; expect timeout { exit 1 } eof { exit 0 }"
    local status=$?
    if [ $status -eq 1 ]; then
        echo "Timeout after ${time} seconds"
    fi
    return $status
}
Example:
timeout 10 "ls ${HOME}"

How to re-run the "curl" command automatically when an error occurs

Sometimes when I execute a bash script with the curl command to upload some files to my FTP server, it returns an error like:
56 response reading failed
and I have to find the offending line and re-run it manually, after which it is OK.
I'm wondering whether this could be re-run automatically when the error occurs.
My scripts is like this:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do curl -T $files ftp.myserver.com --user ID:pw ;
done
But sometimes A, B, C would be uploaded successfully and only D would be left with an "error 56", so I have to re-run the curl command manually. Besides, as Will Bickford said, I prefer that no confirmation be required, because I'm always asleep when the script is running. :)
Here's a bash snippet I use to perform exponential back-off:
# Retries a command a configurable number of times with backoff.
#
# The retry count is given by ATTEMPTS (default 5), the initial backoff
# timeout is given by TIMEOUT in seconds (default 1.)
#
# Successive backoffs double the timeout.
function with_backoff {
    local max_attempts=${ATTEMPTS-5}
    local timeout=${TIMEOUT-1}
    local attempt=1
    local exitCode=0
    while (( $attempt < $max_attempts ))
    do
        if "$@"
        then
            return 0
        else
            exitCode=$?
        fi
        echo "Failure! Retrying in $timeout.." 1>&2
        sleep $timeout
        attempt=$(( attempt + 1 ))
        timeout=$(( timeout * 2 ))
    done
    if [[ $exitCode != 0 ]]
    then
        echo "You've failed me for the last time! ($@)" 1>&2
    fi
    return $exitCode
}
Then use it in conjunction with any command that properly sets a failing exit code:
with_backoff curl 'http://monkeyfeathers.example.com/'
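Since the function reads ATTEMPTS and TIMEOUT from the environment, you can tune the retry budget per call; for example:
ATTEMPTS=8 TIMEOUT=2 with_backoff curl 'http://monkeyfeathers.example.com/'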
Perhaps this will help. It will try the command, and if it fails, it will tell you so and pause, giving you a chance to fix run-my-script.sh.
COMMAND=./run-my-script.sh
until $COMMAND; do
    read -p "command failed, fix and hit enter to try again."
done
I have faced a similar problem where I needed to contact servers, using curl, that were still in the process of starting up, or services that were temporarily unavailable for whatever reason. The scripting was getting out of hand, so I made a dedicated retry tool that retries a command until it succeeds:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do retry curl -f -T $files ftp.myserver.com --user ID:pw ;
done
The curl command has the -f option, which makes curl exit with code 22 when the request fails.
The retry tool will by default run the curl command over and over, forever, until the command returns status zero, backing off for 10 seconds between retries. In addition, retry reads from stdin once and only once, writes to stdout once and only once, and writes all stdout to stderr if the command fails.
Retry is available from here: https://github.com/minfrin/retry
