Is it possible to set a timeout from a bash script? [duplicate] - linux

Sometimes my bash scripts hang and stall for no clear reason, so they can actually hang forever (the script process runs until I kill it).
Is it possible to build a timeout mechanism into the bash script so that the program exits after, for example, half an hour?

This Bash-only approach encapsulates all the timeout code inside your script by running a function as a background job to enforce the timeout:
#!/bin/bash
Timeout=1800  # 30 minutes

timeout_monitor() {
    sleep "$Timeout"
    kill "$1"
}

# start the timeout monitor in the
# background and pass it our PID:
timeout_monitor "$$" &
Timeout_monitor_pid=$!

# <your script here>

# kill the timeout monitor when terminating:
kill "$Timeout_monitor_pid"
Note that the function will be executed in a separate process. Therefore the PID of the monitored process ($$) must be passed. I left out the usual parameter checking for the sake of brevity.
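A refinement worth sketching (my suggestion, not part of the answer above): register the cleanup with an EXIT trap, so the monitor is also killed if the script exits early or aborts on an error:

#!/bin/bash
Timeout=1800  # 30 minutes

timeout_monitor() {
    sleep "$Timeout"
    kill "$1"
}

timeout_monitor "$$" &
Timeout_monitor_pid=$!

# The EXIT trap fires on any termination of the script,
# so the monitor never outlives it:
trap 'kill "$Timeout_monitor_pid" 2>/dev/null' EXIT

# <your script here>  (no explicit kill needed at the end)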

If you have GNU coreutils, you can use the timeout command:
timeout 1800s ./myscript
To check if the timeout occurred check the status code:
timeout 1800s ./myscript
if (($? == 124)); then
    echo "./myscript timed out after 30 minutes" >>/path/to/logfile
    exit 124
fi
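If the script might ignore the default SIGTERM, GNU timeout can follow up with SIGKILL via --kill-after. A short sketch (the 30-second grace period is an arbitrary choice):

# TERM after 30 minutes; KILL 30 seconds later if still alive
timeout --kill-after=30 1800s ./myscript
case $? in
    124) echo "./myscript timed out (TERM)" ;;
    137) echo "./myscript ignored TERM and was KILLed" ;;  # 128 + 9
esac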

Related

shell script put result of curl in variable followed by sleep command [duplicate]

I want to trigger curl requests every 400 ms in a shell script, put each result in a variable, and after the requests finish (e.g. 10 requests) write all the results to a file. I use the following code for this purpose:
result="$(curl --location --request GET 'http://localhost:8087/say-hello')" & sleep 0.400;
Because & creates a new process, the result never reaches the variable. So what should I do?
You can use the -m curl option instead of the sleep.
-m, --max-time <seconds>
    Maximum time in seconds that you allow the whole operation to
    take. This is useful for preventing your batch jobs from
    hanging for hours due to slow networks or links going down.
    See also the --connect-timeout option.
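Applied to the request from the question, a sketch (the 2-second budget is an assumption):

# Fail the request if the whole transfer takes longer than 2 seconds
result=$(curl -m 2 --location --request GET 'http://localhost:8087/say-hello')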
The difference can be seen in the following sequence of commands:
a=1; a=$(echo 2) ; sleep 1; echo $a
2
and with a background process
a=1; a=$(echo 2) & sleep 1; echo $a
[1] 973
[1]+ Done a=$(echo 2)
1
Why is a not changed in the second case?
Actually it is changed... in a new environment. The & creates a new process with its own copy of a, and that copy is assigned the value 2. When the process finishes, the subprocess's a is gone and you only see the original value of a.
Depending on your requirements, you might want to make a result directory, have every background curl process write to a different temporary file, wait with wait until all the curls have finished, and then collect your results.
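A minimal sketch of that approach (the directory and file names are mine, not from the question):

#!/bin/bash
resultdir=$(mktemp -d)

for i in $(seq 1 10); do
    # Each background curl writes to its own file, so no variable
    # assignment gets lost in a subshell
    curl --silent --location 'http://localhost:8087/say-hello' \
        >"$resultdir/result_$i" &
    sleep 0.4
done

wait  # block until every background curl has finished
cat "$resultdir"/result_* >all_results.txt
rm -r "$resultdir"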

bash - close script by error or by timeout [duplicate]

There are many solutions on Stack Overflow for closing a script on timeout, and for closing a script on error.
But how do you get both behaviors together?
If an error occurs during execution of the script, close the script.
If the timeout runs out, close the script.
I have the following code:
#!/usr/bin/env bash
set -e
finish_time=$1
echo "finish_time=" ${finish_time}
(./execute_something.sh) & pid=$!
sleep ${finish_time}
kill $pid
But if an error occurs during execution, the script still waits until the timeout runs out.
First, I won't use set -e.
You'll explicitly wait on the job you want; the exit status of wait will be the exit status of the job itself.
echo "finish_time = $1"
./execute_something.sh & pid=$!
sleep "$1" & sleep_pid=$!
wait -n # Waits for either the sleep or the script to finish
rv=$?
if kill -0 $pid; then
# Script still running, kill it
# and exit
kill -s ALRM $pid
wait $pid # exit status will indicte it was killed by SIGALRM
exit
else
# Script exited before sleep
kill $sleep_pid
exit $rv
fi
There is a slight race condition here; it goes as follows:
wait -n returns after the sleep exits, i.e. the timeout has expired
The script then exits before we can check whether it is still running
As a result, we assume it actually exited before the sleep
But that just means we'll treat a script that ran slightly over the threshold as having finished on time. That's probably not a distinction you care about.
Ideally, wait would set some shell parameter that indicates which process caused it to return.
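Newer bash versions provide exactly that: since bash 5.1, wait -n -p VAR stores the ID of the job whose status is returned. A sketch of the same logic without the race, assuming bash >= 5.1:

./execute_something.sh & pid=$!
sleep "$1" & sleep_pid=$!

# -p stores the PID of whichever job finished first
wait -n -p finished_pid "$pid" "$sleep_pid"
rv=$?

if [[ $finished_pid == "$pid" ]]; then
    kill "$sleep_pid"     # the script won the race; stop the timer
    exit "$rv"
else
    kill -s ALRM "$pid"   # the timer won; kill the script
    wait "$pid"
    exit
fi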

Delaying not preventing Bash function from simultaneous execution

I need to prevent simultaneous calls to the highCpuFunction function. I have tried to create a blocking mechanism, but it is not working. How can I do this?
nameOftheScript="$(basename $0)"
pidOftheScript="$$"

highCpuFunction()
{
    # Function with code causing high CPU usage. Like tar, zip, etc.
    while [ -f /tmp/"$nameOftheScript"* ];
    do
        sleep 5;
    done
    touch /tmp/"$nameOftheScript"_"$pidOftheScript"
    echo "$(date +%s) I am a bad function; you do not want to call me simultaneously..."
    # Real high CPU usage code for reaching the database and
    # parsing logs. It takes the heck out of the CPU.
    rm -rf /tmp/"$nameOftheScript"_"$pidOftheScript" 2>/dev/null
}

while true
do
    sleep 2
    highCpuFunction
done
# The rest of the code...
In short, I want highCpuFunction to run with a gap of at least 5 seconds, regardless of the instance/user/terminal. I need to allow other users to run this function, but in proper sequence and with a gap of at least 5 seconds.
Use the flock tool. Consider this code (let's call it 'onlyoneofme.sh'):
#!/bin/sh
exec 9>/var/lock/myexclusivelock
flock 9
echo start
sleep 10
echo stop
It will open file /var/lock/myexclusivelock as descriptor 9 and then try to lock it exclusively. Only one instance of the script will be allowed to pass behind the flock 9 command. The rest of them will wait for the other script to finish (so the descriptor will be closed and the lock freed). After this, the next script will acquire the lock and execute, and so on.
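If callers should give up instead of queueing behind the lock, flock's -n flag makes the attempt non-blocking. A sketch of that variant:

#!/bin/sh
exec 9>/var/lock/myexclusivelock
if ! flock -n 9; then
    echo "another instance holds the lock, exiting" >&2
    exit 1
fi
echo start
sleep 10
echo stop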
In the following solution, the # rest of the script part can be executed by only one process at a time. The test-and-set is atomic, so there is no race condition, whereas with test -f file ... touch file, two processes could both touch the file.
try_acquire_lock() {
    local lock_file=$1
    # noclobber makes the redirection fail if the file already exists;
    # run it in a subshell to avoid modifying the current shell's options
    ( set -o noclobber; : >"$lock_file" ) 2>/dev/null
}

# The lock file name must be shared by all instances (no PID in it),
# otherwise every process would lock a different file:
lock_file=/tmp/"$nameOftheScript".lock
while ! try_acquire_lock "$lock_file"
do
    echo "failed to acquire lock, sleeping 5sec.."
    sleep 5
done
# Remove the lock file when the process exits
trap 'rm -f "$lock_file"' EXIT
# The rest of the script
It's not optimal, because the waiting is done in a polling loop with sleep. To improve on this, one can use inter-process communication (a FIFO), or operating-system notifications or signals.
# Block current shell process
kill -STOP $BASHPID
# Unblock blocked shell process (where <pid> is the id of the blocked process)
kill -CONT <pid>
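For the FIFO route mentioned above, a rough sketch (the FIFO path is hypothetical): the waiter blocks on a read instead of polling, and the lock holder writes one line when it is done:

fifo=/tmp/highcpu.fifo
[ -p "$fifo" ] || mkfifo "$fifo"

# Holder side: signal completion. Backgrounded, because opening a
# FIFO for writing blocks until some reader opens the other end.
echo done >"$fifo" &

# Waiter side: block until the holder writes, no sleep loop needed
read -r _ <"$fifo"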

Shell infinite loop to execute at specific time [duplicate]

I have access to a Linux CentOS box (I can't use crontab, sadly).
When I need to run a task every day, I have just created an infinite loop with a sleep (it runs, sleeps ~24 hours, and then runs again):
#!/bin/sh
while :
do
    /home/sas_api_emailer.sh |& tee first_sas_api
    sleep 1438m
done
Recently I got a task that I need to run at a specific time every day, 6:00 am (and I can't use crontab).
How can I create an infinite loop that will only execute at 6:00 am?
Check the time in the loop, and then sleep for a minute if it's not the time you want.
while :
do
    if [ $(date '+%H%M') = '0600' ]
    then
        /home/sas_api_emailer.sh |& tee first_sas_api
    fi
    sleep 60
done
You have (at least!) three choices:
cron
This is hands-down the best choice. Unfortunately, you say it's not an option for you. Drag :(
at
at and batch read commands from standard input or a specified file
which are to be executed at a later time.
For example: at -f myjob noon
Here is more information about at: http://www.thegeekstuff.com/2010/06/at-atq-atrm-batch-command-examples/
Write a "polling" or "while loop" script. For example:
while true
# Compute wait time
sleep wait_time
# do something
done
Here are some good ideas for "compute wait time": Bash: Sleep until a specific time/date
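For this particular case, a sketch of the "compute wait time" idea with GNU date (assumed to be available on CentOS):

#!/bin/bash
while true
do
    now=$(date +%s)
    next=$(date -d '06:00' +%s)
    # If 06:00 today has already passed, target 06:00 tomorrow
    if (( next <= now )); then
        next=$(date -d 'tomorrow 06:00' +%s)
    fi
    sleep $(( next - now ))
    /home/sas_api_emailer.sh |& tee first_sas_api
done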

Write a bash script to check if process is responding in x seconds?

How can I write a script to check whether a process takes more than a given number of seconds to respond, and kill it if it does?
I've tried the timeout command, but the problem is that it's a Source dedicated server, and when I edit its bash script:
HL=./srcds_linux
echo "Using default binary: $HL"
and change that to timeout 25 ./srcds_linux and run it as root, it won't start the server:
ERROR: Source Engine binary '' not found, exiting
So, assuming I can't edit the server's bash script, is there a way to create a script that can check whether any program, not executed with the script, is timing out in x seconds?
It sounds like the problem is that you're changing the script wrong.
If you're looking at this script, the logic basically goes:
HL=./srcds_linux
if ! test -f "$HL"
then
    echo "Command not found"
fi
$HL
It sounds like you're trying to set HL="timeout 25 ./srcds_linux". This will cause the file check to fail.
The somewhat more correct way is to change the invocation, not the file to invoke:
HL=./srcds_linux
if ! test -f "$HL"
then
    echo "Command not found"
fi
timeout 25 $HL
timeout kills the program if it takes too long, though. It doesn't care whether the program is responding to anything, just that it takes longer than 25 seconds doing it.
If the program appears to hang, you could e.g. check whether it stops outputting data for 25 seconds:
your_command_to_start_your_server | while read -t 25 foo; do echo "$foo"; done
echo "The command hasn't said anything for 25 seconds, killing it!"
pkill srcds_linux   # match the process name; a ./path pattern would not match
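One refinement, sketched as my own suggestion: read -t returns a status greater than 128 on timeout, which lets you distinguish "went quiet" from "exited on its own":

your_command_to_start_your_server | while true; do
    if read -t 25 line; then
        echo "$line"    # server is alive and talking
    elif (( $? > 128 )); then
        # read timed out: no output for 25 seconds
        echo "No output for 25 seconds, killing the server!" >&2
        pkill srcds_linux
        break
    else
        break           # EOF: the server exited on its own
    fi
done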
