Shell infinite loop to execute at specific time [duplicate] - linux

I have access to a Linux CentOS box (sadly, I can't use crontab).
When I need to run a task every day, I have just created an infinite loop with a sleep: it runs, sleeps roughly 24 hours, and then runs again.
#!/bin/sh
# Run the task, then sleep just under 24 hours, forever
while :
do
    /home/sas_api_emailer.sh |& tee first_sas_api
    sleep 1438m   # 23 hours 58 minutes
done
Recently I got a task that I need to run at a specific time every day, 6:00 am (again, I can't use crontab).
How can I create an infinite loop that will only execute at 6:00 am?

Check the time in the loop, and then sleep for a minute if it's not the time you want.
while :
do
    if [ "$(date '+%H%M')" = '0600' ]
    then
        /home/sas_api_emailer.sh |& tee first_sas_api
    fi
    sleep 60
done

You have (at least!) three choices:
cron
This is hands-down the best choice. Unfortunately, you say it's not an option for you. Drag :(
at
at and batch read commands from standard input or a specified file
which are to be executed at a later time.
For example: at -f myjob noon
Here is more information about at: http://www.thegeekstuff.com/2010/06/at-atq-atrm-batch-command-examples/
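For the 6:00 am job from the question, a rough one-shot example might look like this (the script path is copied from the question; note that at runs a job only once, so to repeat daily the job would have to re-submit itself):
echo "/home/sas_api_emailer.sh" | at 6am tomorrow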
Write a "polling" or "while loop" script. For example:
while true
do
    # Compute wait time until the next run
    sleep "$wait_time"
    # do something
done
Here are some good ideas for "compute wait time": Bash: Sleep until a specific time/date
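For instance, here is a minimal sketch of the "compute wait time" idea for the 6:00 am case, assuming GNU date is available (it is on CentOS); the script path is taken from the question:
#!/bin/bash
while :
do
    now=$(date +%s)
    next=$(date -d '06:00' +%s)
    # If 6:00 am has already passed today, target tomorrow instead
    if [ "$next" -le "$now" ]; then
        next=$(date -d 'tomorrow 06:00' +%s)
    fi
    sleep $(( next - now ))
    /home/sas_api_emailer.sh |& tee first_sas_api
done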

Related

shell script put result of curl in variable followed by sleep command [duplicate]

I want to trigger curl requests every 400 ms in a shell script and put the results in a variable, and after the requests finish (e.g. 10 requests) finally write all results to a file. I use the following code for this purpose:
result="$(curl --location --request GET 'http://localhost:8087/say-hello')" & sleep 0.400;
Because & creates a new process, the result never makes it into the variable. So what should I do?
You can use the -m curl option instead of the sleep.
-m, --max-time <seconds>
    Maximum time in seconds that you allow the whole operation to
    take. This is useful for preventing your batch jobs from
    hanging for hours due to slow networks or links going down.
    See also the --connect-timeout option.
The difference can be seen in the next sequence of commands:
a=1; a=$(echo 2) ; sleep 1; echo $a
2
and with a background process
a=1; a=$(echo 2) & sleep 1; echo $a
[1] 973
[1]+ Done a=$(echo 2)
1
Why is a not changed in the second case?
Actually it is changed... in a new environment. The & creates a new process with its own a, and that a is assigned the value 2. When the process finishes, the variable a of that subprocess is deleted and you are left with only the original value of a.
Depending on your requirements you might want to make a result directory, have every background curl process write to a different temporary file, wait with the wait builtin until all curls are finished, and then collect your results.
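A rough sketch of that approach, assuming the URL from the question and arbitrary file names:
#!/bin/bash
resultdir=$(mktemp -d)
for i in $(seq 1 10)
do
    # each background curl writes to its own file
    curl -s --location --request GET 'http://localhost:8087/say-hello' \
        -o "$resultdir/result.$i" &
    sleep 0.4
done
wait                                   # wait for all background curls to finish
cat "$resultdir"/result.* > all_results.txt
rm -r "$resultdir"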

Linux - Run script after time period expires

I have a small NodeJS script that does some processing. Depending on the amount of data needing to be processed, this can take a couple of seconds to hours.
What I want to do is schedule this command to run every hour after the previous attempt has completed. I'm wary of using something like cron because I need to ensure that two instances of the script aren't running at the same time.
If you really don't like cron (or at) you can just use a simple bash script:
#!/bin/bash
while true
do
    # Do something
    echo Invoke long-running node.js script
    # Wait an hour
    sleep 3600
done
The (obvious) drawback is that you will have to make it run in the background somehow (e.g. via nohup or screen) and add proper error handling (given that your script might fail, and you still want it to run again in an hour).
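For example, one minimal way to push such a loop into the background (the script name and log path below are just placeholders):
nohup /home/myuser/hourly-loop.sh > /var/log/hourly-loop.log 2>&1 &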
A bit more elaborate "custom script" solution might be like that:
#!/bin/bash
#Settings
LAST_RUN_FILE=/var/run/lock/hourly.timestamp
FLOCK_LOCK_FILE=/var/run/lock/hourly.lock
FLOCK_FD=100
#Minimum time to wait between two job runs
MIN_DELAY=3600
#Welcome message, parameter check
if [ -z "$1" ]
then
echo "Please specify the command (job) to run, as follows:"
echo "./hourly COMMAND"
exit 1
fi
echo "[$(date)] MIN_DELAY=$MIN_DELAY seconds, JOB=$#"
#Set an exclusive lock, or skip execution if it is already set
eval "exec $FLOCK_FD>$FLOCK_LOCK_FILE"
if ! flock -n $FLOCK_FD
then
echo "Lock is already set, skipping execution."
exit 0
fi
#Last run timestamp
if ! [ -e $LAST_RUN_FILE ]
then
echo "Timestamp file ($LAST_RUN_FILE) is missing, creating a new one."
echo 0 >$LAST_RUN_FILE
fi
#Compute delay, and wait
let DELAY="$MIN_DELAY-($(date +%s)-$(cat $LAST_RUN_FILE))"
if [ $DELAY -gt 0 ]
then
echo "Waiting for $DELAY seconds, before proceeding..."
sleep $DELAY
fi
#Proceed with an actual task
echo "[$(date)] Running the task..."
echo
"$#"
#Update the last run timestamp
echo
echo "Done, going to update the last run timestamp now."
date +%s >$LAST_RUN_FILE
This will do 2 things:
Set an exclusive execution lock (with flock), so that no two instances of the job will run at the same time, regardless of how you start them (manually, via cron, etc.);
If the last job was completed less than MIN_DELAY seconds ago,
it will sleep for the remaining time before running the job again.
Now, if you schedule this script to run, say, every 15 minutes with cron, like this:
*/15 * * * * /home/myuser/hourly my_periodic_task and its arguments
It will be guaranteed to execute with a delay of at least MIN_DELAY (one hour) since the last job completed, and any intermediate runs will be skipped.
In the worst case it will execute after MIN_DELAY + 15 minutes (as the scheduling period is discrete), but never earlier than that.
Other non-cron scheduling methods should work too (e.g. just running this script in a loop, or re-scheduling each run with at).
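As a sketch of the loop variant (assuming the script above is saved as /home/myuser/hourly, as in the cron example):
while true
do
    /home/myuser/hourly my_periodic_task and its arguments
    sleep 900   # roughly the same 15-minute polling interval as the cron example
done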
You can also use cron, and add process.exit(0) to your Node.js script so that it actually terminates when it is done.

Is it possible to set time out from bash script? [duplicate]

Sometimes my bash scripts hang without a clear reason, so they can actually hang forever (the script process runs until I kill it).
Is it possible to build a timeout mechanism into the bash script, in order to exit from the program after, for example, half an hour?
This Bash-only approach encapsulates all the timeout code inside your script by running a function as a background job to enforce the timeout:
#!/bin/bash
Timeout=1800 # 30 minutes
function timeout_monitor() {
    sleep "$Timeout"
    kill "$1"
}
# start the timeout monitor in
# background and pass the PID:
timeout_monitor "$$" &
Timeout_monitor_pid=$!
# <your script here>
# kill timeout monitor when terminating:
kill "$Timeout_monitor_pid"
Note that the function will be executed in a separate process. Therefore the PID of the monitored process ($$) must be passed. I left out the usual parameter checking for the sake of brevity.
If you have GNU coreutils, you can use the timeout command:
timeout 1800s ./myscript
To check if the timeout occurred check the status code:
timeout 1800s ./myscript
if (($? == 124)); then
echo "./myscript timed out after 30 minutes" >>/path/to/logfile
exit 124
fi
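If the hung script ignores the default TERM signal, timeout's --kill-after option can send a follow-up KILL after a grace period, for example:
timeout -k 60s 1800s ./myscript   # TERM after 30 minutes, KILL 60 seconds later if it is still running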

Bash script how to sleep in new process then execute a command

So, I was wondering if there was a bash command that lets me fork a process which sleeps for several seconds, then executes a command.
Here's an example:
sleep 30 'echo executing...' &
^This doesn't actually work (because the sleep command only takes the time argument), but is there something that could do something like this? Basically, a sleep command that takes a time argument and something to execute when the interval is completed? I want to be able to fork it into a different process and then continue processing the shell script.
Also, I know I could write a simple script that does this, but due to some constraints in the situation (I'm actually passing this through an ssh call), I'd rather not do that.
You can do
(sleep 30 && command ...)&
Using && is safer than ; because it ensures that command ... will run only if the sleep timer expires.
You can invoke another shell in the background and make it do what you want:
bash -c 'sleep 30; do-whatever-else' &
The default unit for sleep is seconds, so the above would sleep for 30 seconds. You can specify other intervals like 30m for 30 minutes, 1h for 1 hour, or 3d for 3 days.
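Since the question mentions passing this through an ssh call, here is a hedged sketch of how that might look (host and command are placeholders; the redirects keep ssh from waiting on the detached process):
ssh user@host 'nohup bash -c "sleep 30; echo executing..." >/dev/null 2>&1 &'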

Instance limited cron job

I want to run a cron job every minute that will launch a script. Simple enough there. However, I need to make sure that no more than X (defined in the script) instances are ever running. These are queue workers, so if at any minute interval 6 workers are still active, then I would not launch another instance. The script simply launches a PHP script which exits if no job is available. Right now I have a shell script that perpetually relaunches itself every 10 seconds after exit... but there are long periods of time where there are no jobs, and a minute delay is fine. Eventually I would like to have two cron jobs for peak and off-peak, with different intervals.
Make sure your script has a unique name, then check whether 6 instances are already running:
if [ "$(pgrep -c '^UNIQUE_SCRIPT_NAME$')" -lt 6 ]
then
    # start my script
else
    # do not start my script
fi
I'd say that if you want to iterate as often as every minute, then a process like your current shell script that relaunches itself is what you actually want to do. Just increase the delay from 10 seconds to a minute.
That way, you can also more easily control your delay for peak and off-peak, as you wanted. It would be rather elegant to simply use a shorter delay if the script found something to do the last time it was launched, or a longer delay if it did not find anything.
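A rough sketch of that adaptive delay, where the worker path and the convention that it exits 0 only when it processed a job are both assumptions:
#!/bin/bash
while true
do
    if php /path/to/worker.php    # assumed to exit 0 when it found and processed a job
    then
        sleep 10                  # peak: there was work, check again soon
    else
        sleep 60                  # off-peak: nothing to do, wait longer
    fi
done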
You could use a script like OneAtATime to guard against multiple simultaneous executions.
This is what I am using in my shell scripts:
echo -n "Checking if job is already running... "
me=$(basename "$0")
running=$(ps aux | grep "${me}" | grep -v '\.log' | grep -v grep | wc -l)
if [ "$running" -gt 1 ]
then
    echo "already running, stopping job"
    exit 1
else
    echo "OK."
fi
The command you're looking for is in line 3. Just replace ${me} with your PHP script name. In case you're wondering about the grep -v '\.log' part: I'm piping the output into a log file whose name partially contains the script name, so this way I avoid counting it twice.
