Docker Alpine Linux cron jobs running 3 times every time - linux

I'm running crontab-ui and PHP inside a Docker container deployed to Azure. Every cron job I set up runs 3 times (the email is sent 3 times, it is logged 3 times). I tried a different approach in another container and got the same result.
Here is my crontab:
* * * * * sh /usr/local/bin/triage-rotate.sh
* * * * * sh /usr/local/bin/wp-cron.sh
and here is wp-cron.sh
#!/bin/sh
# Show any "wp cron" processes that are already running (excluding the grep itself)
ps -ef | grep "wp cron" | grep -v grep
process=$(ps -ef | grep "wp cron" | grep -v grep | wc -l)
echo "$process"
if [ "$process" -eq 0 ]; then
    wp cron event run --due-now --path=/var/www/html/ --allow-root
fi
I was watching top in a terminal, and wp-cron.sh only gets triggered once. I have a WordPress scheduled event that sends an email twice daily, and I receive 3 emails every time.
Any thoughts?

It is possible that your hook is being executed more than once; you can prevent a hook from running more than once in your own code by checking did_action.
Make sure the built-in WordPress cron is disabled (e.g. DISABLE_WP_CRON set to true in wp-config.php) if you are using an external cron job or the cron URL to trigger sending; otherwise, simultaneous requests can each fire the event and multiple emails will be sent.
Check whether you accidentally started the cron service twice; try stopping it (/etc/init.d/crond stop) and starting it again (/etc/init.d/crond start).
Also check whether your system has two cron daemons: two daemons running simultaneously will cause exactly this kind of duplication. Check as well whether another user has the same crontab entries installed and active.
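If you want to check for a second daemon from a shell inside the container, a quick sketch (this assumes the daemon is named crond, as on Alpine; adjust the name if your image uses cron instead; the [c] in the pattern keeps the grep itself out of the results):
ps -ef | grep '[c]rond'
More than one line of output means more than one cron daemon is picking up the same crontab.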
Please check this similar SO thread by erotsppa.

Related

How to kill long-running jobs at one-hour intervals?

I want to find all jobs that have been running for more than one hour and kill them, then sleep for 60 minutes, then search again for any job running longer than 60 minutes, and loop the process.
If you want to find the PIDs of the processes that have been running for more than 60 minutes on your Linux box, you can use a very simple and basic shell script like the one below:
#!/bin/sh
MIN=60
SEC=$((MIN*60))
# etimes= prints the elapsed running time in seconds for every process, followed by its PID
ps -eo etimes=,pid= | while read sec pid; do
    if [ ${sec} -gt ${SEC} ]; then
        echo ${pid}
        #kill -9 ${pid} # remove the # at the beginning of the line to actually kill those processes
    fi
done
This will display the PIDs of the matching processes, one per line.
Assuming you name this script 60min.sh, you can run it every 60 minutes using a cron job:
0 * * * * /bin/bash /path_to/60min.sh
This cron job will run your 60min.sh script every 60 minutes (i.e. every hour).
Please keep in mind that you might accidentally kill system processes; your system might become unstable or unusable and need a reboot.
If you run your processes under a specific Linux user, I would recommend searching only for the processes belonging to that user, not those owned by root.
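A sketch of that restriction, assuming the workers run as a hypothetical user named appuser and keeping the 60-minute threshold:
# print the PIDs of appuser's processes that have been running for more than 3600 seconds
ps -eo etimes=,pid=,user= | awk '$3 == "appuser" && $1 > 3600 { print $2 }'
Once you have checked the output, the PIDs can be piped to xargs -r kill.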

How do I activate a cron command once within a specific time frame?

Basic information about my system: I have a music system where people can schedule songs to start and end at a specific time.
OS: Arch Linux
It sets two cron entries at the moment: one at, let's say, 1:50 (the start time, with a command like "play etc") and another at 3:20 (the end time, with a command like "end etc").
My setup works perfectly and I can end or delete schedules, but I have now noticed an issue. If I set the above times and turn the system off (my system is a Raspberry Pi), then turn it back on at, say, 2:00, having missed the 1:50 start, the music (obviously) doesn't start. I want to make it so that, no matter what time I turn it on within the range 1:50-3:20, it will run the play command, but only once.
I looked around and the suggestions I found were along the lines of:
0 1.50-3.20/2 * * * your_command.sh
But that runs every 2 hours. I want it to run only once within this window.
Thanks!
You could add an additional cron job which starts a script on every reboot. For instance, you could add a line like this to your crontab:
@reboot /home/pi/startplayback.sh
Your startplayback.sh script should check whether the current time is within the desired period and run the desired command if it is. For example, the code below will print PLAY! if the script is run between 1:50 and 3:20. You could replace echo 'PLAY!' with whatever you want to run:
#!/bin/bash
# Current time as HHMM, e.g. 0205
current=$(date '+%H%M')
# Force base 10 so a leading zero is not interpreted as octal
(( current=(10#$current) ))
# 1:50 -> 150, 3:20 -> 320
(( current > 150 && current < 320 )) && echo 'PLAY!'
P.S. Don't forget to make your script executable: sudo chmod +x startplayback.sh
You might want to look at the at command and its utilities.
SYNOPSIS
at [-q queue] [-f file] [-mldbv] time
at [-q queue] [-f file] [-mldbv] -t [[CC]YY]MMDDhhmm[.SS]
at -c job [job ...]
at -l [job ...]
at -l -q queue
at -r job [job ...]
atq [-q queue] [-v]
atrm job [job ...]
batch [-q queue] [-f file] [-mv] [time]
at is good for scheduling one-time jobs to be run at some point in the future. It maintains a queue of these jobs, so you can use it to schedule things with a great variety of different time specifications.
Cron is in my opinion a scheduler for jobs that are to be repeated over and over.
So a quick and dirty example for you:
echo 'ls -lathF' | at now + 1 minute
As expected you will see a job to be run in one minute. Try atq to see the list of jobs.
When the job is done, output will be mailed to your user by default.
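Applied to the question above, a sketch of how at could replace the second crontab entry: when the play command is started, queue the matching stop as a one-shot job instead ('end etc' is the question's placeholder command, and 90 minutes is the length of the 1:50-3:20 window):
echo 'end etc' | at now + 90 minutes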
I solved the issue by creating a PHP file that is loaded on reboot; it then does its work and redirects back to such and such.

My crond job doesn't work as expected, why?

I created a shell script to check a Tomcat instance's status; if the instance is not running, start it:
if [ $(ps -ef | grep 'travelco' | grep -v grep | wc -l) -eq 0 ]; then
    sudo /home/q/tools/bin/restart_tomcat.sh /home/www/travelco/
else
    echo 'travelco started'
fi
Then I tested the script and it worked well, but after I added it as a cron job the script didn't work as expected.
I used crontab -e and added
*/1 * * * * /home/yuliang.jin/travelcoCheck.sh
After that, even though I can see the script being executed in the cron log (sudo tail -f /var/log/cron), the Tomcat instance is not started. Why?
There's a sudo in your script, but are you sure your current user has permission to execute /home/q/tools/bin/restart_tomcat.sh without password authentication?
You should add the script to /etc/sudoers so that your current user can execute it without a password, or you can simply use sudo crontab -e to run the script as root (and don't forget to remove sudo from your script if you do so).
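A sketch of such a sudoers rule (edit it with visudo; the user name yuliang.jin is inferred from the crontab path in the question and may differ on your system):
yuliang.jin ALL=(ALL) NOPASSWD: /home/q/tools/bin/restart_tomcat.sh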
If there is any other option, don't use sudo in a cron job.
travelcoCheck.sh itself is matched by grep 'travelco' and is not filtered out by grep -v grep, so wc -l will always be at least 1 when the script runs, and restart_tomcat.sh will never be executed.
(As a side note: whether or not your ps-parsing stack gets caught by ps is something of a dark art, full of corner cases and race conditions, and generally difficult to get right. Stuff like this is why D-Bus was invented.)
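As a sketch, a check that cannot match the checking script itself, assuming the Tomcat JVM's command line contains the deployment path /home/www/travelco (verify that with ps first; it is an assumption, not a given):
# pgrep -f matches the full command line; the check script's own path does not contain /home/www/travelco
if ! pgrep -f '/home/www/travelco' > /dev/null; then
    sudo /home/q/tools/bin/restart_tomcat.sh /home/www/travelco/
else
    echo 'travelco started'
fi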

Instance limited cron job

I want to run a cron job every minute that launches a script. Simple enough. However, I need to make sure that no more than X instances (defined in the script) are ever running. These are queue workers, so if at any minute interval 6 workers are still active, I would not launch another instance. The script simply launches a PHP script which exits if no job is available. Right now I have a shell script that perpetually relaunches itself every 10 seconds after exiting, but there are long periods of time with no jobs, and a one-minute delay is fine. Eventually I would like to have two cron jobs, for peak and off-peak, with different intervals.
Make sure your script has a unique name.
Then check whether 6 instances are already running:
if [ "$(pgrep -c '^UNIQUE_SCRIPT_NAME$')" -lt 6 ]
then
    # start my script
else
    # do not start my script
fi
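For a concrete picture, here is a sketch of the whole wrapper as cron could call it every minute, assuming a hypothetical worker script named queue-worker.php and a limit of 6 (pgrep -f matches the full command line, so the worker's file name makes a good pattern):
#!/bin/sh
# launch another worker only if fewer than 6 are already running
if [ "$(pgrep -fc 'queue-worker.php')" -lt 6 ]; then
    php /path/to/queue-worker.php &
fi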
I'd say that if you want to iterate as often as every minute, then a process like your current shell script that relaunches itself is what you actually want. Just increase the delay from 10 seconds to a minute.
That way you can also more easily control your delay for peak and off-peak periods, as you wanted. It would be rather elegant to simply use a shorter delay if the script found something to do the last time it was launched, and a longer delay if it did not, as sketched below.
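A minimal sketch of that idea, assuming a hypothetical worker.php whose exit status is 0 when it processed a job and non-zero when the queue was empty:
#!/bin/sh
# relaunch the worker forever, backing off when the queue is empty
while true; do
    if php /path/to/worker.php; then
        sleep 10    # there was work, poll again soon
    else
        sleep 60    # queue was empty, wait a full minute
    fi
done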
You could use a script like OneAtATime to guard against multiple simultaneous executions.
This is what I am using in my shell scripts:
echo -n "Checking if job is already running... "
me=`basename $0`
running=$(ps aux | grep ${me} | grep -v .log | grep -v grep | wc -l)
if [ $running -gt 1 ];
then
echo "already running, stopping job"
exit 1
else
echo "OK."
fi;
The command you're looking for is in line 3. Just replace ${me} with your PHP script name. In case you're wondering about the grep -v .log part: I'm piping the output into a log file whose name partially contains the script name, so this way I avoid counting it as well.

Is there a variable in Linux that shows me the last time the machine was turned on?

I want to create a script that, once it knows my machine has been turned on for at least 7 hours, does something.
Is this possible? Is there a system variable or something like that that shows me the last time the machine was turned on?
The following command placed in /etc/rc.local:
echo 'touch /tmp/test' | at -t $(date -d "+7 hours" +%m%d%H%M)
will create a job that runs touch /tmp/test in seven hours.
To protect against frequent reboots and to avoid adding multiple jobs, you could use one at queue exclusively for this type of job (e.g. the c queue). Adding -q c to the at parameters will place the job in the c queue. Before adding a new job, you can delete all jobs from the c queue:
for job in $(atq -q c | sed 's/[ \t].*//'); do atrm $job; done
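Putting the two pieces together, /etc/rc.local could contain something like this sketch (touch /tmp/test is still just the placeholder job from above):
# clear any job left over from a previous boot, then queue a fresh one in the c queue
for job in $(atq -q c | sed 's/[ \t].*//'); do atrm "$job"; done
echo 'touch /tmp/test' | at -q c -t $(date -d "+7 hours" +%m%d%H%M)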
You can parse the output of uptime, I suppose.
As Pavel and thkala point out below, this is not a robust solution. See their comments!
The uptime command shows you how long the system has been running.
To accomplish your task, you can make a script that first does sleep 25200 (25200 seconds = 7 hours) and then does something useful. Have this script run at startup, for example by adding it to /etc/rc.local. This is a better idea than polling the uptime command to see whether the machine has been up for 7 hours (which is comparable to a kid in the back seat of a car asking "are we there yet?" :-))
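A sketch of such a startup script (do_something stands in for whatever the real task is):
#!/bin/sh
sleep 25200    # 7 hours = 7 * 3600 seconds
do_something
In /etc/rc.local it would then be started in the background, e.g. /path/to/delayed-task.sh & (the path and name are placeholders).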
Just wait for uptime to equal seven hours.
http://linux.die.net/man/1/uptime
I don't know if this is what you are looking for, but the uptime command will tell you how long the computer has been running since the last reboot.
$ cut -d ' ' -f 1 </proc/uptime
This will give you the current system uptime in seconds, in floating point format.
The following could be used in a bash script (with HOURS set to the desired threshold):
if [[ "$(cut -d . -f 1 </proc/uptime)" -gt "$(($HOURS * 3600))" ]]; then
    ...
fi
Add the following to your crontab:
@reboot sleep 7h; /path/to/job
Put it in either /etc/crontab, /etc/cron.d/, or your user's crontab, depending on whether you want to run it as root or as the user -- don't forget to put "root" (the user field) after "@reboot" if you put it in /etc/crontab or /etc/cron.d.
This has the benefit that if you reboot multiple times, the pending jobs are cancelled at shutdown, so you won't get a bunch of them stacking up if you reboot several times within 7 hours. The "@reboot" time specification triggers the job to run once when the system boots, and "sleep 7h;" waits for 7 hours before running "/path/to/job".
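For the /etc/cron.d (or /etc/crontab) variant, a sketch of the line with the user field included (/path/to/job is the placeholder from above):
@reboot root sleep 7h; /path/to/job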
