I have two commands in a cron job like this:
mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1";wget -N http://mywebsite.net/path/AfterMyFunction1.php
but it seems to me that both of them are running at the same time.
How can I make the first command run and when it completes, execute the second command?
Also, AfterMyFunction1.php makes JavaScript HTTP requests that are not executed when I use wget. It works if I open AfterMyFunction1.php in my web browser.
If the first command is required to complete first, you should separate them with the && operator, as you would in the shell. If the first command fails, the second will not run.
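With the entry from the question, that would be:
mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1" && wget -N http://mywebsite.net/path/AfterMyFunction1.php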
You could use sem which is part of GNU parallel.
0 0 * * * root sem --jobs 1 --id MyQueue mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1"
1 0 * * * root sem --jobs 1 --id MyQueue wget -N http://mywebsite.net/path/AfterMyFunction1.php
This cron config will first start the mysql command through sem, which puts it in a kind of queue called MyQueue. This queue will probably be empty, so the mysql command is executed immediately. A minute later, cron will start another sem, which puts the wget in the same queue. With --jobs 1, sem is instructed to execute only one job at a time in that particular queue, so as soon as the mysql has finished, the second sem will run the wget command. sem has plenty of options to control the queueing behaviour. For example, if you add --semaphoretimeout -60, a waiting job will simply die after 60 seconds.
The && solution is probably better, since it won't execute the second command when the first one fails. The sem solution has the advantage that you can specify different cron settings, like a different user. And it will prevent overlapping cron jobs, if the cron interval is shorter than the job duration.
I'm running crontab-ui and PHP inside a Docker container deployed to Azure. Every cron job I set up runs 3 times (the email is sent 3 times and logged 3 times). I tried a different approach in another container and got the same result.
Here is my crontab:
* * * * * sh /usr/local/bin/triage-rotate.sh
* * * * * sh /usr/local/bin/wp-cron.sh
and here is wp-cron.sh
#!/bin/sh
# Show any running "wp cron" processes (for debugging)
ps -ef | grep "wp cron" | grep -v grep
# Count the running "wp cron" processes
process=$(ps -ef | grep "wp cron" | grep -v grep | wc -l)
echo "$process"
# Only trigger the due events if none are already running
if [ "$process" -eq 0 ]; then
    wp cron event run --due-now --path=/var/www/html/ --allow-root
fi
I was watching top in a terminal and wp-cron.sh only gets triggered once. I have a WP scheduled event that sends an email twice daily, and I receive 3 emails every time.
Any thoughts?
It may be that your hook is being executed more than once. You can prevent a hook from running more than once in your code by using did_action.
Ensure WordPress cron is set to "No" (i.e. disabled) if you are using the cron job URL to initiate sending. Otherwise, if multiple requests come in at the same time, multiple emails will be triggered.
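Since the script above already uses WP-CLI, one way to do that (a sketch; check it against your own setup) is:
wp config set DISABLE_WP_CRON true --raw --path=/var/www/html/
The --raw flag writes the value as the boolean true rather than the string 'true'.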
Check whether you accidentally started the cron service twice; try stopping cron (/etc/init.d/crond stop) and starting it again (/etc/init.d/crond start).
Also check whether your system has two cron daemons: two cron daemons operating simultaneously can cause exactly this kind of issue. And check whether another user has installed or activated the same job.
Please check this similar SO thread by erotsppa.
Basic information about my system: I have a music system where people can schedule songs to start and end at a specific time.
OS: Arch Linux
It sets two crons at the moment: one at, let's say, 1.50 (the start time, with a command like "play etc") and another at 3.20 (the end time, with a command like "end etc").
My setup works perfectly and I can end or delete schedules, etc., but I have now noticed an issue. If I set the above times, turn the system off (my system is a Raspberry Pi) and turn it back on at, let's say, 2.00, I've missed the 1.50 deadline and the music doesn't start (obviously). I want to make it so that no matter what time I turn it on within a range, let's say 1.50 - 3.20, it will run the play command, but only once!
I looked around and the commands I found were like:
0 1.50-3.20/2 * * * your_command.sh
But that's set to run every 2 hours. I want it to run only once between these times.
Thanks!
You could add an additional cron job which starts a script on every reboot. For instance, you could add a line like this to your crontab:
@reboot /home/pi/startplayback.sh
Your startplayback.sh script should check whether the current time is within the desired period and run the desired command if it is. For example, the code below will print PLAY! if the script is run between 1:50 and 3:20. You could replace echo 'PLAY!' with whatever command you need.
#!/bin/bash
# Current time as HHMM, e.g. 0150
current=$(date '+%H%M')
# Force base 10 so a leading zero is not treated as octal
(( current=(10#$current) ))
# Between 1:50 and 3:20 -> play
((current > 150 && current < 320 )) && echo 'PLAY!'
P.S. Don't forget to make your script executable: sudo chmod +x startplayback.sh
You might want to look at the at command and its utilities.
SYNOPSIS
at [-q queue] [-f file] [-mldbv] time
at [-q queue] [-f file] [-mldbv] -t [[CC]YY]MMDDhhmm[.SS]
at -c job [job ...]
at -l [job ...]
at -l -q queue
at -r job [job ...]
atq [-q queue] [-v]
atrm job [job ...]
batch [-q queue] [-f file] [-mv] [time]
at is good for scheduling one-time jobs to be run at some point in the future. It maintains a queue of these jobs, so you can use it to schedule things with a great variety of time specifications.
Cron, in my opinion, is a scheduler for jobs that are to be repeated over and over.
So a quick and dirty example for you:
echo 'ls -lathF' | at now + 1 minute
As expected you will see a job to be run in one minute. Try atq to see the list of jobs.
When the job is done, output will be mailed to your user by default.
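For the play/end use case above, a sketch reusing the example commands from the question (note that a time already past today is scheduled for tomorrow):
echo 'play etc' | at 01:50
echo 'end etc' | at 03:20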
I solved the issue by creating a PHP file and loading the page on reboot; it does its work and then redirects back to such and such.
I have a backup script written that will do the following in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database via SSH rsync to the backup server
Now when I run this script from the command line in RHEL, it works perfectly fine.
BUT when I set this script to run via a cron job, the script does run, but from what I can tell it somehow runs the above 3 commands simultaneously. Because of that, things get done out of order (my local database dump completes and is transferred before the #1 zip job is actually finished).
Has anyone run across such a strange scenario? As the most simple fix, is there a way to force a script to run synchronously? Maybe add some kind of command to wait for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first line, so while the SSH command has been issued, it has not yet completed before my second line triggers and an SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
# 1. Zip up files on the remote backup server (ssh blocks until the remote tar finishes)
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
# 2. Dump the local database
mysqldump --opt -Q $THEDBNAME > mySampleDb
# 3. Transfer everything to the backup server
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly using backgrounding (&) everything should run one-by-one, waiting until the prior finishes.
Perhaps you are actually seeing overlapping prior executions by cron? If so, you can prevent multi-execution by calling your script with flock
e.g. midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
If you want to run commands in a sequential order you can use ; operator.
; – semicolon operator
This operator runs multiple commands in one go, but in sequential order. If we take three commands separated by semicolons, the second command will run after the first command completes, and the third command will run only after the second command completes. One point to note: the second command runs regardless of the first command's exit status.
Execute ls, pwd, whoami commands in one line sequentially one after the other.
ls;pwd;whoami
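To see the exit-status point in action, compare ; with && (a minimal demonstration):
false; echo "runs even though the previous command failed"
false && echo "never runs, because false exits non-zero"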
Please correct me if I am not understanding your question correctly.
I want to get the details of the last run cron job. If the job is interrupted due to some internal problems, I want to re-run the cron job.
Note: I don't have superuser privilege.
You can see the date, time, user and command of previously executed cron jobs using:
grep CRON /var/log/syslog
This will show all cron jobs. If you only wanted to see jobs run by a certain user, you would use something like this:
grep 'CRON.*(root)' /var/log/syslog
Note that cron logs at the start of a job so you may want to have lengthy jobs keep their own completion logs; if the system went down halfway through a job, it would still be in the log!
Edit: If you don't have root access, you will have to keep your own job logs. This can be done simply by tacking the following onto the end of your job command:
&& date > /home/user/last_completed
The file /home/user/last_completed would always contain the last date and time the job completed. You would use >> instead of > if you wanted to append completion dates to the file.
You could also achieve the same by putting your command in a small bash or sh script and have cron execute that file.
#!/bin/bash
[command]
# Record when the job finished
date > /home/user/last_completed
The crontab for this would be:
* * * * * bash /path/to/script.bash
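If you also want to re-run the job when the last run never completed (as asked), here is a minimal sketch built on that file; the paths and the one-day threshold are assumptions, not requirements:
#!/bin/bash
# Re-run the job if /home/user/last_completed is missing or older than a day
last=$(stat -c %Y /home/user/last_completed 2>/dev/null || echo 0)
now=$(date +%s)
if [ $((now - last)) -gt 86400 ]; then
    bash /path/to/script.bash
fi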
/var/log/cron contains the cron job logs, but you need root privileges to see it. On CentOS:
sudo grep CRON /var/log/cron
I want to set up a cron job to run after another cron job. For example: cron A finishes at 01:00 PM, and cron B should start at 01:01 PM. The problem is that I don't know when cron A finishes.
I checked the crontab syntax; it doesn't provide any parameter for that purpose.
My actual situation is:
# This cron must run first.
? ? * * * /usr/local/bin/php -f /path/select_and_print_to_log_file.php
# These two crons run at the same time.
0 13 * * * /usr/local/bin/php -f /path/update_user.php
0 13 * * * /usr/local/bin/php -f /path/update_image.php
# This cron runs right after the two above complete.
? ? * * * /usr/local/bin/php -f /path/select_and_print_to_log_file.php
You can use the batch command inside the first cron job to schedule the second one.
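batch queues a command to run as soon as the system load permits, rather than at a fixed time. A minimal sketch of the idea, reusing the script paths from the question:
#!/bin/bash
# First cron job: do its own work...
/usr/local/bin/php -f /path/update_user.php
# ...then queue the follow-up; batch runs it once the load average allows
echo '/usr/local/bin/php -f /path/select_and_print_to_log_file.php' | batch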
Your first job could produce a timestamp when it finishes.
Then you estimate, for example, that job A needs about 60 to 90 minutes. After 60 minutes, you start job B. Job B looks for the timestamp: if it is present, job B starts; otherwise it waits a minute and looks again.
After finishing, job B deletes the timestamp, or renames it, perhaps from 'todo' to 'done'. You could write the current date into the file to check whether your estimate is still acceptable or needs to be adjusted.
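A minimal sketch of job B under those assumptions (the timestamp path /tmp/jobA.done is a placeholder; job A would end with date > /tmp/jobA.done):
#!/bin/bash
# Wait for job A's timestamp, checking once a minute
while [ ! -f /tmp/jobA.done ]; do
    sleep 60
done
/usr/local/bin/php -f /path/select_and_print_to_log_file.php
# Rename the timestamp from 'todo' to 'done', as suggested above
mv /tmp/jobA.done /tmp/jobA.processed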
What I do in such cases (commonly a backup scenario where I don't want to thrash the disk by having concurrent backups) is to write a script that cron calls, and in the script have the actual tasks run serially.
Something like:
#!/bin/bash
# Each task starts only after the previous one has finished
/usr/local/bin/php -f /path/update_user.php
/usr/local/bin/someOtherTaskToRunSecond
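The two 13:00 crontab entries then collapse into a single one pointing at the wrapper (the script name here is a placeholder):
0 13 * * * /path/to/run-tasks-in-order.sh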
YMMV.