The requirement is to run a cron job on an hourly/daily basis.
But sometimes the job takes too long and is still running when the next execution is due.
When the subsequent execution starts, it shouldn't run two jobs in parallel.
Instead, it should kill the already-running job and start a new one.
I tried the following, but nothing worked:
pkill before running my script in cron
pgrep and kill instead of pkill in the above solution, as a bash one-liner using && and ;
the run-one and run-this-one utilities
What's the best way to do it?
timeout
* * * * * /usr/bin/timeout 59 /home/script.sh
The above cron runs every minute, but the execution is terminated after 59 seconds (if it has not finished by then).
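If your real job runs hourly rather than every minute, the same approach scales; a minimal sketch (the 3590-second budget and the 30-second --kill-after grace period are assumptions, tune them to your job):
# Give the job just under an hour, then SIGTERM; SIGKILL 30s later if it ignores SIGTERM
0 * * * * /usr/bin/timeout --kill-after=30 3590 /home/script.sh
Here timeout sends SIGTERM after 3590 seconds and follows up with SIGKILL if the script ignores the first signal, so each new instance starts on a clean slate.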
Related
I want to find all the jobs that have been running for more than one hour and kill them. Then sleep for 60 minutes, search again for any job running longer than 60 minutes, and loop the process.
If you want to find the PIDs of the processes that have been running for more than 60 minutes on your Linux box, you can use a very simple bash script like the one below:
#!/bin/sh
MIN=60              # threshold in minutes
SEC=$((MIN*60))     # threshold converted to seconds
# etimes= prints each process's elapsed running time in seconds, without a header
ps -eo etimes=,pid= | while read sec pid; do
    if [ ${sec} -gt ${SEC} ]; then
        echo ${pid}
        #kill -9 ${pid} # remove the # at the beginning of the line to actually kill those processes
    fi
done
This will display the PIDs of the matching processes, one per line.
Assuming you name this script 60min.sh, you can run it every 60 minutes using a cron job:
0 * * * * /bin/bash /path_to/60min.sh
This cron job will run your 60min.sh script every 60 minutes (i.e. every hour).
Please keep in mind that you might accidentally kill system processes, and your system might become unstable or unusable, forcing a reboot.
If you run your processes as a specific Linux user, I would recommend searching only for processes belonging to that user, not for processes owned by root.
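Restricting the search to one user is a one-flag change to the ps call above; a minimal sketch (the user name deploy is an assumption, substitute your own):
# Only list processes owned by "deploy" that have been running longer than 60 minutes
ps -u deploy -o etimes=,pid= | while read sec pid; do
    [ ${sec} -gt 3600 ] && echo ${pid}
done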
Can we check the process ID or status of a scheduled cron job script?
Let's say I have scheduled a script, script.sh, which runs all the time, and I have scheduled it in crontab like below:
* * * * 0-5 script.sh
Once it is scheduled, I know it will keep running, but can I check whether script.sh is currently running or not?
Use pgrep or pidof:
pgrep script.sh
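Note that pgrep matches the process name by default, and a script started as an interpreter argument (e.g. /bin/bash script.sh) may not match that way; matching the full command line is more reliable:
pgrep -f script.sh     # match against the full command line
pidof -x script.sh     # -x also returns shells running the named script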
So, I was wondering if there is a bash command that lets me fork a process which sleeps for several seconds and then executes a command.
Here's an example:
sleep 30 'echo executing...' &
This doesn't actually work (because the sleep command only takes a time argument), but is there something that could do it? Basically, a sleep command that takes a time argument and something to execute when the interval has elapsed? I want to be able to fork it into a different process and then continue executing the shell script.
Also, I know I could write a simple script that does this, but due to some constraints (I'm actually passing this through an ssh call), I'd rather not.
You can do
(sleep 30 && command ...)&
Using && is safer than ; because it ensures that command ... runs only if sleep completes successfully, e.g. it won't run if the sleep is killed or interrupted by a signal.
You can invoke another shell in the background and make it do what you want:
bash -c 'sleep 30; do-whatever-else' &
The default unit for sleep is seconds, so the above sleeps for 30 seconds. You can specify other units, such as 30m for 30 minutes, 1h for 1 hour, or 3d for 3 days.
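Since the question mentions passing this through an ssh call: ssh normally waits until the remote command's output streams are closed, so redirect them to let the session return immediately while the delayed command keeps running in the background. A sketch (the host and the delayed command are assumptions):
# Returns immediately; "echo done" fires on the remote host 30 seconds later
ssh user@example.com '(sleep 30 && echo done >> /tmp/delayed.log) >/dev/null 2>&1 &'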
I have two commands in a cron job like this:
mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1";wget -N http://mywebsite.net/path/AfterMyFunction1.php
but it seems to me that both of them are running at the same time.
How can I make the first command run and when it completes, execute the second command?
Also, AfterMyFunction1.php makes JavaScript HTTP requests that are not executed when I use wget. It works if I open AfterMyFunction1.php in my web browser.
If the first command is required to complete first, you should separate them with the && operator, as you would in the shell. If the first command fails, the second will not run. As for the JavaScript requests: wget only downloads the page and does not execute client-side scripts, so those requests will never fire; that logic would need to move server-side or be driven by a headless browser.
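Applied to the crontab entry above (credentials elided as in the original):
mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1" && wget -N http://mywebsite.net/path/AfterMyFunction1.php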
You could use sem, which is part of GNU parallel.
0 0 * * * root sem --jobs 1 --id MyQueue mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1"
1 0 * * * root sem --jobs 1 --id MyQueue wget -N http://mywebsite.net/path/AfterMyFunction1.php
This cron config will first start the mysql through sem, which will put it in a kind of queue called MyQueue. This queue will probably be empty, so the mysql is executed immediately. A minute later, the cron will start another sem which will put the wget in the same queue. With --jobs 1, sem is instructed to execute only one job at a time in that particular queue. As soon as the mysql has finished, the second sem will run the wget command. sem has plenty of options to control the queueing behaviour. For example, if you add --semaphoretimeout -60, a waiting job will simply die after 60 seconds.
The && solution is probably better, since it won't execute the second command when the first one fails. The sem solution has the advantage that you can specify different cron settings, like a different user. And it will prevent overlapping cron jobs, if the cron interval is shorter than the job duration.
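If GNU parallel is not installed, flock from util-linux gives similar serialization with a plain lock file; a minimal sketch (the lock path /tmp/myqueue.lock is an assumption):
0 0 * * * root flock /tmp/myqueue.lock mysql -xxxxxx -pyyyyyyyyyyv -hlocalhost -e "call MyFunction1"
1 0 * * * root flock /tmp/myqueue.lock wget -N http://mywebsite.net/path/AfterMyFunction1.php
The second entry blocks on the lock until the first releases it, so the two commands run back to back even if the mysql call overruns.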
I want to set one cron job to run after another. For example: cron A finishes at 01:00 PM, so cron B should start at 01:01 PM. The problem is that I don't know when cron A finishes.
I checked the crontab syntax. It doesn't provide any parameter for that purpose.
My actual situation is:
# This cron must run first.
? ? * * * /usr/local/bin/php -f /path/select_and_print_to_log_file.php
# These two crons run at the same time.
0 13 * * * /usr/local/bin/php -f /path/update_user.php
0 13 * * * /usr/local/bin/php -f /path/update_image.php
# This cron runs right after the two crons above complete.
? ? * * * /usr/local/bin/php -f /path/select_and_print_to_log_file.php
You can use the batch command at the end of the first cron job to queue the second one for execution.
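For instance, the last step of the first job could hand the follow-up script to batch, which reads commands from stdin and runs them as soon as the system load permits. A sketch using the log script from the question:
echo "/usr/local/bin/php -f /path/select_and_print_to_log_file.php" | batch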
Your first job could produce a timestamp file when it finishes.
Then you estimate, for example, that job A needs about 60 to 90 minutes. After 60 minutes, you start job B. Job B looks for the timestamp; if it is present, job B starts its work, otherwise it waits for a minute and looks again.
After finishing, job B deletes the timestamp, or renames it, say from 'todo' to 'done'. You could write the current date inside the file to check whether your estimate is still acceptable or needs adjusting.
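A minimal sketch of that polling logic for job B's wrapper (the marker path /tmp/jobA.done is an assumption):
#!/bin/bash
# Wait for job A's completion marker, checking once a minute
while [ ! -f /tmp/jobA.done ]; do
    sleep 60
done
/usr/local/bin/php -f /path/update_user.php     # job B's real work
mv /tmp/jobA.done /tmp/jobA.done.processed      # rename the marker, as suggested above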
What I do in such cases (commonly a backup scenario where I don't want to thrash the disk by running concurrent backups) is write a script that cron calls, and have the script run the actual tasks serially.
Something like:
#!/bin/bash
# Each task starts only after the previous one has exited
/usr/local/bin/php -f /path/update_user.php
/usr/local/bin/someOtherTaskToRunSecond
YMMV.