cron jobs: monitor the time it takes for jobs to finish - Linux

I'm doing a research project that requires that I monitor cron jobs on an Ubuntu Linux system. I have collected data about the jobs' tasks and when they are started; I just don't know of a way to monitor how long they take to finish running.
I could calculate the finish time minus the start time with something like the snippet below, but that would require adding it to the shell script of each cron job. That's not necessarily difficult by any means, but it seems a little silly that cron wouldn't log this in some way, so I'm trying to find an easier way :P
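(Just a sketch of what I mean - the job path and log file here are placeholders:)
START=$(date +%s)
/path/to/the_actual_job.sh
END=$(date +%s)
echo "job took $((END - START)) seconds" >> /var/log/cron-times.log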
tl;dr: Figure out the time cron jobs take from start to finish

You could just put time in front of your crontab commands, and if you're getting notifications about cron script output, the timing will get sent to you.
For example, if you had:
0 1,13 * * * /maint/run_webalizer.sh
add time in front:
0 1,13 * * * time /maint/run_webalizer.sh
and you'll get some output that looks like this (the "real" line is the time you want):
real 3m1.255s
user 0m37.890s
sys 0m3.492s
If you don't get cron notifications, you can just pipe the output to a file.
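For example (the log path is a placeholder; time writes its report to stderr, so capture both streams):
0 1,13 * * * time /maint/run_webalizer.sh >> /maint/webalizer.log 2>&1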

man time. Maybe you can create a wrapper and tell Cron to use it as your "shell" or something like that.
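For instance, a hypothetical wrapper saved as /usr/local/bin/timer-sh (the name, path and log file are all made up) could look like this:
#!/bin/bash
# cron invokes the configured shell as: timer-sh -c 'command line'
# run the command through bash and append the timing report (and the job's stderr) to a log
{ time bash "$@" ; } 2>> /var/log/cron-times.log
Then put SHELL=/usr/local/bin/timer-sh at the top of the crontab and cron will run every entry through it.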

Cronitor (https://cronitor.io) is a tool I built exactly for this purpose. It uses HTTP requests to record the start and end of your jobs.
You'll be notified if your job doesn't run on schedule, or if it runs for too long or too short. You can configure it to send alerts via email and SMS, as well as Slack, HipChat, PagerDuty and others.
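Roughly, the integration amounts to wrapping the job between two HTTP pings; something like this, where the URL is a placeholder rather than the real endpoint format:
0 1,13 * * * curl -s https://cronitor.example/abc123/run ; /maint/run_webalizer.sh ; curl -s https://cronitor.example/abc123/complete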

I use the Jenkins CI to do this via its external-monitor-job plugin. Jenkins can track start and end times, track overall execution time over time, save the output of all jobs it tracks, and present success/failure conditions graphically.
https://wiki.jenkins-ci.org/display/JENKINS/Monitoring+external+jobs

Related

Timeout including time in queue (JCL, z/OS, IBM)

I need to set a timeout in a JCL step that calls a Unix script through BPXBATCH. I did it with:
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
However, after some time I realized that this does not include the time spent in the queue. The documentation says: "This run time refers to actual execution time only, and does not include the time that the job spends in the INPUT or INPUT HOLD queues" (https://supportline.microfocus.com/documentation/books/rd60/cbwjto.htm).
That is Micro Focus JCL documentation, but I verified the behavior is the same on IBM Z.
So even if I set the timeout to 10 seconds, the step can take several minutes if the queue is busy with other work. I need a timeout that kills the step no matter why it takes so long. I haven't been able to find what I need. Please help.
z/OS batch really isn't the best choice for time-critical work. As you figured out, the JCL TIME parameter is about CPU time consumption, not an elapsed-time control. If this is a business-critical need, then by all means talk to your z/OS administrators - they can certainly configure your system so that your job is very likely to run without delay, but this isn't usually the default behavior.
You don't provide a lot of detail as to what else your job might be doing and how it gets submitted. If you have the ability to control how your job is submitted, one option might be to spawn your shell script directly rather than submitting a batch process to run your script.
For example, what you've described is submitting JCL that spawns BPXBATCH, then BPXBATCH spawns your shell script. Instead, you might write a small C program that simply calls "spawn()" to run the shell as a distinct UNIX process - that's not difficult, depending on how you're submitting the JCL you shared. You cut out the need for the batch job - just run your script directly.
If you're running in a TSO environment, the OSHELL command lets you interactively run your script. You can even automate the whole process with a simple REXX script, and none of this requires a pass through a batch initiator.
If your site runs SSH or similar, you might consider launching your script through an SSH command - this even works across a network. SSH lets you launch a shell session and pass a command for execution...again, there's no JCL or input queue here.
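For example, something along these lines, with the user and host as placeholders:
ssh batchuser@zoshost 'sh /x.sh'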
If your administrators would allow it, another alternative would be to run your JCL via a "START" command. Unlike batch JCL, when a START command is encountered, the work you're starting runs immediately - there's no input queue for started tasks. Start commands can be issued from JCL too, and since they're issued as the JCL is scanned and not when the job starts, these are fairly immediate too.
Inside your shell script, it's pretty easy to set up an elapsed-time limit.
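A minimal sketch in plain POSIX shell, mirroring the 10-second intent of TIME=(,10):
#!/bin/sh
sh /x.sh &                                  # start the real work in the background
PID=$!
( sleep 10 && kill "$PID" 2>/dev/null ) &   # watchdog: kill the work after 10 seconds
WATCHDOG=$!
wait "$PID"                                 # wait for the work to finish (or be killed)
RC=$?
kill "$WATCHDOG" 2>/dev/null                # cancel the watchdog if the work finished first
exit $RC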
I see a couple of problems in your code...
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
First, you have a space between BPXBATCH, and PARM=, which will prevent your shell script from executing and may result in a JCL error.
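Removing the space gives:
//STEPX EXEC PGM=BPXBATCH,PARM='sh /x.sh',TIME=(,10)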
Second, you are using the TIME parameter of the EXEC statement, which limits CPU time, yet you reference a desire to cancel the job step if it waits more than some amount of time in the input queue, which is a clock time limitation.
There is no way to cancel the job from the job itself via JCL parameters based on clock time, either including or excluding time spent in the input queue.
If you really need to do this, I suggest you look into capabilities of your shop's job scheduler package. You might want to reexamine why you need to cancel a job if it doesn't run to completion within 10 clock seconds after you submit it.

Infinite loop vs cron job

I have an uploader service which needs to run every 5 minutes, and it definitely finishes within 5 minutes, so there are never two parallel sessions.
I'm wondering what would be a good strategy to run this: schedule it as a cron job on the host, or start a Go program with an infinite loop that executes the task and sleeps (see Golang: Implementing a cron / executing tasks at a specific time).
If your task is...
On Unix
Stand alone
Periodic
Has an acceptable startup time
cron will be better than rolling your own scheduler just for the one service. It guarantees the process runs at the correct time and provides rudimentary error reporting. There's no need for a watchdog in case your infinite loop has an error; cron will simply run the process again in 5 minutes.
If cron is insufficient, look into other job schedulers before rolling your own.
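For reference, the cron side is a one-line crontab entry (the uploader path and log file are placeholders):
*/5 * * * * /usr/local/bin/uploader >> /var/log/uploader.log 2>&1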
I have an uploader service which needs to run every 5 minutes, and it definitely finishes within 5 minutes, so there are never two parallel sessions.
These are famous last words. I would suggest adding in some form of locking. For example, write your PID to a file in /var/run and check if that process is running. There's even a little pidfile library for Go.
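The same idea as a shell sketch (paths are arbitrary, and writing to /var/run normally needs root, so adjust to taste):
PIDFILE=/var/run/uploader.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "uploader already running" >&2
    exit 1
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
/usr/local/bin/uploader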
Take a look at systemd: you can execute a script with timers and set a maximum execution time for the script.
https://wiki.archlinux.org/index.php/Systemd/Timers
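A rough sketch of the two unit files (the names, paths and the 4-minute cap are assumptions; for a Type=oneshot service the runtime cap is TimeoutStartSec):
# /etc/systemd/system/uploader.service
[Unit]
Description=Uploader job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/uploader
TimeoutStartSec=240

# /etc/systemd/system/uploader.timer
[Unit]
Description=Run the uploader every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now uploader.timer.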

How to handle overtime crons

Suppose I have a cron task running every minute, and each time the task takes more than one minute to run. What will happen? Will the next cron invocation wait for the first one, or will it run without any checks?
I want to run a cron task every minute, and I don't want overlapping cron tasks like that in the case of a long-running task.
Please help.
It depends on what you run. If it's your own script, you can implement a locking/lock checking mechanism to avoid running duplicates.
But that's not cron's job.
Yes, cron will go ahead and start your 1+ minute-running process every minute until something crashes.
You'll want to put a lock of some sort into your job if you can, so that at start-up it basically does this (a runnable shell version of the idea, using flock; the lock-file path is arbitrary):
exec 9> /tmp/myjob.lock
if ! flock -n 9; then
    echo "Another process is running"
    exit 1
fi
This, of course, assumes that you own the code that's running. If you're running a command that you didn't code, then I'd recommend building a shell wrapper that takes the lock as above before invoking the real command.
As others have mentioned, CRON will run your script every minute regardless of whether another instance of your script is still running.
If you want to avoid this and don't fancy implementing your own locking mechanism then you could try using a CRON alternative called The Fat Controller which is a daemon that will continually re-run scripts. You can optionally specify an interval between runs and also optionally specify a maximum execution time so if a script goes AWOL then it can be killed.
There's some use cases and more information on the website:
http://fat-controller.sourceforge.net/

How to let users define the frequency of a job in an application?

I have an application that has to launch jobs repeatedly. But (yes, that would have been too easy without a but...) I would like users to define their backup frequency in the application.
In the worst case, they would have to choose between:
weekly,
daily,
every 12 hours,
every 6 hours,
hourly
In the best case, they should be able to use crontab expressions (see the documentation for examples).
How should I do this? Do I launch a job every minute that checks the last execution time and frequency, and then launches another job if needed? Do I create a sort of queue that is executed by a master job?
Any clues, ideas, opinions, best practices, or experiences are welcome!
EDIT: Solved this problem using the Akka scheduler. OK, this is a technical solution, not a design answer, but everything still works great.
Each user-defined repetition is an actor that sends a message every period to a new actor that executes the actual job.
There may be two ways to do this depending on your requirements/architecture:
If you can only use Play:
The user creates the job and the frequency it will run (crontab, whatever).
On saving the job, you calculate the first time it will have to run. You then add an entry to a JOBS table with the execution time, job id, and any other information required. This is required because Play is stateless and information must be stored in the DB for later retrieval.
You have a job that queries the table for entries whose execution date is earlier than now. It retrieves the first, runs it, removes it from the table and adds a new entry for the next execution. You should keep some execution counter so that if a task fails (which means the entry is not removed from the DB) it won't block execution of the other tasks by being retried again and again.
The frequency of this job is set to every second. That way, while there is information in the table, you should execute the requests about as often as they are required. As Play won't spawn a new job while the current one is working, if you have enough tasks this one job will serve them all. If not, it will be killed at some point and restarted when required.
Of course, the users' crons will not be too precise, as you have to account for your own cron delays plus execution delays of all the tasks in the queue, which will be run sequentially. This is not the best approach unless you somehow disallow crons which run every second, or more often than every minute (to be safe). Checking the execution time of the crons and killing them if they run over a certain amount of time would also be a good idea.
If you can use more than Play:
The better alternative, I believe, is to use Quartz (see this) to create a future execution when the user creates the job, and reprogram it once the execution is over.
There was a discussion on Google Groups about it. As far as I remember, you must define a job which starts every 6 hours and checks which backups must be done. So you must remember when the last backup job finished and do the checking yourself. I'm unsure whether Quartz can handle such a requirement.
I looked in the source code (always a good source ;-)) and found a method every, which I think should do what you want. However, I'm unsure if this is a clever design, because if you have 1000 users you will then have 1000 jobs. I'm unsure if Play was built to handle such a large number of jobs.
[Update] For cron expressions you should have a look at JobPlugin.scheduleForCRON()
There are several ways to solve this.
If you don't have a really huge load of jobs, I'd just persist them to a table with the required flexibility. Then check all of them every hour (or the lowest interval you support) and run those that are eligible. Simple.
Or, if you prefer to use cron syntax anyway, just write (export) the jobs to a user crontab using a wrapper which calls back into your running app, or starts the job in a standalone process if that's possible.
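For instance, an exported entry might look like this (the callback URL is purely illustrative; 9000 is just Play's default port):
*/5 * * * * curl -fsS -X POST http://localhost:9000/internal/jobs/42/run >> /var/log/app-jobs.log 2>&1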

How can I setup a system to tell me if a cron job is NOT running fine?

This is more of a "general architecture" problem. If you have a cron job (or even a Windows scheduled task) running periodically, it's somewhat simple to have it send you an email / text message that all is well, but how do I get informed when everything is NOT okay? Basically, what if the job doesn't run at its scheduled time, or Windows / Linux has its own set of hangups that prevent the task from running?
Just seeking thoughts of people who've faced this situation before and come up with interesting solutions...
A way I've done it in the past is to simply put at the top of each script (say, checkUsers.sh):
touch /tmp/lastrun/checkUsers.sh
then have another job that runs periodically and uses find to locate all those "marker" files in /tmp/lastrun that are older than a day.
You can fiddle with the timings, having /tmp/lastrun/hour/ and /tmp/lastrun/day/ to separate jobs that have different schedules.
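The watcher job itself can be small; a sketch, with the mail recipient as a placeholder (-mtime +0 matches files older than 24 hours):
find /tmp/lastrun -type f -mtime +0 | while read -r marker; do
    echo "cron job looks stale: $marker" | mail -s "cron watchdog alert" admin@example.com
done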
Note that this won't catch scripts that have never run, since they will never create the initial file for find-ing. To alleviate that, you can either:
create that file manually when creating the cron job (won't handle situations where someone inadvertently deletes the marker file); or
maintain a list of required marker files somewhere so that you can detect when they're missing as well as outdated.
And, if your cron job is not a script, put the touch directly into crontab:
0 4 * * * ( touch /tmp/lastrun/daily/checkUsers ; /usr/bin/checkUsers )
It's a lot easier to validate a simple find script than to validate every one of your cron jobs.
