I would like the Slurm workload manager to perform some action, such as touch stopped.txt, when a job terminates, whether due to a timeout or a failure. How can this be done?
Once the job has terminated, there is no way for regular users to perform further actions. (Admins can use strigger or set up epilog scripts.)
For termination due to a timeout, the typical course of action is to set up a Bash "trap" to catch a signal and to request that Slurm send that signal a few minutes before the job is killed.
For termination due to failure, you can test the return code of your main program inside the submission script and act accordingly.
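A minimal sketch of both ideas in a single submission script, assuming my_program stands in for the real workload and that a two-minute warning before the time limit is acceptable:

```bash
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --signal=B:USR1@120

# Slurm sends SIGUSR1 to the batch shell 120 seconds before the time limit;
# the trap creates the marker file and exits.
trap 'touch stopped.txt; exit 1' USR1

# Run the payload in the background so the shell can react to the signal.
srun ./my_program &
wait $!
rc=$?

# Termination due to failure: test the return code and create the same marker.
if [ "$rc" -ne 0 ]; then
    touch stopped.txt
fi
```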
Another option, which could be seen as overkill but is easier to implement, is to submit a "monitoring" job, dependent on the job after which some action must be taken, and have that monitoring job create the stopped.txt file based on the state of the first job in the accounting.
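A rough sketch of that approach, assuming real_job.sh is the actual submission script:

```bash
# Submit the real job and capture its job ID.
jobid=$(sbatch --parsable real_job.sh)

# The monitoring job starts once the first job has ended for any reason
# (afterany), reads its final state from the accounting, and creates the marker.
sbatch --dependency=afterany:${jobid} --wrap \
  "sacct -j ${jobid} -X -n -o State | grep -Eq 'TIMEOUT|FAILED|CANCELLED|NODE_FAIL' && touch stopped.txt"
```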
I need to set a timeout in a JCL step that calls a Unix script through BPXBATCH. I did it with
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
However, after some time I realized that this does not include the time spent in the queue. The documentation says "This run time refers to actual execution time only, and does not include the time that the job spends in the INPUT or INPUT HOLD queues" (https://supportline.microfocus.com/documentation/books/rd60/cbwjto.htm).
That is Micro Focus JCL documentation, but I verified that the behavior is the same on IBM Z too.
So even if I set the timeout to 10 seconds, the step can take several minutes if the queue is busy with other work. I need a timeout that kills the step no matter the reason it took so long. I haven't been able to find what I need. Please help.
z/OS batch really isn't the best choice for time-critical work. As you figured out, the JCL "TIME" parameter is about CPU time consumption, not an elapsed time control. If this is a business-critical need, then by all means talk to your z/OS administrators - they can certainly configure your system such that your job is very likely to run without delay, but this isn't usually default behavior.
You don't provide a lot of detail as to what else your job might be doing and how it gets submitted. If you have the ability to control how your job is submitted, one option might be to spawn your shell script directly rather than submitting a batch process to run your script.
For example, what you've described is submitting JCL that spawns BPXBATCH, then BPXBATCH spawns your shell script. Instead, you might write a small C program that simply calls "spawn()" to run the shell as a distinct UNIX process - that's not difficult, depending on how you're submitting the JCL you shared. You cut out the need for the batch job - just run your script directly.
If you're running in a TSO environment, the OSHELL command lets you interactively run your script. You can even automate the whole process with a simple REXX script, and none of this requires a pass through a batch initiator.
If your site runs SSH or similar, you might consider launching your script through an SSH command - this even works across a network. SSH lets you launch a shell session and pass a command for execution...again, there's no JCL or input queue here.
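For example (hypothetical user and host names; this assumes an SSH server is configured on the z/OS side):

```sh
# Launch the script as a UNIX process over SSH; no JCL, no input queue.
ssh user@zoshost 'sh /x.sh'
```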
If your administrators would allow it, another alternative would be to run your JCL via a "START" command. Unlike batch JCL, when a START command is encountered, the work you're starting runs immediately - there's no input queue for started tasks. Start commands can be issued from JCL too, and since they're issued as the JCL is scanned and not when the job starts, these are fairly immediate too.
Inside your shell script, it's pretty easy to set up an elapsed time limit - there are examples here.
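One rough, non-z/OS-specific sketch, reusing the /x.sh script and 10-second limit from the question; everything else is illustrative:

```sh
#!/bin/sh
LIMIT=10

# Run the real work in the background.
/x.sh &
WORK_PID=$!

# Watchdog: kill the work if it is still running after LIMIT seconds.
( sleep "$LIMIT" && kill "$WORK_PID" 2>/dev/null ) &
WATCHDOG_PID=$!

wait "$WORK_PID"
RC=$?
kill "$WATCHDOG_PID" 2>/dev/null
exit $RC
```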
I see a couple of problems in your code...
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
First, you have a space between BPXBATCH, and PARM=, which will prevent your shell script from being executed and may result in a JCL error.
Second, you are using the TIME parameter of the EXEC statement, which limits CPU time, yet you reference a desire to cancel the job step if it waits more than some amount of time in the input queue, which is a clock time limitation.
There is no way to cancel the job from the job itself via JCL parameters based on clock time, either including or excluding time spent in the input queue.
If you really need to do this, I suggest you look into capabilities of your shop's job scheduler package. You might want to reexamine why you need to cancel a job if it doesn't run to completion within 10 clock seconds after you submit it.
I have an uploader service which needs to run every 5 minutes, and it definitely finishes within 5 minutes, so there are never two parallel sessions.
I'm wondering what would be a good strategy to run this: schedule it as a cron job on the host, or start a Go program with an infinite loop that executes the task and sleeps (Golang: Implementing a cron / executing tasks at a specific time)?
If your task is...
On Unix
Stand alone
Periodic
Has an acceptable startup time
cron will be better than rolling your own scheduler just for the one service. It guarantees the process will always run at the correct time and has rudimentary error reporting. There's no need to add a watchdog in case your infinite loop has an error; cron will simply run the process again in 5 minutes.
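For example, a typical crontab entry (the binary path and log file are placeholders):

```
# Run the uploader every 5 minutes, appending output to a log.
*/5 * * * * /usr/local/bin/uploader >> /var/log/uploader.log 2>&1
```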
If cron is insufficient, look into other job schedulers before rolling your own.
I have an uploader service which needs to run every 5 minutes, and it definitely finishes within 5 minutes, so there are never two parallel sessions.
These are famous last words. I would suggest adding some form of locking. For example, write your PID to a file in /var/run and check whether that process is still running. There's even a little pidfile library for Go.
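A rough stdlib-only sketch of that check (the PID file path and the surrounding program are placeholders; a pidfile library wraps essentially the same logic):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

const pidFile = "/var/run/uploader.pid"

// alreadyRunning reports whether the PID recorded in pidFile still belongs
// to a live process.
func alreadyRunning() bool {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return false // no PID file: assume nothing is running
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return false
	}
	// Signal 0 checks for process existence without actually sending a signal.
	return syscall.Kill(pid, 0) == nil
}

func main() {
	if alreadyRunning() {
		fmt.Println("another instance is still running, exiting")
		return
	}
	if err := os.WriteFile(pidFile, []byte(strconv.Itoa(os.Getpid())), 0644); err != nil {
		fmt.Println("cannot write PID file:", err)
		return
	}
	defer os.Remove(pidFile)

	// ... do the actual upload work here ...
}
```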
Take a look at systemd: you can execute a script with timers and set a maximum execution time for the script.
https://wiki.archlinux.org/index.php/Systemd/Timers
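A minimal sketch of what that could look like; the unit names, paths, and the 4-minute cap are illustrative, not part of the original answer:

```ini
# /etc/systemd/system/uploader.service
[Unit]
Description=Uploader run

[Service]
ExecStart=/usr/local/bin/uploader
# Kill the run if it somehow exceeds 4 minutes.
RuntimeMaxSec=240

# /etc/systemd/system/uploader.timer
[Unit]
Description=Run the uploader every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now uploader.timer.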
I have one DAG that has three task streams (licappts, agents, agentpolicy):
For simplicity I'm calling these three distinct streams. The streams are independent in the sense that just because agentpolicy failed doesn't mean the other two (licappts and agents) should be affected by that failure.
But for the sourceType_emr_task_1 tasks (i.e., licappts_emr_task_1, agents_emr_task_1, and agentpolicy_emr_task_1) I can only run one of these tasks at a time. For example I can't run agents_emr_task_1 and agentpolicy_emr_task_1 at the same time even though they are two independent tasks that don't necessarily care about each other.
How can I achieve this functionality in Airflow? For now the only thing I can think of is to wrap that task in a script that somehow locks a global variable, then if the variable is locked I'll have the script do a Thread.sleep(60 seconds) or something, and then retry. But that seems very hacky and I'm curious if Airflow offers a solution for this.
I'm open to restructuring the ordering of my DAG if needed to achieve this. One thing I thought about doing was to make a hard-coded ordering of
Dag Starts -> ... -> licappts_emr_task_1 -> agents_emr_task_1 -> agentpolicy_emr_task_1 -> DAG Finished
But I don't think combining the streams this way is a good idea, because then, for example, agentpolicy_emr_task_1 has to wait for the other two to finish before it can start, and there could be times when agentpolicy_emr_task_1 is ready to go before the other two have finished their preceding tasks.
So ideally I want whatever sourceType_emr_task_1 task to start that's ready first and then block the other tasks from running their sourceType_emr_task_1 task until it's finished.
Update:
Another solution I just thought of: if there is a way for me to check the status of another task, I could create a script for sourceType_emr_task_1 that checks whether either of the other two sourceType_emr_task_1 tasks has a status of running. If so, it would sleep and periodically check until none of the others are running, at which point it would start its own process. I'm not a big fan of this approach, though, because I feel it could cause a race condition where both read (at the same time) that none are running and both start running.
You could use a pool to ensure the parallelism for those tasks is 1.
For each of the *_emr_task_1 tasks, set the pool kwarg to something like pool=emr_task.
Then just go into the webserver -> admin -> pools -> create:
Set the Pool name to match the pool used in your operators, and set Slots to 1.
This will ensure the scheduler will only allow tasks to be queued for that pool up to the number of slots configured, regardless of the parallelism of the rest of Airflow.
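A minimal sketch under those assumptions (the DAG name, dates, and bash command are placeholders, and the Airflow 1.x import path is used); the emr_task pool itself still has to be created with 1 slot as described above:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

dag = DAG("emr_streams", start_date=datetime(2018, 1, 1), schedule_interval="@daily")

# All three *_emr_task_1 tasks share the single-slot "emr_task" pool,
# so the scheduler will only ever run one of them at a time.
for source in ["licappts", "agents", "agentpolicy"]:
    BashOperator(
        task_id="{}_emr_task_1".format(source),
        bash_command="echo running EMR step for {}".format(source),  # placeholder
        pool="emr_task",
        dag=dag,
    )
```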
Suppose I have a cron task running every minute, and each time the task takes more than one minute to run. What will happen? Will the next cron run wait for the first one, or will it start without any checks?
I want to run a cron task every minute, and I don't want overlapping cron tasks in cases like that with a long-running task.
Please help.
It depends on what you run. If it's your own script, you can implement a locking/lock checking mechanism to avoid running duplicates.
But that's not cron's job.
Yes, cron will go ahead and start your 1+ minute-running process every minute until something crashes.
You'll want to put a lock of some sort into your job if you can to basically do this at start-up:
if not get_lock()
    print "Another process is running"
    exit
This, of course, assumes that you own the code running. If you're running a command that you didn't code, then I'd recommend building a shell wrapper that implements the above pseudocoded logic where get_lock() will see if another process like this one is running.
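One possible shell wrapper along those lines, using flock(1) as the get_lock() mechanism (the lock path and command name are placeholders):

```sh
#!/bin/sh
# -n: exit immediately instead of waiting if another instance holds the lock.
if ! flock -n /var/lock/myjob.lock /usr/local/bin/long_running_task; then
    echo "Another process is running (or the task itself failed)"
    exit 1
fi
```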
As others have mentioned, CRON will run your script every minute regardless of whether another instance of your script is still running.
If you want to avoid this and don't fancy implementing your own locking mechanism then you could try using a CRON alternative called The Fat Controller which is a daemon that will continually re-run scripts. You can optionally specify an interval between runs and also optionally specify a maximum execution time so if a script goes AWOL then it can be killed.
There are some use cases and more information on the website:
http://fat-controller.sourceforge.net/
I'm trying to implement a JCL, in a JES2 environment, that launches a set of jobs with dependencies in it, for example:
JOB_A -> JOB_B )
JOB_C -> JOB_D ) -> JOB_E
In other words, JOB_E is only launched when JOB_B and JOB_D are finished.
I can launch JOB_B and JOB_D through the internal reader in JOB_A and JOB_C, but I cannot create the dependency for JOB_E.
I tried to explore JCL resource locking so that JOB_B and JOB_D could hold a data set that JOB_E needs, so that JOB_E would only start when all data sets are available. But JCL only requests data sets at the STEP level and releases them afterwards. If JCL could request all data sets before the job starts, I could implement a sort of mutex between the jobs, for example:
JOB_A locks data set DSN_A
JOB_B waits to get data set DSN_A
JOB_C locks data set DSN_C
JOB_D waits to get data set DSN_C
JOB_E waits to get data set DSN_A and DSN_C
How to do this?
I need this to test a set of JCLs in a development environment without access to a scheduler.
Your comment that you need this to test in a development environment without access to a scheduler makes me wonder if your shop has a scheduler for the production environment. If it does, then your testing will not actually test what will be used in your production environment. Just something to think about if you haven't already.
In answer to your question, one technique is to use a utility such as IEBGENER in the last step of one job to submit a subsequent job.
For example, the last step of JOB_A would execute IEBGENER with SYSUT1 containing the execution JCL for JOB_B and SYSUT2 pointing at INTRDR. This is one technique you could use, though getting JOB_E to run so that it doesn't interfere with any of the other jobs might be tricky, as JOB_E needs to run after both JOB_B and JOB_D complete.
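A rough sketch of such a step (the library and member names are placeholders):

```jcl
//SUBMITB  EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//* SYSUT1 holds the execution JCL for JOB_B; SYSUT2 writes it to the
//* JES2 internal reader, which submits it as a new job.
//SYSUT1   DD  DSN=MY.JCL.LIBRARY(JOBB),DISP=SHR
//SYSUT2   DD  SYSOUT=(A,INTRDR)
```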
Another technique would be to use Rexx in batch mode to submit your jobs using the internal reader and then use the SDSF Rexx interface to watch for when they complete. Essentially you will be writing a special-purpose job scheduler, specific to your set of jobs.
Update, ten years later...
As of z/OS 2.2, IBM has added JES2 Execution Control Statements, which "define the execution sequencing of a group of jobs and the jobs themselves". Prior to using this feature, some configuration must be done by your z/OS Systems Programmer.
I'm wondering why you would invest precious time testing a set of jobs when the PROD set is entirely different and will be handled by some scheduler anyway. Don't mind if I sound crazy, but let me propose my approach too:
Assumption: your jobs take manageable CPU and do NOT need to run in parallel.
A triggers B, B triggers C, C triggers D, and D triggers E. (I know it's not ideal, but your testing goes fine.) I just put it here thinking about what I would do if I were you; I mainly need my testing to go quickly and cleanly. Let me know your thoughts.
Also, let me thank you both for the earlier answers showing that job submission can be managed by means of REXX, in effect creating a small, purpose-built scheduler.