Force qsub (PBS) to wait for the job to finish before exiting

I've been using Sun Grid Engine to run my jobs on a node of a cluster.
Usually I would wait for the job to complete before exiting, and I would use:
qsub -sync yes perl Script.pl
However, I have now moved from Sun Grid Engine to PBS Pro 10.4, and I'm not able to find an instruction corresponding to -sync.
Could someone help me?
Thanks in advance

PBS Pro doesn't have a -sync equivalent, but you might be able to use the -I (interactive) option combined with expect to tell it what code to run, in order to get the same effect.

The equivalent of -sync for PBS is -Wblock=true.
This prevents qsub from exiting until the job has completed. It is perhaps unusual to need this, but I found it useful when using some software that was not designed for HPC.

The software executes multiple instances of a worker program, which run simultaneously. However, it then has to wait for one (or sometimes more) of the instances to complete, and do some work with the results, before spawning the next. If the worker program completes without writing a particular file, it is assumed to have failed.

I was able to write a wrapper script for the worker program that qsubs it, and used the -Wblock=true option to make the wrapper wait for the worker program's job to complete.
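A minimal sketch of such a wrapper, assuming a worker script called worker.sh that is expected to write a file result.dat (both names are placeholders, not from the original post):

#!/bin/bash
# Submit the worker as a PBS job; -Wblock=true makes qsub wait here
# until the job has actually completed.
qsub -Wblock=true worker.sh
# Because qsub blocked, we can check the worker's output immediately.
if [ ! -f result.dat ]; then
    echo "worker appears to have failed (no result file)" >&2
    exit 1
fi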

Related

Timeout including time in queue (JCL on IBM z/OS)

I need to set a timeout in a JCL step that calls a Unix script through BPXBATCH. I did it with:
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
However, after some time I realized that this does not include the time spent in the queue. The documentation says: "This run time refers to actual execution time only, and does not include the time that the job spends in the INPUT or INPUT HOLD queues" (https://supportline.microfocus.com/documentation/books/rd60/cbwjto.htm).
That is Micro Focus JCL documentation, but I verified that the behavior is the same on IBM z/OS.
So even if I set the timeout to 10 seconds, the step can take several minutes if the queue is busy with other work. I need a timeout that kills the step no matter why it took so long. I haven't been able to find what I need. Please help.
z/OS batch really isn't the best choice for time-critical work. As you figured out, the JCL "TIME" parameter is about CPU time consumption, not an elapsed time control. If this is a business-critical need, then by all means talk to your z/OS administrators - they can certainly configure your system such that your job is very likely to run without delay, but this isn't usually default behavior.
You don't provide a lot of detail as to what else your job might be doing and how it gets submitted. If you have the ability to control how your job is submitted, one option might be to spawn your shell script directly rather than submitting a batch process to run your script.
For example, what you've described is submitting JCL that spawns BPXBATCH, then BPXBATCH spawns your shell script. Instead, you might write a small C program that simply calls "spawn()" to run the shell as a distinct UNIX process - that's not difficult, depending on how you're submitting the JCL you shared. You cut out the need for the batch job - just run your script directly.
If you're running in a TSO environment, the OSHELL command lets you interactively run your script. You can even automate the whole process with a simple REXX script, and none of this requires a pass through a batch initiator.
If your site runs SSH or similar, you might consider launching your script through an SSH command - this even works across a network. SSH lets you launch a shell session and pass a command for execution...again, there's no JCL or input queue here.
If your administrators would allow it, another alternative would be to run your JCL via a "START" command. Unlike batch JCL, when a START command is encountered, the work you're starting runs immediately - there's no input queue for started tasks. Start commands can be issued from JCL too, and since they're issued as the JCL is scanned and not when the job starts, these are fairly immediate too.
Inside your shell script, it's pretty easy to set up an elapsed time limit - there are examples here.
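For example, a minimal sketch of a watchdog inside the script (the 600-second limit and long_running_command are placeholders):

#!/bin/sh
LIMIT=600                  # elapsed-time limit in seconds (placeholder)
long_running_command &     # start the real work in the background
WORKPID=$!
# watchdog: kill the work if it is still running after $LIMIT seconds
( sleep $LIMIT && kill $WORKPID 2>/dev/null ) &
WATCHPID=$!
wait $WORKPID              # returns when the work finishes or is killed
kill $WATCHPID 2>/dev/null # cancel the watchdog if the work finished first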
I see a couple of problems in your code...
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
First, you have a space between BPXBATCH, and PARM=, which means your shell script will not be executed and which may result in a JCL error.
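With the space removed, the statement would read:
//STEPX EXEC PGM=BPXBATCH,PARM='sh /x.sh',TIME=(,10)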
Second, you are using the TIME parameter of the EXEC statement, which limits CPU time, yet you reference a desire to cancel the job step if it waits more than some amount of time in the input queue, which is a clock time limitation.
There is no way to cancel the job from the job itself via JCL parameters based on clock time, either including or excluding time spent in the input queue.
If you really need to do this, I suggest you look into capabilities of your shop's job scheduler package. You might want to reexamine why you need to cancel a job if it doesn't run to completion within 10 clock seconds after you submit it.

Automatic qsub job completion status notification

I have a shell script that calls five other scripts. The first script creates 50 qsub jobs in the cluster. Individual job execution times vary from a couple of minutes to an hour. I need to know when all 50 jobs have finished, because after they complete I need to run the second script. How do I find out whether all the qsub jobs have completed? One possible solution is an infinite loop that checks the job status using the qstat command with the job ID, but then I would have to check the job status continuously, which is not an elegant solution. Is it possible for a qsub job to notify me by itself after execution, so that I don't need to monitor the job status frequently?
qsub is capable of handling job dependencies, using -W depend=afterok:jobid.
e.g.
#!/bin/bash
# commands to run on the cluster
COMMANDS="script1.sh script2.sh script3.sh"
# initialize the JOBIDS variable
JOBIDS=""
# queue all commands
for CMD in $COMMANDS; do
    # queue the command and append its job id (qsub prints the id on stdout)
    JOBIDS="$JOBIDS:$(qsub $CMD)"
done
# queue post-processing, dependent on all submitted jobs
# ($JOBIDS already starts with ':', so it is appended directly to 'afterok')
qsub -W depend=afterok$JOBIDS postprocessing.sh
exit 0
More examples can be found here: http://beige.ucs.indiana.edu/I590/node45.html
I have never heard of a way to do that, and I would be really interested if someone came up with a good answer.
In the meantime, I suggest you use file tricks: either your script writes a marker file at the end, or you check for the existence of the log files (assuming they are created only at the end).
while [ ! -e ~/logs/myscript.log-1 ]; do
    sleep 30
done

Does a Cron job overwrite itself

I can't seem to get my script to run in parallel every minute via cron on Ubuntu 14.
I have created a cron job which executes every minute. The cron job executes a script that runs for much longer than a minute. When a minute expires, it seems the new cron execution overwrites the previous one. Is this correct? Any ideas are welcome.
I need concurrent, independently running jobs. The cron job runs a script which queries a MySQL database. The idea is to poll the DB and, if there is work to do, execute the script in its own process.
cron will not stop a previous execution of a process to start a new one. cron will simply kick off the new process even though the old process is still running.
If you need cron to terminate the previous process, you'll need to modify your script to handle that itself.
You need a locking mechanism to identify that the script is already running.
There are several ways of doing this, but you need to be careful to use an atomic method.
I use lock directories, as creating a directory is guaranteed to be atomic:
LOCKDIR=/tmp/myproc.lock
# mkdir is atomic: it fails if the lock directory already exists
if ! mkdir "$LOCKDIR" >/dev/null 2>&1
then
    echo "Processing already running - terminating" >&2
    exit 1
fi
# remove the lock however the script exits
trap "rm -rf $LOCKDIR" EXIT
This is a common occurrence. Try adding a check in your script to see if a lockfile already exists. If it does, exit. If not, continue.
Cron jobs do not overwrite each other. They do, however, have the possibility of overlapping. Unless your script explicitly kills any pre-existing process, it won't stop the previously running instance.
However, introducing lockfiles will save you from all this confusion altogether.
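If you'd rather not implement the lock by hand, the flock(1) utility from util-linux (present on Ubuntu) can do it for you. A sketch of a crontab entry (the lock path and script path are placeholders):

* * * * * /usr/bin/flock -n /tmp/myscript.lock /path/to/myscript.sh

With -n, flock exits immediately instead of waiting when the previous run still holds the lock, so overlapping runs are simply skipped.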

Linux job scheduler launching a script 2 hours after it terminates

I have a script that runs for an unknown period of time, depending on its input. It can run for one hour when little data is available, or for 8 hours if there is much data to process.
I need to run it periodically, specifically 2 hours after the previous run has completed.
Is there a utility to do that?
Use 'at' instead of 'cron' and at the end of your script add:
echo "$0 $*" | at now + 2 hours
(at reads the commands to run from standard input, so the script pipes a command that re-runs itself into at.)
This means that each occurrence is chained - so if it terminates abnormally the next instance won't be scheduled - but I don't think there's a more robust solution without adding a lot of code/complexity.
I don't like the at solution proposed, so here is another solution:
1. Use cron to launch your application every two hours.
2. On startup, your application(*) checks whether a pidfile is present. If it is, read its contents (a pid) and check whether that pid belongs to an existing process. If it is the pid of a running process, exit. If it is the pid of a zombie process, the previous job ended unexpectedly; delete the pidfile and go to step 3.
3. Create a new pidfile, put your own pid into it, and proceed to do your job.
*: In order not to add complexity, the application I mentioned could also be a simple wrapper that spawns your code using exec.
This solution can also be scripted quite easily.
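For example, a rough sketch of such a wrapper (the pidfile path and my_real_job are placeholders, and the zombie case from the steps above is simplified to a plain liveness check):

#!/bin/bash
PIDFILE=/tmp/myjob.pid
# If the recorded pid still belongs to a live process, another instance is running.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    exit 1
fi
# Record our pid; after exec it is also the pid of the real job.
echo $$ > "$PIDFILE"
# Replace the wrapper with the real job, as suggested above.
# A stale pidfile left behind is harmless: the kill -0 test fails for it.
exec ./my_real_job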
Hope it helps,
SnoopyBBT
If it looks complicated, here is another, dirtier solution:
while true ; do
    ./your_application
    sleep 7200    # two hours between runs
done
Hope this helps,
SnoopyBBT

How to handle overtime crons

Suppose I have a cron task that runs every minute, and each run takes more than one minute to complete. What will happen? Will the next cron invocation wait for the first one, or will it run without any checks?
I want to run a cron task every minute, but I don't want overlapping cron tasks in the case of a long-running task.
Please help.
It depends on what you run. If it's your own script, you can implement a locking/lock checking mechanism to avoid running duplicates.
But that's not cron's job.
Yes, cron will go ahead and start a new instance of your longer-than-a-minute process every minute until something crashes.
You'll want to put a lock of some sort into your job if you can to basically do this at start-up:
if not get_lock()
    print "Another process is running"
    exit
This, of course, assumes that you own the code running. If you're running a command that you didn't code, then I'd recommend building a shell wrapper that implements the above pseudocoded logic where get_lock() will see if another process like this one is running.
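A sketch of such a wrapper in shell, using flock(1) to play the part of get_lock() (the lock path and wrapped command are placeholders):

#!/bin/bash
LOCKFILE=/tmp/mytask.lock
exec 9> "$LOCKFILE"        # open the lock file on file descriptor 9
if ! flock -n 9; then      # try to take the lock without waiting
    echo "Another process is running" >&2
    exit 1
fi
# The lock is held for the lifetime of fd 9, i.e. until this script exits.
/path/to/the/real/command "$@"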
As others have mentioned, CRON will run your script every minute regardless of whether another instance of your script is still running.
If you want to avoid this and don't fancy implementing your own locking mechanism, then you could try a CRON alternative called The Fat Controller, which is a daemon that continually re-runs scripts. You can optionally specify an interval between runs, and also a maximum execution time so that if a script goes AWOL it can be killed.
There are some use cases and more information on the website:
http://fat-controller.sourceforge.net/
