How to cancel a JCL job (Mainframe) in SDSF? (OZA1) error - mainframe

I received a JCL Error after submitting a job.
20.46.44 JOB08763 $HASP165 WPR062M ENDED AT OZA1 - JCL ERROR CN(INTERNAL).
and in SDSF I am seeing this
How can I fix this (Cancel the job)? What is the reason for this error?
Thanks in advance.

If you are authorized to do so, you can cancel a job in SDSF by putting a C in the NP column and pressing the Enter key. But that's your TSO session (the jobid starts with TSU) and you probably don't want to cancel it. The message you received indicates that the job you submitted had a JCL error and ended, so there's no need to cancel it; it's no longer running.

The job shown in the screen shot is your current TSO session; you don't want to cancel this, do you? (By the way, please post text instead of images whenever possible.)
The jobname of the one in the screen shot is WPR062 and the jobid is TSU08747. The TSU prefix in the jobid tells you it's a TSO session.
The job (not the TSO session) in error, the one which gave you this message:
20.46.44 JOB08763 $HASP165 WPR062M ENDED AT OZA1 - JCL ERROR CN(INTERNAL)
has the jobname WPR062M with jobid JOB08763. The JOB prefix tells you it's a batch job.
You need to look at that job's output in SDSF to find out what caused the JCL error.
For completeness:
Started tasks have a jobid prefix of STC.
If your system is configured to allow more than 99,999 active jobids, the prefixes become single characters: T for TSO sessions, J for batch jobs, and S for started tasks.

As already stated, the SDSF output you are looking at is showing your TSO user ID. That session is long-running and is not the job that is in error.
According to the error message
20.46.44 JOB08763 $HASP165 WPR062M ENDED AT OZA1 - JCL ERROR CN(INTERNAL)
The actual jobname is WPR062M. To investigate the issue, I suggest that you use the command PREFIX WPR062* and then the H command; the output you are looking for is in the Held queue.
Investigate that job by putting an S in the command column next to it (note: I don't have SDSF on my system, but the command column is located on the left side of the screen).
In that job's output will be the reason for the JCL error.
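As a rough outline (assuming a fairly standard SDSF setup), the sequence looks something like this:

PREFIX WPR062*     limit the display to the failing jobname
H                  go to the Held output queue (or ST for the Status panel)
S                  type S in the NP column next to JOB08763 to browse its output

Within that output, the JESYSMSG and JESJCL datasets normally show the failing statement and an IEF-prefixed message explaining the JCL error.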

Related

Queue SLURM jobs to run X minutes after each other

I have been trying to search around for an example of how to use the following option for job dependencies, -d, --dependency=<dependency_list>.
In the documentation, the syntax is shown to be after:job_id[[+time][:jobid[+time]...]]
But I am unable to find any examples of this, and to be honest I find the presentation of the syntax confusing.
I have tried sbatch --dependency=after:123456[+5] myjob.slurm and sbatch --dependency=after:123456+5 myjob.slurm, but this yields the error
sbatch: error: Batch job submission failed: Job dependency problem.
How can I add a dependency to Job B so that it starts X minutes after Job A starts?
The square brackets [...] indicate an optional parameter value and should not appear in the actual parameter value. Try with
sbatch --dependency=after:123456+5 myjob.slurm
With guidance from damienfrancois to exclude the brackets, I tried the following
sbatch --dependency=after:123456:+5 myjob.slurm
This seems to work beautifully; the job is listed in the queue as dependent.
EDIT: This is for version 19.05.07
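For anyone else trying this, here is a sketch of the whole chain (script names are placeholders; --parsable makes sbatch print just the job ID):

#!/bin/bash
# Submit job A and capture its numeric job ID.
jobA=$(sbatch --parsable jobA.slurm)
# Queue job B to start 5 minutes after job A starts (syntax from the sbatch docs).
sbatch --dependency=after:${jobA}+5 jobB.slurm
# On the poster's Slurm 19.05.7, the form the controller accepted was instead:
#   sbatch --dependency=after:${jobA}:+5 jobB.slurm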

Cannot get SDSF to respond to TSO or batch commands

I am trying to write a JCL Job Step that will retrieve the JESMSGLG, JESJCL, and JESLOG datasets of the active (this) job. The idea here is that I need to collect the log (from the beginning of the job to now) and record it in a data set before it ends execution.
So I have:
// EXEC PGM=SDSF
//MYOUT DD SYSOUT=* (to be changed to a dataset in the future)
//ISFOUT DD SYSOUT=*
//ISFIN DD *
SET CONSOLE BATCH
PREFIX *
OWNER myid
DA OJOB
++S
PRINT FILE MYOUT
FIND JESMSGLG FIRST
++X
FIND JESJCL FIRST
++X
FIND JESLOG FIRST
++X
PRINT CLOSE
When I run the job, all I get is CC=0000 and a printout of the SDSF primary panel in ISFOUT.
If I try this under TSO with the SDSF command, again, all I get is the primary panel. If I enter any command (even an invalid one) it just seems to take the command and silently ignore it.
I can do this under ISPF just fine.
Any ideas as to what to look for to see what I am doing wrong or missing? It's pretty clear to me that this may well be a setup/invocation/security issue, but I don't know how to debug it when all I seem to get is CC=0000.
Yup, that was it! I added PARM='++24,80' to the // EXEC PGM=SDSF statement and it now works. I'm not entirely sure why, but it may have been a local installation error of SDSF.
It turns out that the commands I listed above are not quite right, but that's not relevant to this question.
Thank you, Kevin, for your time and interest.
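For reference, a sketch of the step with that PARM applied (the step name is made up, and the ISFIN commands are omitted since, as noted above, they still needed adjusting):

//SDSFSTEP EXEC PGM=SDSF,PARM='++24,80'
//ISFOUT   DD SYSOUT=*
//MYOUT    DD SYSOUT=*
//ISFIN    DD *
  (SDSF batch commands go here)
/*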

How to order a Control-M job using REXX, like the Control-M utility CTMAPI?

I have to order a few jobs in Control-M from different scheduling tables. This is a manual task, so I want to automate it using REXX.
I found the following in the 'Order or Force under Batch, REXX or CLIST' section of the Control-M User Guide:
EXEC CTMAPI PARM='ORDER variable'
I could not find the syntax to call CTMAPI using REXX.
ADDRESS 'LINKMVS' is the equivalent of // EXEC PGM=something,PARM='whatever' in REXX. I don't know what the variable is supposed to be, but since this is Control-M, I am going to assume job name. A very simple example:
say 'Enter name of job'
pull jobname
parmvar = 'ORDER' jobname
ADDRESS 'LINKMVS' 'CTMAPI parmvar'
Please note that for LINKMVS, the variable name goes inside the string passed. The LINKMVS environment substitutes the variable automatically. For example, if I entered MYJOB at the prompt, LINKMVS would build a PARM string of 'ORDER MYJOB'. This is the exact equivalent of
// EXEC PGM=CTMAPI,PARM='ORDER MYJOB'
This IBM® Knowledge Center page for the z/OS 2.3 TSO/E REXX Reference manual shows several examples of calling a program in the same manner as // EXEC PGM=,PARM= (item 1). Items 5 through 9 show different ways of using ADDRESS 'LINKMVS'; note how variables are treated in each example.
After suggestions from NicC and zarchasmpgmr and some research, I am finally able to order the job with the CTMJOB utility. I searched for the load library and called the utility from TSO using REXX.
/*****REXX*******/
/* Call the Control-M CTMJOB utility from the load library and */
/* pass it the ORDER request as the parameter string.          */
ADDRESS TSO
"CALL 'MY.IN.LOAD(CTMJOB)'",
  "'ORDER DSN=MY.SCHED.LIB TABLE=SCHDTBL,JOB=JOBNAME,DATE=DATE'"
EXIT
Details are in the INCONTROL for z/OS Utilities Guide; this document was very useful.
http://documents.bmc.com/supportu/952/56/64/195664/195664.pdf

Is it possible to change a job ID to something human-readable?

I'd like to send myself a text when a job is finished. I understand how to change the job name so that the .o and .e files have the appropriate name. But I'm not sure if there's a way to change the job ID from a string of numbers to a specified key so I know which job it is. I usually have a lot of different jobs going at once, so it's difficult to remember all the different job ID numbers. Is there a way in the .pbs script to change the job ID so that when I get the message I can see which job it is rather than just a string of numbers?
If you are using Torque and add the -N flag, then you can add a name to the job. It will still use the numeric portion of the job id as part of the output and error filenames, but this allows you to add something to help you distinguish among your jobs. For example:
$ echo ls | qsub -N whatevernameyouplease
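For the "send myself a text" part, a minimal PBS script sketch that combines -N with Torque's mail directives might look like the following (the job name, address, and workload are placeholders; an email-to-SMS gateway can turn the mail into a text):

#!/bin/bash
# Job name (placeholder): shows up in qstat and in the .o/.e filenames.
#PBS -N nightly_analysis
# Mail when the job ends (-m e), to the address given with -M.
#PBS -m e
#PBS -M you@example.com

cd "$PBS_O_WORKDIR"      # run from the directory the job was submitted from
./run_analysis.sh        # placeholder for the real workload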

Puppet: How to loop to check the status of an exec command

In one of my Puppet manifests, I am running an exec to run a job on a remote server and dump the output into a file. The file contents are JSON with a couple of fields: status, which indicates whether the job is complete, and jobid, which provides the ID of the job. If the status is complete, I use the jobid to query the server for more information. If the status is not complete, I need to keep looping until the job is complete.
I realize that exec has "try_sleep", but since I need to parse the JSON file after the exec, I don't believe I can use it.
How do I go about solving this one?
As requested by Alex, here is the sequence:
Use an exec to run some statement > /tmp/phase1.json
Parse /tmp/phase1.json and extract the fields status and jobid
If the status field is not complete, keep looping until it completes (a rough sketch of this loop follows below)
If the status field is complete, take the jobid and perform further processing
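For illustration only, these steps could be wrapped in a script that the exec (or a later stage) calls; every command name and JSON field below is a placeholder, and jq is just one way to pull values out of the file:

#!/bin/bash
# Illustrative polling wrapper (placeholder commands and field names).
run_remote_job > /tmp/phase1.json              # step 1: produce the report
status=$(jq -r '.status' /tmp/phase1.json)     # step 2: extract the status field
while [ "$status" != "complete" ]; do          # step 3: loop until complete
  sleep 30
  run_remote_job > /tmp/phase1.json
  status=$(jq -r '.status' /tmp/phase1.json)
done
jobid=$(jq -r '.jobid' /tmp/phase1.json)       # step 4: hand the jobid on
echo "$jobid"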
