JCL error to create partitioned data sets - mainframe

Here is my JCL.
Can someone please help me understand why I get this error message? I'm getting so frustrated.

You need a blank after the JOB keyword.
//TUTOR001 JOB (123),.....
Because the submit program did not find a JOB card, it generates a default JOB statement for you.

There might be a few issues here, depending on what you're trying to do...
First is the issue mentioned by Fritz - you need a space after "JOB". The TSO SUBMIT command parses the JCL you submit, and if it thinks there's no JOB statement, it automatically generates one for you based on the information associated with your TSO session. You can see from the JCL in your output that this is what happened here.
A little tidbit of information here is that if you're happy with the JOB statement generated by SUBMIT, then you don't need to include one in your JCL...no reason your JCL couldn't just start with the // EXEC PGM=IEFBR14 line. Sometimes this is done so that different users can submit the same JCL without needing to alter the JOB statement information.
Second, your question says that you're trying to create a partitioned dataset, but what you have coded is a sequential file. If you really mean to create a PDS, then you'll need to make two simple changes:
Change the DSORG from PS (sequential) to PO (partitioned)
Add a directory block count to the SPACE...you have (1,1) - that's one track primary allocation, and one track secondary allocation. This would need to have a third number for the number of directory blocks to allocate, such as (1,1,1). If you don't know how many to specify, a good rule of thumb is that you can have about four PDS members for every directory block.
One final comment is the RLSE...since IEFBR14 doesn't actually open the dataset you just created, RLSE doesn't really do what you expect. The typical use of RLSE is for programs that create files of varying sizes...you tend to set the allocation to the largest you expect, and RLSE trims back to the nearest extent should you write less.
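Putting those changes together, a sketch of a step that allocates a PDS might look like this (the dataset name, accounting information, and DCB attributes are illustrative placeholders, not your actual values):
//TUTOR001 JOB (123),'CREATE PDS',CLASS=A,MSGCLASS=X
//STEP01   EXEC PGM=IEFBR14
//NEWPDS   DD DSN=TUTOR001.MY.PDS,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(TRK,(1,1,1)),
//            DCB=(DSORG=PO,RECFM=FB,LRECL=80,BLKSIZE=8000)
Note the third value in SPACE for directory blocks and DSORG=PO; RLSE is omitted here for the reason above.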

Related

Lock CustomRecord Serials Table

We have a Fulfillment script in 1.0 that pulls a serial number from the custom record based on SKU and other parameters. A search is created based on SKU and the first available record is used. One of the criteria for the search is that there is no end user associated with the key.
We are working on converting the script to 2.0. What I am unable to figure out is: if the script (say the above functionality is put into the Map function of a MR script) runs on multiple queues/instances, does that mean there is a potential chance that two instances might hit the same entry of the custom record? What is a workaround to ensure that X instances of the Map function don't end up using the same SN/key? The way this could happen in 2.0 is that two instances of Map make a search request on the custom record at the same time and get the same results, since the first Map has not completed processing and marked the key as used (by updating the end user information on the key).
Is there a better way to accomplish this in 2.0, or do I need to go about creating another custom record that the script would have to read to be able to pull a key off of? Also, is there a wait I can implement if the table is locked?
Thx
Probably the best thing to do here would be to break your assignment process into two parts, or restructure it so you end up with a Scheduled script that you give an explicit queue. That way your access to serial numbers will be serialized and no extra work would need to be done by you. If you need a hint on processing large batches with SS2, see https://github.com/BKnights/KotN-Netsuite-2 for a utility script that you can require for large batch processing.
If that's not possible then what I have done is the following:
Create another custom record called "Lock Table". It must have at least an id and a text field. Create one record and note its internal id. If you leave it with a name column then give it a name that reflects its purpose.
When you want to pull a serial number you:
read from Lock Table with a lookup field function. If it's not 0 then do a wait*.
If it's 0 then generate a random integer from 0 to MAX_SAFE_INTEGER.
try to write that to the "Lock Table" with a submit field function. Then read that back right away. If it contains your random number then you have the lock. If it doesn't then wait*.
If you have the lock then go ahead and assign the serial number. Release the lock by writing back a 0.
wait:
This is tough in NS. Since I am not expecting the s/n assignment to take much time, I've sometimes implemented a wait by simply looping through what I hope is a CPU-intensive task that has no governance cost, until some time has elapsed.

Referring to Dataset Allocated in REXX

We have a REXX program which creates a dataset LOG.DYYMMDD.THHMMSS.OUT and saves it under DDNAME LOGNM.
We call the REXX program from JCL using the IKJEFT1B utility.
How do I use this dataset for further processing in the JCL? I mean, how do I refer to it in the JCL, since the dataset name is dynamically created?
Are you creating the dataset in the REXX program using the TSO ALLOCATE command, the TSO COPY command, or similar TSO commands?
If you are, then you have a problem: you will not be able to reference the dataset safely in following steps (there are methods that will work in some versions of JES).
I would suggest you recode the REXX and allocate the dataset in the JCL.
//LOGNM DD DSN=LOG.DYYMMDD.THHMMSS.OUT,DISP=(NEW,CATLG), ....
If you already allocate the dataset using JCL, i.e. with a DD statement like
//LOGNM DD DSN=LOG.DYYMMDD.THHMMSS.OUT,DISP=(NEW,CATLG), ....
or
//LOGNM DD DSN=LOG.DYYMMDD.THHMMSS.OUT,DISP=OLD
you should have no problem using the Dataset in following steps.
If you allocate and delete the dataset using JCL, i.e. with a DD statement like
//LOGNM DD DSN=LOG.DYYMMDD.THHMMSS.OUT,DISP=(NEW,DELETE), ....
then change the DELETE to CATLG (or PASS).
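For illustration, a sketch of a step that allocates the log dataset in the JCL rather than inside the REXX (the exec library and exec name are invented placeholders, and note that DYYMMDD and THHMMSS coded this way are literal qualifiers, not substituted dates):
//RUNREXX  EXEC PGM=IKJEFT1B
//SYSEXEC  DD DISP=SHR,DSN=MY.REXX.EXECLIB
//SYSTSPRT DD SYSOUT=*
//LOGNM    DD DSN=LOG.DYYMMDD.THHMMSS.OUT,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=133)
//SYSTSIN  DD *
 %MYREXX
/*
//* A later step (or JOB) can then refer to the dataset by name:
//NEXTSTEP EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=LOG.DYYMMDD.THHMMSS.OUT
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY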
Once your dataset is created, you can refer to it in JCL exactly as you would any other dataset. It does not matter that it was created dynamically, as the dataset disposition is included when you create it. It gets processed exactly as though it were created with a JCL DD statement. I'm not aware that there is even an indication that it was created dynamically, once it has been created. It is no different from any other PS dataset.
If cataloged:
//SOMENAME DD DISP=SHR,DSN=LOG.DYYMMDD.THHMMSS.OUT
If not cataloged, catalog it, then see above.
If deleted when closed, don't delete it but catalog it, then see above.
Note: I have assumed that you are creating your dataset in one JOB and accessing it in others. If you are accessing it in the same JOB, take good note of Bruce Martin's answer. Your dataset will be "hidden" from the normal disposition processing of the JOB when it is submitted, because the dataset is only created after that point, when the JOB is actually running (if it gets as far as running; it may fail immediately with a "JCL ERROR" without even getting close to running).
Personally I'd do it in separate JOBs, but some people think they are keeping things simple when they are not.

SQLyog Job Agent (SJA) for Linux: how to write something like 'all table but a single one'

I am trying to learn SQLyog Job Agent (SJA).
I am on a Linux machine, and use SJA from within a bash script via the command line: ./sja myschema.xml
I need to sync a database of almost 100 tables with its local clone.
Since a single table stores a few config values, which I do not wish to sync, it seems I need to write a myschema.xml in which I list all the remaining 99 tables.
Question is: is there a way to specify syncing all the tables but a single one?
I hope my question is clear. I appreciate your help.
If you are using the latest version of SQLyog, you are given the option to choose the tables to sync, and the option to generate an XML job file at the end of the database synchronisation wizard reflecting the operation you've opted to perform. This will in effect list the other 99 tables in the XML file itself for you, but it will give you what you are looking for. I don't think you would be doing anything in particular with an individual table, since you are specifying all tables in a single element.

How do we get the current date in a PS file name qualifier using JCL?

How do we get the current date in a PS file name qualifier using JCL?
Example output file name: Z000417.BCV.TEST.D120713 (YYMMDD format).
This can be done, but not necessarily in a straightforward manner. The straightforward manner would be to use a system symbol in your JCL. Unfortunately, for batch jobs this only works on more recent versions of z/OS, and only if it has been enabled for the job class.
Prior to z/OS v2, IBM's stated reason this didn't work is that your job could be submitted on a machine in London, the JCL could be interpreted on a machine in Sydney, and the job could actually execute on a machine in Chicago. Which date (or time) should be on the dataset? There is no one correct answer, and so we all created our own solutions to the problem that incorporates the answer we believe to be correct for our organization.
If you are able to use system symbols in your batch job JCL, there is a list of valid symbols available to you.
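As an example, if your installation does allow system symbols in batch JCL (on z/OS 2.1 and later with the job class set up to permit them), a DD using the dynamic system symbol &LYYMMDD (local date in YYMMDD form) might look something like this; the rest of the DD parameters are illustrative:
//OUTFILE  DD DSN=Z000417.BCV.TEST.D&LYYMMDD,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80)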
One way to accomplish your goal is to use a job scheduling tool. I am familiar with Control-M, which uses what are called "auto-edit variables." These are special constructs that the product provides. The Control-M solution would be to code your dataset name as
Z000417.BCV.TEST.D%%ODATE.
Some shops implement a scheduled job that creates a member in a shared PDS. The member consists of a list of standard JCL SET statements...
// SET YYMMDD=120713
// SET CCYYMMDD=20120713
// SET MMDDYY=071312
...and so on. This member is created once a day, at midnight, by a job scheduled for that purpose. The job executes a program written in that shop to create these SET statements.
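As an illustration (the library and member names here are invented), a job could then pull those SET statements in with an INCLUDE and use a symbol in the dataset name:
//MYJOB    JOB (ACCT),'DAILY FILE',CLASS=A,MSGCLASS=X
//         JCLLIB ORDER=SHARED.JCL.PDS
//         INCLUDE MEMBER=TODAY
//STEP01   EXEC PGM=IEFBR14
//NEWFILE  DD DSN=Z000417.BCV.TEST.D&YYMMDD,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80)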
Another answer is that you could use ISPF file tailoring in batch to accomplish your goal. This would work because the date would be set in the JCL before the job was submitted. While this will work, I don't recommend it unless you're already familiar with file tailoring and executing ISPF in batch in your shop. I think it's a complicated way to do something this simple when it can be accomplished in the other ways outlined in this reply.
You could use a GDG instead of a dataset with a date in its name. If what you're looking for is a unique name, that's what GDGs accomplish (among other things).
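For reference, a minimal sketch of defining a GDG base with IDCAMS (the base name and limit are illustrative):
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(Z000417.BCV.TEST) LIMIT(30) SCRATCH)
/*
//* In later jobs, DSN=Z000417.BCV.TEST(+1) creates a new generation,
//* and DSN=Z000417.BCV.TEST(0) reads back the most recent one.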
The last idea that comes to my mind is to create your dataset with a name not containing the date, then use a Unix System Services script to construct an ALTER command (specifying the NEWNAME parameter) for IDCAMS, then execute IDCAMS to rename your dataset.
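A sketch of that last idea, assuming the Unix System Services script has written the generated ALTER statement into a control dataset (all names here are placeholders):
//RENAME   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DISP=SHR,DSN=Z000417.RENAME.CNTL
//* where Z000417.RENAME.CNTL would contain a statement such as:
//*   ALTER Z000417.BCV.TEST.NODATE NEWNAME(Z000417.BCV.TEST.D120713)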
If you are loading the jobs using JOBTRAC/Control-M schedulers, getting the date in the required format is fairly easy. The format could be 'OSYMD', which will be replaced by the scheduler on the fly before it loads the job. These schedulers have many formats to meet such needs.
You can also make use of a JCL utility whose name I don't remember exactly. It takes the file name from a SYSIN dataset and uses it as the DSN of the output. The SYSIN dataset can be created in a previous step using simple DFSORT &DATE commands. Let me know if you need the syntax; I'd have to google it and try it hands-on.

Merge flat files

I am trying to create a JCL for merging flat files using IEBGENER. The number of input files is not constant. Can we do it using IEBGENER?
Can you override the IEBGENER SYSUT1 DD when invoking the PROC? Something like:
//         EXEC procedure
//procstep.SYSUT1 DD DSN=first.copy.file,DISP=SHR
//                DD DSN=second.copy.file,DISP=SHR
//                DD DSN=third.copy.file,DISP=SHR
etcetera...
Where procedure is the catalogued procedure and procstep is the IEBGENER step.
When multiple datasets are concatenated under a single ddname like this, they are read one after another. As far as IEBGENER is concerned, they look like a single input dataset.
This is simple to do even when the input file count is not certain at the time the job runs. Have the source create the files as generations of a GDG base, and specify the base as input, which picks up all the generations created so far. But note that every time, all generations created so far will be considered; to avoid that, create a temporary file with the data from all the generations and then delete them, so that next time you'll have a fresh set of generations to consider. Let me know if that isn't clear!
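A minimal sketch of the GDG approach (the dataset names are illustrative); coding the GDG base name with no relative generation number picks up all existing generations as a concatenation:
//MERGE    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=MY.INPUT.GDG
//SYSUT2   DD DSN=MY.MERGED.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE),
//            DCB=(RECFM=FB,LRECL=80)
//SYSIN    DD DUMMY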
A. If you cannot have GDGs, then use empty files for the missing ones; in particular, you then need to settle on a limit (10, 20?).
B. If you are using Control-M or similar, you can pre-process the DD statement with an INCLUDE (see the sketch after this list). The job cannot be submitted until the include material is ready, and building it must be a separate job. That is to say, dynamically build your JCL from alternative decks based on the number of files.
C. You may have to write a program or CLIST to test for existence and concatenate the files.
Anybody using GDGs and reading them all at once should always remember that the latest generations are read first.
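For option B, a sketch of the kind of INCLUDE-based concatenation meant above; the library, member, and dataset names are invented, and the member INMORE would be (re)built by an earlier job so that it contains one unnamed DD statement per additional input file:
//         JCLLIB ORDER=MY.INCLUDE.PDS
//MERGE    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=first.copy.file
//         INCLUDE MEMBER=INMORE
//SYSUT2   DD DSN=MY.MERGED.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,10),RLSE)
//SYSIN    DD DUMMY
where member INMORE might contain, for example:
//         DD DISP=SHR,DSN=second.copy.file
//         DD DISP=SHR,DSN=third.copy.file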
