How to refer to a GDG generation in step 3 when the generation is created in step 2 - mainframe

I have faced an issue with GDG generations in production.
A new generation is created in STEP 2:
//INP DD DSN=sample.test(+1),
// DISP=(,CATLG,KEEP),
// SPACE=(CYL,(50,20),RLSE),
// DCB=(RECFM=FB,LRECL=1020,BLKSIZE=4080)
The same generation is referred to in STEP 3:
//step3 exec PGM=SORT
//SORTIN DD DSN=sample.test(+1),
// DISP=SHR
//SORTOUT DD DSN=xxxx.yyyy,
// DISP=(NEW,CATLG,DELETE),
// UNIT=(SYSDA,9),DCB=(RECFM=FB,LRECL=132),
// SPACE=(CYL,(50,20),RLSE)
I coded it like this, but it fails with a JCL error.
Can anyone help with why this fails?
As I understand the theory, only once the job has completed should we refer to the generation as (0); within the same job we have to refer to it as (+1). Yet when I changed step 3 to refer to generation (0), the job ran fine.

You need to specify the message number, IEF…
The default (first) disposition on the INP DD is NEW, so the message indicates that the dataset already exists - without seeing the other steps in the deck it's hard to help.

Your idea about GDGs is correct. You will get a JCL error in step 3 if the step creating the generation fails, is skipped because of a COND / IF statement, or step 2 does not open the GDG (non-SMS?), because sample.test(+1) then does not exist. There are other possibilities that could cause a JCL error (e.g. the dataset cannot be allocated), but I think this is the most likely. It would be easier if the full JCL and error messages were listed.
There are two possible solutions (in most cases choose option 1):
1. Add an IEBGENER step (step1a) before step 2 to create the new generation; this should ensure the generation exists.
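A minimal sketch of such a step, reusing the dataset name and DCB attributes from the question (the DUMMY-input trick simply catalogues an empty generation; whether an empty generation is acceptable depends on your downstream processing):

```jcl
//STEP1A   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DUMMY,DCB=(RECFM=FB,LRECL=1020,BLKSIZE=4080)
//SYSUT2   DD DSN=sample.test(+1),
//            DISP=(,CATLG,CATLG),
//            SPACE=(CYL,(50,20),RLSE),
//            DCB=(RECFM=FB,LRECL=1020,BLKSIZE=4080)
```

Note that relative generation numbers are fixed for the duration of a job, so step 2 would then refer to this same generation as sample.test(+1) with DISP=OLD rather than creating it again.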
2. Add a COND to step 3:
//step3 exec PGM=SORT,COND=(0,NE)
and change the disposition in step 2 to DISP=(,CATLG,CATLG), so the generation is catalogued even if the step fails:
//INP DD DSN=sample.test(+1),
// DISP=(,CATLG,CATLG),
Only choose option 2 if you want the output even when the job fails.
I would change DISP=(,CATLG,KEEP) to one of DISP=(,CATLG), DISP=(,CATLG,DELETE) or DISP=(,CATLG,CATLG). In the old days, KEEP allowed you to create uncatalogued datasets.
Alternatively, SMS might be an issue.

You are using the wrong GDG index.
You should use the (0) index to refer to the latest dataset added to the GDG.
(+1) indicates that a new dataset should be added to the GDG.
More information: IBM Retrieving a Generation Data Set

It's difficult to interpret the exact issue with limited information about which DD statement is in error. I think you have the right idea regarding the use of (+1) in your example; I don't believe that is the issue with your error.
I suspect that the error is with the abnormal disposition of KEEP on the //INP DD statement. In effect, you are asking to create a NEW GDG dataset that will be catalogued only if step 2 completes normally. If the job abends in step 2, you are asking the system to KEEP a GDG dataset that isn't registered in the system catalog yet. I think DISP=(,CATLG,CATLG) would be more appropriate coding for the //INP DD statement in this scenario.
Normally you would use KEEP in situations where the dataset already exists and should be retained.
However, if your shop is using SMS-managed datasets, KEEP is treated as CATLG, because all SMS-managed datasets must be catalogued. If that is the situation, then this response may not apply to your specific situation. It doesn't appear from your example that SMS is a factor here.

Related

How to set skipLimit when using spring batch framework to execute a step concurrently?

I use Spring Batch to load data from a file to a database. The job contains only one step. I use a ThreadPoolTaskExecutor to execute the step concurrently. The step is similar to this one:
public Step myStep() {
    return stepBuilderFactory.get("MyStep")
            .chunk(10000)
            .reader(flatFileItemReader)
            .writer(jdbcBatchItemWriter)
            .faultTolerant()
            .skip(NumberFormatException.class)
            .skip(FlatFileParseException.class)
            .skipLimit(3)
            .throttleLimit(10)
            .taskExecutor(taskExecutor)
            .build();
}
There are 3 "number format" errors in my file, so I set skipLimit to 3. But I find that when I execute the job, it starts 10 threads and each thread gets its own 3 skips, so up to 3 * 10 = 30 records can be skipped in total, while I only want 3.
So the question is: will this cause any problems? And is there any other way to skip exactly 3 times while executing a step concurrently?
From the GitHub issue:
Robert Kasanicky opened BATCH-926 and commented
When chunks execute concurrently, each chunk works with its own local copy of skipCount (from StepContribution). Given skipLimit=1 and 10 chunks executing concurrently, we can end up with a successful job execution and 10 skips. As the number of concurrent chunks increases, the skipLimit becomes more a limit per chunk than per job.
Dave Syer commented
I vote for documenting the current behaviour.
Robert Kasanicky commented
Documented the current behavior in the factory bean.
However, this describes a very old version of Spring Batch.
I have a different problem in my code, but the skipLimit does seem to be aggregated when I use multiple threads, although the job sometimes hangs without properly failing when SkipLimitException is thrown.

Using IEBCOPY to copy members to datasets with wild cards

So we're upgrading SEASOFT Fastpack and have to add members to everyone's ISPF profile to allow the usage of the product's menu.
The ideal JCL we're after is as follows:
//COPYRGHT JOBCARD
//JOBSTEP EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=A
//INDD DD DSNAME=FASTPACK.SRC,
// DISP=SHR,UNIT=SYSDA
//OUTDD DD DSNAME=BFCU.PRODISPF.PROF&SYSNAME..&USERID, <==== ? BFCU.PRODISPF.PROF*.*
// DISP=SHR,UNIT=SYSDA
//SYSIN DD *
COPY INDD=OUTDD,OUTDD=OUTDD
COPY INDD=((INDD,R)),OUTDD=OUTDD
/*
Obviously it would be nice if we can make the job dynamically look for all the datasets that match the pattern.
I would suggest writing REXX or CLIST code that uses the LMDINIT and LMDLIST ISPF services to create a list of datasets matching your pattern and save the list in a dataset. Then write another program to read that list of datasets and write your desired JCL, one step per dataset. Run the REXX or CLIST code in ISPF in batch.
You will want to count how many steps you generate, as a job can only have 255 steps.
You can make this as automated as you like. For example, you could generate a jobcard and an instream proc containing your IEBCOPY with the OUTDD DSN as a parameter, and then have each generated step execute the instream proc with the DSN parameter set to the dataset name. When you get to 255 steps, generate another jobcard, another copy of the instream proc, and continue generating steps.
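A sketch of the kind of job such a generator might emit (job name, proc name and target dataset names are hypothetical; instream data inside a procedure requires z/OS 2.1 or later - on older systems, point SYSIN at a control-card dataset instead):

```jcl
//COPYALL  JOB (ACCT),'ISPF PROF COPY',CLASS=A,MSGCLASS=X
//COPYPRC  PROC TARGET=
//COPY     EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INDD     DD DISP=SHR,DSN=FASTPACK.SRC
//OUTDD    DD DISP=SHR,DSN=&TARGET
//SYSIN    DD DATA,DLM=$$
  COPY INDD=((INDD,R)),OUTDD=OUTDD
$$
//         PEND
//STEP0001 EXEC COPYPRC,TARGET=BFCU.PRODISPF.PROFSYSA.USER0001
//STEP0002 EXEC COPYPRC,TARGET=BFCU.PRODISPF.PROFSYSA.USER0002
```

The generator only has to emit one EXEC line per dataset from the LMDLIST output, starting a fresh jobcard and proc copy every 255 steps.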

JCL: wait for a job submitted via the internal reader to complete before continuing the outer JCL

I have a batch job that has 10 steps. In STEP5 I have written JCL to an internal reader, and I want STEP06, the next step in the parent job, to execute only after the internal reader job has completed successfully. Could you please suggest a resolution to this problem?
For what you have described, there are 2 approaches:
1. Break your process into 3 jobs - steps 1-5 as one job, the second job consisting of the JCL submitted in step 5, and the third job consisting of step 6 (or steps 6-10 - you do not make it clear whether the main JCL has 6 steps and the 'inner' JCL 4 steps, making the 10 steps you mention, or whether the main JCL has 10 steps). The execution of the 3 jobs will need to be serialised somehow.
2. Simply have the 'inner' JCL as a series of steps in the 'outer' JCL so that you have only one job with steps that run in order.
The typical approach to this sort of issue would be to use a scheduler to handle the 3-part process as 3 jobs, the middle one perhaps not submitted by the scheduler but monitored/tracked by it.
With a good scheduler setup, there is a good chance that even if the jobs were run on different machines or even different types of machines that they could be tracked.
To have a single job delayed halfway through is possible, but would require some sort of program running in a loop (waiting, so as not to use excessive CPU) checking for an event (a dataset being created or deleted; the job itself could be checked, or numerous other ways).
Another way could be to have part 1 submit a job to do part 2 and that job to then submit another as part 3.
Yet another way, perhaps frowned upon depending upon its importance, would be to have three jobs: the first part submitted to run, the third part submitted but on hold. The first submits the second, which releases the third.
Of course there is also the possibility that one job could do everything as a single job.
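For reference, the usual mechanism for submitting JCL from within a job is to copy it to a SYSOUT dataset routed to the internal reader. A minimal sketch (the dataset holding the inner JCL is hypothetical):

```jcl
//STEP05   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=MY.INNER.JCL
//SYSUT2   DD SYSOUT=(A,INTRDR)
```

The submitted job then runs independently; nothing in this step waits for it, which is exactly why the serialisation approaches above are needed.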

Sorting a variable-length record, blocked, dataset

I have tried to sort a VB (variable-length, blocked) file.
The file data is:
00000000002 AAA
00000000001
00000000003 BBB
00000000004 CCC
00000000005
The JCL I am using for sorting is below:
//STEP1 EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SORTWK01 DD UNIT=SYSTF,SPACE=(01000,(006980,001425),,,ROUND)
//SYSIN DD *
SORT FIELDS=(17,3,CH,A)
/*
//SORTIN DD DSN=TEST.INPUT.FILE1,
// DISP=SHR
//SORTOUT DD DSN=TEST.OUTPUT.FILE2,
// DISP=(NEW,CATLG,DELETE),
// DCB=(RECFM=VB,LRECL=80,BLKSIZE=0),
// UNIT=SYSSF,
// SPACE=(CYL,(5,5),RLSE)
This JCL fails with the VB file, but works fine with an FB file. However, if I add the following sort card, it works fine with the VB file as well:
SORT FIELDS=(17,3,CH,A)
OPTION VLSHRT
I am trying to find the reason why this works for FB, but not for VB.
For an FB dataset, all records are the same length (the LRECL of the dataset).
For a VB dataset, any record can theoretically be from one byte through (LRECL-4) bytes of data. Realistically, the shortest record depends on the context of the data in the dataset; the longest record possible should be the same as the LRECL.
What this means for SORT is that any given field which is referenced for a record in a VB dataset may or may not be there, and this won't actually be known until run-time.
Ideally, if you are SORTing the data, you'd expect all the control-fields specified to exist in all records. However, sometimes there is a different requirement.
What DFSORT does with short variable-length records ("short" in this case means ending before the control-fields specified) is controlled by the parameters VLSCMP and VLSHRT.
VLSCMP is used to control the behaviour for short records with INCLUDE/OMIT statement.
VLSHRT is described thus in the DFSORT Application Programming Guide:
Temporarily overrides the VLSHRT installation option, which specifies whether
DFSORT is to continue processing if a "short" variable-length SORT/MERGE
control field, INCLUDE/OMIT compare field, or SUM summary field is found.
For more information, see the discussion of the VLSHRT and NOVLSHRT
options in “OPTION Control Statement” on page 173.
VLSHRT
specifies that DFSORT continues processing if a short control field or
compare field is found.
NOVLSHRT
specifies that DFSORT terminates if a short control field or compare
field is found.
Also note, you can't use the same start location if your data is in a VB dataset. On a variable-length record, the data starts at position five, because the first four bytes are occupied by the Record Descriptor Word (RDW) (in this context Word just means four bytes). So for a variable-length record you need to add four to the start-positions of all your fields.
This also means that when you specify an LRECL of 80 for a VB, as in your example, each record can only actually contain a maximum of 76 bytes of data (76 + length of RDW = 80).
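Putting the two points together for the data in the question, the VB control statements would look like this (a sketch, assuming the key that sits at FB position 17 should be picked up from the VB records at 17 plus the four-byte RDW):

```
 SORT FIELDS=(21,3,CH,A)
 OPTION VLSHRT
```

VLSHRT lets the records that end before position 23 participate in the sort instead of terminating the run.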
Also note that it is not a good thing to put the LRECL and RECFM on the output dataset from EXEC PGM=SORT or EXEC PGM=ICETOOL. SORT/ICETOOL will specify the LRECL and RECFM accurately; if you also have them in the JCL, you have a second place to maintain them.
Bill is correct, but I will try to give a simpler answer.
In the example given, you have 2 records:
00000000001
and
00000000005
That do not have a sort key. When you copy them to a fixed-width file, they get padded out with spaces (x'40') or hex zeros (depending on how you copy the file). The records now have a sort key of spaces (or hex zeros), i.e. they become:
00000000001____
00000000005____ where _ represents a space (x'40') or hex zero (x'00')
The FB sort will now work, while the VB sort will fail because there are records without a sort key.
The VLSHRT parameter tells the sort program to treat missing sort keys as hex zeros, and the sort will now work.
Have a look at Bill's answer; it has quite a bit of useful information on FB and VB files.

How many output files can be generated using OUTFIL of the SORT utility?

In a simple JCL job I am trying to generate some similar datasets according to some conditions, using SORT. How many output files can I generate in a single job this way?
For anyone who wants to try it, here's a DFSORT step which will generate 1629 DD statements (SYSOUT=*) and 1629 OUTFIL statements.
Run the step.
Take a SORT step which just has OPTION COPY and, using the ISPF editor, copy the dataset from SYSOUTS into the JCL part, and the dataset from OUTFILS after the OPTION COPY.
Submit your job. If it fails with some number of IEF649I EXCESSIVE NUMBER OF DD STATEMENTS messages, delete that many DD statements and the same number of OUTFIL statements from the end. If it works, you can try higher numbers of DD statements (change both 1629s), especially if your TIOT size is greater than 32K. With a 64K TIOT you'll probably be able to get a little over twice this number.
Don't be surprised if it takes some time (it won't be too long), as it is opening, writing a record to, and closing all those files.
//LOTSOFOF EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//OUTFILS DD DISP=(,CATLG),UNIT=SYSDA,SPACE=(TRK,2),
// DSN=your dataset name number 1 here
//SYSOUTS DD DISP=(,CATLG),UNIT=SYSDA,SPACE=(TRK,2),
// DSN=your dataset name number 2 here
//SYSIN DD *
OPTION COPY
OUTFIL REPEAT=1629,
FNAMES=OUTFILS,
BUILD=(C' OUTFIL FNAMES=F',
SEQNUM,4,ZD,
80X)
OUTFIL REPEAT=1629,
FNAMES=SYSOUTS,
BUILD=(C'//F',
SEQNUM,4,ZD,
C' DD SYSOUT=*',
80X)
//SORTIN DD *
ONE LINE NEEDED, CONTENT NOT IMPORTANT
There is an XTIOT (Extended TIOT), but that is not available for QSAM; it has specialised uses, like for DB2.
Well, the answer is known anyway.
There are effectively two limits to the number of OUTFIL statements you can have.
The first is how many DD names your site allows in a single jobstep. Ask your seniors or a sysprog how big the TIOT is. If it is 32K, you'll have around 1,600 available; if 64K, twice that.
The second limit is the number of SORT control cards you have in the step and their complexity. You can still get lots.
Either way, I suspect you'll have easily more than enough OUTFIL statements for your task.
How many do you want?
For doubters, try this link: https://groups.google.com/forum/#!msg/bit.listserv.ibm-main/km3VNDp0SQQ/Zmh161dcSKcJ
The relevant quote from Kolusu is:
DFSORT was able to handle writing up to 999 members into a PDSE
simultaneously. Beyond that I get IEF649I EXCESSIVE NUMBER OF DD
STATEMENTS
Indicating that DFSORT was still happy in this case, and that z/OS was not. Kolusu is a developer of DFSORT.
If there are more denials, I can find more quotes, including from Frank Yaeger, inventor, designer and developer of the modern DFSORT for many, many, years (now retired).
