Can we compare time fields in a SORT card in JCL (mainframe)?

Suppose position 12 of a record holds an 8-byte time field. Can I compare it with the current timestamp? Can I do arithmetic operations on that field, like adding an hour or subtracting a few minutes? Your responses will be highly appreciated. Thanks!
Addendum.. for better understanding:
I need your help comparing time fields while writing records to output.
For instance, position 12 of the file holds an 8-byte timestamp. I want to write to output when the timestamp on the record is less than or equal to the current timestamp by an hour. In the process of attaining this I was stuck at the below:
INCLUDE COND=(12,8,??,GE,&TIME1-1),
What could be the data representation (in place of the ??) for this?
First of all, can we achieve this using SORT? If so, please give me the SORT card (amend my card if feasible, otherwise give me your version). Please also share any material on date and time comparisons and how to handle them well. Thanks in advance for your help.
Regards,
Raja.

I think I see what you are trying to do, but I have doubts as to whether it will work. These are my thoughts:
I have only ever seen the &TIME1(c) character string used for output. For example:
OUTREC BUILD=(1,11,12:&TIME1(:))
will place the current time in HH:MM:SS format into the output record starting at position 12. To the best of my knowledge, TIME cannot be used in an ICETOOL/DFSORT COND statement as you have indicated in your question.
Even if TIME were supported within COND statements, the +/- operators are not supported as you might have seen with DATE (e.g. DATE1+1 to get the current date plus one day). Adding a constant to a TIME is not supported.
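By contrast, recent DFSORT levels do accept date arithmetic in an INCLUDE; a sketch (assuming an 8-byte C'yyyymmdd' date at position 1 of the record; check your level's Application Programming Guide before relying on it):

```
 INCLUDE COND=(1,8,CH,GE,DATE1-7)
```

would keep only records dated within the last seven days. There is no equivalent for TIMEn constants.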
Have you given any consideration to what would happen if your job were to run a few minutes before midnight? Adding an hour to the time causes a rollover to the morning of the next day. At that point you need to take the date into consideration in the COND.
Something that might work: add a pre-step to run a REXX (or some other) program. Let this program generate all or part of the INCLUDE statements used in a subsequent ICETOOL step. Here is an example REXX procedure that will create an INCLUDE statement similar to the one given in your question. The record is written to the file allocated to DD CNTLREC:
/* REXX */
PULL DELTA /* Number of hours to add to current time */
PARSE VALUE TIME('N') WITH HH ':' MM ':' SS /* current time */
HH = RIGHT((HH + DELTA) // 24, 2, '0') /* add DELTA, wrap past midnight */
QUEUE " INCLUDE COND=(12,8,CH,GE,C'"HH":"MM":"SS"')"
"EXECIO" QUEUED() "DISKW CNTLREC (FINIS" /* write the queued record */
EXIT
Assign this file to the appropriate ICETOOL control statement DD and it should work for you.
Warning: This example does not deal with adjustments that might be required to the COND parameters in the event of a rollover past midnight.
Note: If you stored the above REXX procedure in a PDS as: "MY.REXX(FOO)", your pre-step
JCL would look something like:
//RUNREXX EXEC PGM=IKJEFT01
//SYSEXEC DD DSN=MY.REXX,DISP=SHR
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
%FOO
1
/*
//
The '1' following %FOO is the DELTA number of hours referenced in the procedure.

If your DFSORT is fairly up-to-date (October 2010 or later), DATE5 gives the equivalent of DATE4 but including microseconds, i.e. like a DB2 timestamp.
OPTION COPY
INREC OVERLAY=(1:DATE5)
gives
2013-04-08-19.29.41.261377

Related

Find documents in MongoDB with non-typical limit

I have a problem, but have no idea how to resolve it.
I've got a PointValues collection in MongoDB.
PointValue schema has 3 parameters:
dataPoint (ref to DataPoint schema)
value (Number)
time (Date)
There is one pointValue for every hour (24 per day).
I have an API method to get PointValues for a specified DataPoint and time range. The problem is I need to limit the result to at most 1000 points. The typical limit(1000) approach is no good, because I need points covering the whole specified time range, with a time step that depends on the range and the point count.
So... for example:
Request data for 1 year = 1 * 365 * 24 = 8760
It should return 1000 values but approx 1 value per (24 / (1000 / 365)) = ~9 hours
I have no idea what method I should use to filter that data in MongoDB.
Thanks for help.
Sampling exactly like that on the database would be quite hard to do and likely not very performant. But an option which gives you a similar result would be to use an aggregation pipeline which $group's the $first value by $year, $dayOfYear, and $hour (and $minute and $second if you need smaller intervals). That way you can sample values by time steps, but your choice of step lengths is limited to what you have date-operators for. So "hourly" samples are easy, but "9-hourly" samples get complicated. When this query is performance-critical and frequent, you might want to consider creating additional collections with daily, hourly, minutely etc. DataPoints so you don't need to perform that aggregation on every request.
But your documents are quite lightweight, since the actual payload is in a different collection. So you might consider getting all the results in the requested time range and then doing the skipping on the application layer. You might want to combine this with the aggregation described above to pre-reduce the dataset: first use an aggregation pipeline to get hourly results into the application, and then skip through the result set in steps of 9 documents. Whether or not this makes sense depends on how many documents you expect.
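The application-layer skipping can be sketched like this (a hypothetical helper in Python; assumes the points have already been fetched sorted by time):

```python
def downsample(points, max_points=1000):
    """Thin a time-sorted list down to at most max_points by keeping
    every step-th element, plus the last one so the range's end survives."""
    if len(points) <= max_points:
        return list(points)
    step = -(-len(points) // max_points)  # ceiling division
    sampled = points[::step]
    if sampled[-1] != points[-1]:
        sampled.append(points[-1])
    return sampled

# 8760 hourly values (one year) reduced to under 1000 samples
year = list(range(8760))
print(len(downsample(year)))  # step becomes 9, as in the example above
```

With 8760 hourly points the computed step is 9, matching the "approximately 1 value per ~9 hours" estimate in the question.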
Also remember to create a sorted index on the time-field.

Sorting a variable-length record, blocked, dataset

I have tried to sort a VB file.
File data is:
00000000002 AAA
00000000001
00000000003 BBB
00000000004 CCC
00000000005
The JCL I am using for sorting is below:
//STEP1 EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SORTWK01 DD UNIT=SYSTF,SPACE=(01000,(006980,001425),,,ROUND)
//SYSIN DD *
SORT FIELDS=(17,3,CH,A)
/*
//SORTIN DD DSN=TEST.INPUT.FILE1,
// DISP=SHR
//SORTOUT DD DSN=TEST.OUTPUT.FILE2,
// DISP=(NEW,CATLG,DELETE),
// DCB=(RECFM=VB,LRECL=80,BLKSIZE=0),
// UNIT=SYSSF,
// SPACE=(CYL,(5,5),RLSE)
This JCL fails with a VB file, but works fine with an FB file. However, if I add the following sort card, it works fine with the VB file as well.
SORT FIELDS=(17,3,CH,A)
OPTION VLSHRT
I am trying to find the reason why this works for FB, but not for VB.
For an FB dataset, all records are the same length (same as the LRECL of the dataset).
For a VB dataset, any record can theoretically be from one byte through (LRECL-4) bytes of data. Realistically, the shortest record depends on the context of the data in the dataset; the longest record possible should be the same as the LRECL.
What this means for SORT is that any given field which is referenced for a record in a VB dataset may or may not be there, but that won't actually be known until run-time.
Ideally, if you are SORTing the data, you'd expect the control-fields specified to exist for all records. However, sometimes there is a different requirement.
What DFSORT does with short variable-length records ("short" in this case means ending before the control-fields specified) is controlled by the parameters VLSCMP and VLSHRT.
VLSCMP is used to control the behaviour for short records with INCLUDE/OMIT statements.
VLSHRT is described thus in the DFSORT Application Programming Guide:
Temporarily overrides the VLSHRT installation option, which specifies whether
DFSORT is to continue processing if a "short" variable-length SORT/MERGE
control field, INCLUDE/OMIT compare field, or SUM summary field is found.
For more information, see the discussion of the VLSHRT and NOVLSHRT
options in “OPTION Control Statement” on page 173.
VLSHRT
specifies that DFSORT continues processing if a short control field or
compare field is found.
NOVLSHRT
specifies that DFSORT terminates if a short control field or compare
field is found.
Also note, you can't use the same start location if your data is in a VB dataset. On a variable-length record, the data starts at position five, because the first four bytes are occupied by the Record Descriptor Word (RDW) (in this context Word just means four bytes). So for a variable-length record you need to add four to all the start-positions of your fields.
This also means that when you specify an LRECL of 80 for a VB, as in your example, each record can only actually contain a maximum of 76 bytes of data (76 + length of RDW = 80).
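As a concrete sketch of that add-four rule: if a sort key occupies columns 13-15 of a fixed-length copy of a file, the variable-length version of the card must address it four bytes later:

```
 SORT FIELDS=(17,3,CH,A)
```

because positions 1-4 of each variable-length record are the RDW.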
Also note that it is not a good thing to put the LRECL and RECFM on the output dataset from EXEC PGM=SORT or EXEC PGM=ICETOOL. SORT/ICETOOL will specify the LRECL and RECFM accurately. If you have them in the JCL as well, you have a second place to maintain them.
Bill is correct, but I will try and give a simpler answer.
In the example given, you have 2 records:
00000000001
and
00000000005
that do not have a sort key. When you copy them to a fixed-width file, they get padded out with spaces (X'40') or hex zeros (depending on how you copy the file). The records then have a sort key of spaces (or hex zeros), i.e. they become
00000000001____
00000000005____
where _ represents a space (X'40') or hex zero (X'00').
The FB sort will now work, while the VB sort will fail because there are records without a sort key.
The VLSHRT parameter tells the sort program to treat missing sort keys as hex zeros, and the sort will now work.
Have a look at Bill's answer; it has quite a bit of useful information on FB and VB files.

How many output files can be generated using OUTFIL of the SORT utility?

In a simple JCL job I am trying to generate some similar datasets according to some conditions using SORT. How many output files can I generate in a single job?
For anyone who wants to try it, here's a DFSORT step which will generate 1629 DD statements (SYSOUT=*) and 1629 OUTFIL statements.
Run the step.
Take a SORT step which just has OPTION COPY and, using the ISPF editor, copy the dataset from SYSOUTS into the JCL part, and the dataset from OUTFILS after the OPTION COPY.
Submit your job. If it fails with n-number of IEF649I EXCESSIVE NUMBER OF DD STATEMENTS then delete the last n-number DD statements and the last n-number OUTFIL statements. If it works, you can try higher numbers of DD statements (change both the 1629s) especially if your TIOT-size is greater than 32K. With a 64K TIOT you'll probably be able to get a little over twice this number.
Don't be surprised if it takes some time (it won't be too long), as it is opening, writing a record to, and closing all those files.
//LOTSOFOF EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//OUTFILS DD DISP=(,CATLG),UNIT=SYSDA,SPACE=(TRK,2),
// DSN=your dataset name number 1 here
//SYSOUTS DD DISP=(,CATLG),UNIT=SYSDA,SPACE=(TRK,2),
// DSN=your dataset name number 2 here
//SYSIN DD *
OPTION COPY
OUTFIL REPEAT=1629,
FNAMES=OUTFILS,
BUILD=(C' OUTFIL FNAMES=F',
SEQNUM,4,ZD,
80X)
OUTFIL REPEAT=1629,
FNAMES=SYSOUTS,
BUILD=(C'//F',
SEQNUM,4,ZD,
C' DD SYSOUT=*',
80X)
//SORTIN DD *
ONE LINE NEEDED, CONTENT NOT IMPORTANT
/*
There is an XTIOT (Extended TIOT), but that is not for QSAM; it has specialised uses, for example for DB2.
Well, the answer is known anyway.
There are effectively two limits to the number of OUTFIL statements you can have.
The first is how many DDnames your site allows in a single jobstep. Ask your seniors or a Sysprog how big the TIOT is. If it is 32K, you'll have around 1,600 available. If 64K, twice that.
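The "around 1,600" figure can be sanity-checked with a back-of-the-envelope calculation (assuming the common case of 20 bytes per TIOT entry for a single-device DD; the header size here is an assumption):

```python
TIOT_SIZE = 32 * 1024  # 32K TIOT
HEADER = 24            # approximate fixed TIOT header, bytes (assumption)
ENTRY = 20             # bytes per DD entry with one device attached
max_dds = (TIOT_SIZE - HEADER) // ENTRY
print(max_dds)
```

which lands in the same ballpark as the 1629 DD statements used in the generator step above; a 64K TIOT roughly doubles it.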
The second limit is the number of SORT control cards you have in the step and their complexity. You can still get lots.
Either way, I suspect you'll have easily more than enough OUTFIL statements for your task.
How many do you want?
For doubters, try this link: https://groups.google.com/forum/#!msg/bit.listserv.ibm-main/km3VNDp0SQQ/Zmh161dcSKcJ
The relevant quote from Kolusu is:
DFSORT was able to handle writing up to 999 members into a PDSE
simultaneously. Beyond that I get IEF649I EXCESSIVE NUMBER OF DD
STATEMENTS
Indicating that DFSORT was still happy in this case, and that z/OS was not. Kolusu is a developer of DFSORT.
If there are more denials, I can find more quotes, including from Frank Yaeger, inventor, designer and developer of the modern DFSORT for many, many, years (now retired).

Not able to copy alphabet letters from physical sequential file to a KSDS cluster

I have created a sequential file with some records. I have to copy them to a KSDS cluster, so I wrote JCL for it.
When I put numerals in my sequential file it works, but when I put English alphabet letters it does not.
Why is that?
This is my code for creating the KSDS cluster:
//TRC186H JOB (TRC,TRC,TRC186,D2,DT99X),CLASS=A,
// MSGLEVEL=(1,1),MSGCLASS=C,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEFINE CLUSTER -
(NAME(TRC186.VSAM.CASE.CLUSTER) -
TRACKS(2,2) -
CONTROLINTERVALSIZE(4096) -
INDEXED -
KEYS(6,1) -
FREESPACE(10,10)) -
DATA -
(NAME(TRC186.CASE.DATA) -
RECORDSIZE(180 180)) -
INDEX -
(NAME(TRC186.CASE.INDEX) -
CONTROLINTERVALSIZE(4096))
/*
And this is my code for copying from the sequential file to the KSDS cluster:
//TRC186A JOB (TRG),CLASS=A,MSGLEVEL=(1,1),MSGCLASS=A,
// NOTIFY=&SYSUID
//STEP1 EXEC PGM=IDCAMS
//INPUTDD DD DSN=TRC186.VSAM.INPUTPS,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
REPRO -
INFILE(INPUTDD) -
OUTDATASET(TRC186.VSAM.CASE.CLUSTER)
/*
The inputs that I have given are
123456
234567
345678
456789
567891
they are easily copied, but when I give English alphabet letters like
abcdefg
cdhert
kjsdfg
qwerty
kjhgfd
these are not copied to the cluster.
Please explain why.
Your KEYS in the definition of your KSDS specify 6,1. You will want to check if that is what you want.
When loading a KSDS with REPRO, the data must already be in key sequence. The numeric data you have shown is coincidentally in key sequence; the alphabetic data is not.
If you precede your IDCAMS step with a SORT step, then you should be clean. However, review the way VSAM wants the key, and compare with the way SORT wants the key. That's the way it is.
A KEY definition for a KSDS on an IDCAMS DEFINE has a particular format. First you specify the length, which you did correctly, and then you specify the offset. What the offset means is "bytes from the starting point of the record". So, offset zero is byte one (or column one), offset one (which you specified) is byte two of the record, meaning that your numeric example is still in order (a bit of a fluke) but your alphabetics are not, they need to be in order on the second letter with the particular DEFINE you used.
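A sketch of the kind of pre-sort step described above (the output dataset name is hypothetical; SORT position 2, length 6 corresponds to KEYS(6 1), i.e. a 6-byte key at offset 1):

```
//PRESORT EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=TRC186.VSAM.INPUTPS,DISP=SHR
//SORTOUT  DD DSN=TRC186.VSAM.SORTED,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(2,2),RLSE)
//SYSIN    DD *
  SORT FIELDS=(2,6,CH,A)
/*
```

The REPRO step's INPUTDD would then point at the sorted dataset instead of the original.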

cron expression parsing into java date

My database holds the cron expression 10 18 16 ? * SUN,MON,WED,FRI *. How do I convert this into a Java date, and how do I compare it with the present day and time?
One more thing: how do I compare two cron expressions, i.e. 10 18 16 ? * SUN,MON,WED,FRI * versus 0 30 9 30 * ?
Please explain with sample code using Quartz or Spring scheduling.
Please use:
import org.springframework.scheduling.support.CronSequenceGenerator;
final String cronExpression = "0 45 23 * * *";
final CronSequenceGenerator generator = new CronSequenceGenerator(cronExpression);
final Date nextExecutionDate = generator.next(new Date());
...and then I suggest using Joda DateTime for date comparison.
I wrote a small class for handling cron expressions, available here:
https://github.com/frode-carlsen/cron
It is based on Joda-Time, but should be fairly easy to port to the Java 8 time API. This also makes it possible to embed it in unit tests, do simulations, etc. by adjusting the DateTime offset in Joda-Time.
It also has pretty good test coverage (was done as TDD Kata).
Update: it now supports the Java 8 time API as well, thanks to a contribution from GitHub user https://github.com/zemiak. In both cases, the expression parser is a single, tiny class which can easily be copied into your own project.
You may want to look into the org.quartz.CronExpression class in the Quartz API.
Please note that you cannot simply compare a cron expression with a date because the cron expression (typically) represents a sequence of various dates. In any case, you may find the following methods useful:
public boolean isSatisfiedBy(Date date)
public Date getNextValidTimeAfter(Date date)
As for comparing two cron expressions, what would you like to compare? The only thing that IMO makes sense to compare are the next 'trigger' dates, i.e. dates obtained from getNextValidTimeAfter([some reference date]) calls.
Perhaps you can check cron-utils.
It has utilities to get the next/previous execution given a certain date, e.g. now. It works with Joda-Time, but you can retrieve a java.util.Date from there.
The library is scheduler-agnostic: you just provide a string with a cron expression.
It is compatible with JDK 6.