Can anyone suggest how to do numeric data processing in CICS? I have already searched for it, but my concepts are still not clear. Can anyone share a link or any information related to the topic?
Thanks in advance
You cannot rely on the numeric attribute of a field in a BMS map to restrict input to just numeric digits...
Numeric-only
The effect of this designation depends on the keyboard type of the terminal. On a data entry keyboard, a numeric shift occurs, so that the operator can key numbers without shifting.
On keyboards equipped with the "numeric lock" special feature, the keyboard locks if the operator uses any key except one of the digits 0 through 9, a period (decimal point), a dash (minus sign) or the DUP key. This prevents the operator from keying alphabetic data into the field, although the receiving program must still inspect the entry to ensure that it is a number of the form it expects. Without the numeric lock feature, numeric-only allows any data into the field.
...as documented.
You must write code in your program to verify fields contain expected values.
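As a minimal sketch of the kind of check meant here (shown in Python purely to illustrate the logic; in a CICS COBOL program you would typically use a class test such as IF field IS NUMERIC plus whatever format checks your application needs, and the field value below is made up):

def looks_like_unsigned_number(field: str) -> bool:
    # Illustrative only: accept the entry only if it is non-empty and every
    # character is one of the digits 0-9.
    data = field.strip()
    return len(data) > 0 and all(c in "0123456789" for c in data)

print(looks_like_unsigned_number("0000500006"))   # True
print(looks_like_unsigned_number("12A4"))         # False - alphabetic data got through the map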
I'm working with Progress-4GL, appBuilder and procedure editor, release 11.6.
I've just found a CHARACTER type global variable (DEF VAR global_variable AS CHAR NO-UNDO.), containing up to 12901 characters. The variable is only used for passing information within the application, the information will never be stored as one tuple within a table.
The information in that variable seems to be handled well: the content is correct.
Yet, as this URL mentions, the maximum length of a character variable in Progress is 2000 characters, and this worries me: I'm afraid that one day another limit may be crossed, and from that moment on we'll need to rethink the whole idea, so I'd like to be prepared for that day.
Therefore, does anybody know the "next" length limit of a character variable in Progress?
That reference you mention points to SQL limitations.
In the ABL, a CHARACTER variable can hold roughly 32 K:
DEFINE VARIABLE c AS CHARACTER NO-UNDO.
ASSIGN c = FILL ("*", 31000) .
MESSAGE LENGTH (c)
VIEW-AS ALERT-BOX INFORMATION BUTTONS OK.
Beyond that you have to use LONGCHAR, with its limitations:
slightly slower
cannot be indexed in temp-tables or database tables.
CHARACTER variables are always stored in the CPINTERNAL codepage. LONGCHARs can use a different codepage through the FIX-CODEPAGE statement.
I have an input file in this format (record length 20: 10 characters followed by 10 numerics):
jname1 0000500006
bname1 0000100002
wname1 0000400007
yname1 0000000006
jname1 0000100001
mname1 0000500012
mname2 0000700013
In my jcl I have defined my sysin data as such:
SYSIN DATA *
SORT FIELDS=(1,1,CH,A)
SUM FIELDS=(11,10,FD)
DATAEND
*
It works fine as long as I don't add the SUM fields, so I'm wondering if I'm using the wrong format for my numerics. Since I know they start at position 11 and have a length of 10, the format is the only thing that could be wrong.
As you might have already realised, the point of this JCL is just to list the values, grouped by the first letter of the name (so for the example data and JCL I have given, it would group the numerics for mname1 and mname2 together but leave the other records untouched).
I'm kind of new at this, so I was wondering what I need for the format if my numerics are like that in the input file.
If new to DFSORT, get hold of the DFSORT Getting Started guide for your version of DFSORT (http://www-01.ibm.com/support/docview.wss?uid=isg3T7000080).
This takes you through all the basic operations, with many examples.
The DFSORT Application Programming Guide describes everything you need to know, in detail, again with examples. Appendix C of that document contains all the data-types available (note that FD, which you tried to use, is not a valid data-type, so it was probably a typo). There are tables throughout the document listing which data-types are available where, and any particular limits.
For advanced techniques, consult the DFSORT Smart Tricks publication here: http://www-01.ibm.com/support/docview.wss?uid=isg3T7000094
You also need to understand a bit more about the way data is stored on a Mainframe.
Decimals (which can be "packed-decimal" or "zoned-decimal") do not contain a decimal-point. The decimal-point is implied. In high-level languages you tell the compiler where the decimal-point is (in a fixed position) and the compiler does the alignments for you. In Assembler, you do everything yourself.
Decimals are 100% accurate, as there are machine-instructions which act directly on packed-decimal data giving packed-decimal results.
A field which actually contains a decimal-point, cannot be directly used in arithmetic.
An unsigned field is treated as positive when used in any arithmetic.
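As a small illustration of the implied decimal point described above (Python used only to show the idea; the scale of two decimal places and the sample digits are made up):

raw = "0000012345"        # what is actually stored in the field: digits only, no "."
value = int(raw) / 100    # the program supplies the scale, here two implied decimal places
print(value)              # 123.45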
The SUM statement supports a limited number of numeric definitions, and you have chosen the correct one. It does not matter that your data is unsigned.
If the format of the output from SUM is not what you want, look at OPTION ZDPRINT (or NOZDPRINT).
If you want further formatting, you can use OUTREC or OUTFIL.
As an option to using SUM, you can use OUTFIL reporting functions (especially, although not limited to, if you want a report). You can use SECTIONS and TRAILER3 with TOT/TOTAL.
Something to watch for with SUM (which is not a problem with the reporting features) is whether any one (or more) of your SUMmed fields exceeds the field size. To continue to use SUM if that happens, you need to extend the field in INREC and then get SUM to use the new, sufficient size.
After some trial and error I finally found it: apparently the format I needed to use was the ZD format (zoned decimal, signed), so my SYSIN becomes this:
SYSIN DATA *
SORT FIELDS=(1,1,CH,A)
SUM FIELDS=(11,10,ZD)
DATAEND
*
Even though my records don't contain any decimals and they are unsigned, I don't really get it, so if someone knows why it's like that, please go ahead and explain it to me.
For now, the way I'm going to remember it is this: Z is the mathematical symbol for integers (so no decimals).
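The reason ZD works is that unsigned display digits are already valid positive zoned-decimal data: in EBCDIC each digit is stored as a byte F0-F9, and the zone nibble of the last byte carries the sign (F or C positive, D negative). A rough sketch of that decoding, written in Python only to show the byte layout (DFSORT of course does this natively):

def zoned_to_int(raw: bytes) -> int:
    # Low nibble of each byte is the digit; the zone nibble of the last byte
    # carries the sign (0xD or 0xB means negative, anything else positive).
    digits = "".join(str(b & 0x0F) for b in raw)
    value = int(digits)
    return -value if (raw[-1] >> 4) in (0x0B, 0x0D) else value

field = "0000500006".encode("cp037")   # EBCDIC bytes F0 F0 F0 F0 F5 F0 F0 F0 F0 F6
print(zoned_to_int(field))             # 500006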
I was wondering whether there was any way of making bash send different codes for key combinations that include the Shift key? For instance, (Ctrl+V shows me that) Ctrl+N and Ctrl+Shift+N are interpreted the same (^N). Or is there a terminal that can make the difference? Or can bash be modified so that it does?
A terminal doesn't interact directly with your keyboard; it interacts with a stream of bytes that it receives, which are usually (but not necessarily) generated by your keyboard. For the printable ASCII values, there is an obvious correspondence between the value and a key (or combination) on your keyboard. ASCII 97 is a, ASCII 65 is A (typed as Shift-a), and so on.
However, there are also the 32 non-printing control characters from ASCII 0 to ASCII 31, so called because they were intended to control a terminal. In order to enter them, the Control key was added to allow you, in combination with the other keys, to generate these codes. A simple scheme was used: pressing Control-x generates the control code corresponding to subtracting 64 from x. Since @ generates ASCII 64, Control-@ generates ASCII 0. The same mapping holds true for A through _ (consult your favorite ASCII reference to see the rest of the correspondences).
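A tiny sketch of that subtract-64 scheme (Python used only to show the arithmetic):

def control_code(key: str) -> int:
    # Control-x sends the ASCII code of the uppercase character minus 64,
    # which is the same as keeping only the low five bits.
    return ord(key.upper()) - 64

print(control_code("@"))   # 0  (NUL)
print(control_code("n"))   # 14 (^N) - the same whether or not Shift is held
print(control_code("_"))   # 31 (Unit Separator)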
However, whether or not you need a shift key to generate ASCII 64 through ASCII 95 depends on your keyboard. On my US keyboard layout, only [ and ] can be typed without a shift key. (Remember, it's the uppercase-letter ASCII range we're using here, not the lowercase.) So to simplify, I suspect it was decided that Shift would be ignored in determining which keycode is sent with Control-x. (Note that if for some reason your keyboard had two of the characters between 64 and 95 generated by a key/Shift-key pair, your terminal would need to define an alternate mapping for the associated control character.)
All this is simply(?) to explain why Control-Shift-x and Control-x are typically the same. Obviously, your modern operating system can distinguish all kinds of keyboard combinations. But out of the myriad possibilities, only 256 of them can send unique values to a terminal; the rest must necessarily duplicate one or more of the others. To be useful, they need to be configured to send some multiple-byte sequence to the terminal, typically beginning with ASCII 27 (ESC). When terminals receive that byte, they pause for a moment to see if any other bytes are coming after. Keys like function keys, arrow keys, etc. have fairly standard sequences they send, which the terminal interprets in various ways. Other keys (like Control-Shift-n in your example) have no agreed-upon meaning, and so your terminal emulator must assign one. Most emulators should allow you to do this, but how they do so is, obviously, program-specific.
There are two great write-ups on keyboard shortcut customization in bash here:
Bash: call script with customized keyboard shortcuts?
In bash, how do I bind a function key to a command?
iTerm2 allows you to map key combinations like Control+Shift+, (which should represent C-<) to an escape sequence. Emacs translates certain escape sequences to the expected key sequence by default. Therefore, by remapping the desired key combination in iTerm to the appropriate escape sequence, you can get the behavior you want. See this response for specifics.
There's no doubt that a length-value representation of data is useful, but what advantages are there to type-length-value over it?
Of course, using LV requires the representation to be predefined or structured, but that's hardly ever a problem. Actually, I can't think of a good enough case where it wouldn't be defined enough that TLV would be required.
In my case, this is about data interchange/protocols. In every situation, the representation must be known to both parties to be processed, which eliminates the need for type explicitly inserted in the data. Any thoughts on when the type would be useful or necessary?
Edit
I should mention that a generic parser/processor would certainly benefit from the type information, but that's not my case.
The only decent reason I could come up with is for a generic processor of the data, mainly for debugging or direct user presentation. Having the type embedded within the data would allow the processor to handle the data correctly without having to predefine all possible structures.
The point below is mentioned on Wikipedia.
New message elements which are received at an older node can be safely skipped and the rest of the message can be parsed. This is similar to the way that unknown XML tags can be safely skipped.
Example:
Imagine a message to make a telephone call. In a first version of a system this might use two message elements, a "command" and a "phoneNumberToCall":
command_c/4/makeCall_c/phoneNumberToCall_c/8/"722-4246"
Here command_c, makeCall_c and phoneNumberToCall_c are integer constants and 4 and 8 are the lengths of the "value" fields, respectively.
Later (in version 2) a new field containing the calling number could be added:
command_c/4/makeCall_c/callingNumber_c/8/"715-9719"/phoneNumberToCall_c/8/"722-4246"
A version 1 system which received a message from a version 2 system would first read the command_c element and then read an element of type callingNumber_c.
The version 1 system does not understand callingNumber_c, so the length field is read (i.e. the first 8) and the system skips forward 8 bytes to read phoneNumberToCall_c, which it understands, and message parsing carries on.
Without the type field, the version 1 parser would not know to skip callingNumber_c; it might instead call the wrong number, and perhaps throw an error on the rest of the message. So the type field allows for forward compatibility in a way that omitting it does not.
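A small sketch of that forward-compatible parsing, assuming a simple binary encoding (one type byte, one length byte, then the value) and made-up type constants; the point is only to show how the length byte lets an old parser step over an element whose type it does not know:

COMMAND_C = 0x01
PHONE_NUMBER_TO_CALL_C = 0x02
CALLING_NUMBER_C = 0x03                      # introduced in version 2

def encode(elements):
    out = bytearray()
    for element_type, value in elements:
        out += bytes([element_type, len(value)]) + value
    return bytes(out)

def parse_v1(data):
    # The version 1 parser only knows two element types.
    known = {COMMAND_C: "command", PHONE_NUMBER_TO_CALL_C: "phoneNumberToCall"}
    i = 0
    while i < len(data):
        element_type, length = data[i], data[i + 1]
        value = data[i + 2:i + 2 + length]
        if element_type in known:
            print(known[element_type], "=", value.decode())
        else:
            print("skipping unknown element type", hex(element_type))
        i += 2 + length                      # the length byte tells us how far to skip

message_v2 = encode([
    (COMMAND_C, b"call"),
    (CALLING_NUMBER_C, b"715-9719"),         # unknown to version 1, safely skipped
    (PHONE_NUMBER_TO_CALL_C, b"722-4246"),
])
parse_v1(message_v2)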
I am monitoring an application using Zabbix and have defined a custom item which returns a string value. Since my item's values are actually checksums, they will only contain the characters [0-9a-f]. Two mirror copies of my application are running on two servers for the sake of redundancy. I would like to create a trigger which would take the item values from both machines and fire if they are not the same.
For now, let's set aside the moment when the values change (it's not an atomic operation, so the system may briefly see an inconsistent state, which is not a real error), since I could work around it by looking at several previous values.
The crux is: how to write a Zabbix trigger expression which could compare for equality the string values of two items (the same item on two mirror hosts, actually)?
Both according to the fine manual and as I confirmed in practice, the standard operators = and # only work on numeric values, so I can't just write the natural {host1:myitem[param].last(0)} # {host2:myitem[param].last(0)}. Functions such as change() or diff() can only compare values of the same item at different points in time. Functions such as regexp() can only compare the item's value with a constant string/regular expression, not with another item's value. This is very limiting.
I could move the comparison logic into the script which my custom item executes, but it's a bit messy and not elegant, so if at all possible, I would prefer to have this logic inside my Zabbix trigger.
Perhaps despite the limitations listed above, someone can come up with a workaround?
Workaround:
{host1:myitem[param].change(0)} # {host2:myitem[param].change(0)}
When only one of the servers sees a modification since the previously received value, an event is triggered.
From the Zabbix Manual,
change (float, int, str, text, log)
Returns difference between last and previous values.
For strings:
0 - values are equal
1 - values differ
I believe, and am struggling with this EXACT situation myself, that the correct way to do this is via calculated items.
You want to create a new ITEM, not a trigger (yet!), that performs a calculated comparison on multiple item values (string difference, numbers within a range, etc.).
Once you have that item, have the calculation give you a value you can trigger off of. You can use ANY trigger functions in your calculation, along with arithmetic operations.
Now to the issue (I've submitted a feature request for this, because it is extremely limiting): most trigger expressions evaluate to a number or a 0/1 boolean.
I think I have a solution for my problem, which is that I am tracking a version number from a webpage (e.g. v2.0.1). I believe I can use string concatenation and regex in calculated items to convert my string values into multiple numeric values, as those would be a breeze to compare (see the sketch below).
But again, this is convoluted and painful.
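To make the idea concrete, here is a rough sketch of the kind of conversion meant above (Python used only for illustration; the regex and the packing scheme are made up, not a Zabbix feature):

import re

def version_parts(version: str):
    # "v2.0.1" -> [2, 0, 1]
    return [int(x) for x in re.findall(r"\d+", version)]

def version_as_int(version: str, width: int = 3) -> int:
    # Pack the parts into a single integer so two versions compare numerically,
    # assuming each component stays below 10**width.
    value = 0
    for part in version_parts(version):
        value = value * 10**width + part
    return value

print(version_parts("v2.0.1"))     # [2, 0, 1]
print(version_as_int("v2.0.1"))    # 2000001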
If you want my advice, have yourself or a dev look at the code for trigger expressions and see if you can submit a patch adding a trigger function for simple string comparison (difference, length, possible conversion to numerical values (using binary and/or hex combinations), etc.).
I'm trying to work on a patch myself, but I don't have the time, as I have so much monitoring to implement. While Zabbix is powerful, it has several huge flaws; I still believe it's the best monitoring system out there.
Simple answer: Create a UserParameter until someone writes a patch.
You could change your items to return numbers instead of strings. Because your items are checksums that use only the characters [0-9a-f], they are numbers written in hexadecimal, so you would need to convert the checksum to a decimal number.
Because the checksum is a big number, you would need to limit the hexadecimal number to 8 characters for the Numeric (unsigned) type before conversion (a sketch of this follows the list below). Or, if you want higher precision, you could use float (but that would be more work):
Numeric (unsigned) - 64bit unsigned integer
Numeric (float) - floating point number
Negative values can be stored.
Allowed range (for MySQL): -999999999999.9999 to 999999999999.9999 (double(16,4)).
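A rough sketch of that conversion (Python shown only for illustration; in practice the script or UserParameter that produces the item value would do the equivalent, and the checksum below is made up):

checksum = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c4"   # example value only
truncated = checksum[:8]           # keep a fixed-length prefix to fit Numeric (unsigned)
print(int(truncated, 16))          # 981193698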
I wish Zabbix had a .hashedUnsigned() function that would compute a hash of a string and return it as a number. Such a function should be easy to write.