Quickjob MOVZON X'FF' to OFA1 - mainframe

What does MOVZON X'FF' do in Quickjob? I believe it just moves input to output. Please let me know if I am wrong.

The smallest unit of information is the bit. Processors usually don't work on single bits when accessing memory; they work on bytes. A byte consists of 8 consecutive bits (for most architectures).
To describe how different processor instructions work with bytes, bytes are sometimes subdivided into two 4-bit groups, called nibbles. Counting left to right, bits 0-3 are called "left nibble", "high order nibble", or "zone nibble". Bits 4-7, the right half, are called "right nibble", "low order nibble", or "number nibble".
There are instructions that work on the whole byte, e.g. MOVE. And there are instructions that work on nibbles. MOVEZONE (MOVZON) works on zone nibbles and leaves the number nibbles alone; MOVENUM (MOVNUM) works on number nibbles, and leaves the zone nibbles alone.
These kinds of instructions are usually used with bytes that contain numeric values, coded as either zoned decimal or packed decimal. They are rather exotic when working on text data.

This reference is used.
Given the instruction:
MOVZON X'FF' to OFA1
The receiving field OFA1 refers to the first record position (the 1) of the output file (the OF) designated as A. The instruction will set the high-order bits (0-3, the "zone bits") of the first position to ones, matching bits 0-3 of the X'FF'.
However, as a matter of style, it appears the instruction should have been written as MOVZON X'F0' TO OFA1, since the low-order bits (4-7) are not used.
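To make the zone-nibble behavior concrete, here is a small sketch in Java (not Quickjob) of what a zone-nibble move does to a single byte, assuming the semantics described above; the method name movzon is just illustrative:

public class ZoneMoveDemo {
    // Sketch only: copy the zone nibble (bits 0-3, the high-order half) of
    // 'source' into 'target', leaving the number nibble (bits 4-7) alone.
    static int movzon(int source, int target) {
        return (source & 0xF0) | (target & 0x0F);
    }

    public static void main(String[] args) {
        int source = 0xFF;                                        // X'FF'
        int target = 0xC5;                                        // EBCDIC 'E'
        System.out.printf("X'%02X'%n", movzon(source, target));   // prints X'F5'
    }
}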

Related

Problems with SHA 2 Hashing and Java

I am working on following the SHA-2 cryptographic hash functions as described at https://en.wikipedia.org/wiki/SHA-2.
I am examining the lines that say:
begin with the original message of length L bits
append a single '1' bit
append K '0' bits, where K is the minimum number >= 0 such that L + 1 + K + 64 is a multiple of 512
append L as a 64-bit big-endian integer, making the total post-processed length a multiple of 512 bits.
I do not understand the last two lines. If my string is short, can its length after adding K '0' bits be 512? How should I implement this in Java code?
First of all, it should be made clear that the "string" that is talked about is not a Java String but a bit string. These algorithms are binary/bit based. The implementation will generally not handle bits but bytes. So there is a translation phase where you should see bytes instead of bits.
SHA-2 operates on blocks of 512 bits (SHA-224/256) or 1024 bits (SHA-384/512). So basically you have a 64 or 128 byte buffer that you are filling before operating on it. You could also directly cache the data in 32 bit int fields (SHA-224/256) or 64 bit long fields (SHA-384/512), as that is the word size that is operated on.
Now the padding is a relatively simple procedure, called bit-padding. As it is used in big-endian mode (SHA-2 fortunately uses this instead of the braindead little-endian mode in SHA-3), the padding consists of a single bit set on the highest order bit of a byte, with the rest filled with zeros. That makes for a value of (byte) 0x80 which must be put in the buffer.
If you cannot create this padding because the buffer is full, then you will have to process the previous block, and then set the first byte of the now available buffer to (byte) 0x80. In newer Java you can also write (byte) 0b1_0000000 by the way, which is more explicit.
Now you simply add zeros until you have 8 or 16 bytes left, again depending on the hash output size used. If there aren't enough bytes, then fill to the end, process the block, and restart filling with zero bytes until you have 8 or 16 bytes left again.
Now finally you have to encode the number of bits in those 8 or 16 bytes you've left. So multiply your input length in bytes by eight, and make sure you encode those bytes in big-endian order, as you'd expect in Java, with the least significant bits as far to the right as possible. You might want to use https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#putLong-long- for this if you don't want to program it yourself. You can probably forget about anything over 2^56 bytes anyway, so if you have SHA-384/SHA-512 then simply set the first eight bytes to zero.
And that's it, except that you still need to process that last block and then use as many bytes from the left as required for your particular output size.
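As a rough illustration, here is a minimal Java sketch of the padding described above for SHA-224/256 (64-byte blocks, 8-byte big-endian bit length at the end), using ByteBuffer as suggested; the class and method names are made up for the example and this is not production code:

import java.nio.ByteBuffer;

public class Sha256Padding {
    // Sketch of the bit-padding described above for SHA-224/256:
    // message || 0x80 || zero bytes || 64-bit big-endian bit length.
    static byte[] pad(byte[] message) {
        long bitLength = (long) message.length * 8;    // L in bits
        int rem = (message.length + 1 + 8) % 64;       // bytes past a block boundary so far
        int zeroBytes = (rem == 0) ? 0 : 64 - rem;     // the K zero bits, expressed in bytes
        ByteBuffer buf = ByteBuffer.allocate(message.length + 1 + zeroBytes + 8);
        buf.put(message);
        buf.put((byte) 0x80);                          // single '1' bit followed by seven '0' bits
        buf.position(buf.position() + zeroBytes);      // skip the zero bytes (buffer is zero-filled)
        buf.putLong(bitLength);                        // 64-bit big-endian length (ByteBuffer default)
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] padded = pad("abc".getBytes());
        System.out.println(padded.length);             // 64: "abc" pads out to one full block
    }
}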

What does the IS_ALIGNED macro in the linux kernel do?

I've been trying to read the implementation of a kernel module, and I'm stumbling on this piece of code.
unsigned long addr = (unsigned long) buf;
if (!IS_ALIGNED(addr, 1 << 9)) {
        DMCRIT("#%s in %s is not sector-aligned. I/O buffer must be sector-aligned.", name, caller);
        BUG();
}
The IS_ALIGNED macro is defined in the kernel source as follows:
#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
I understand that data has to be aligned along the size of a datatype to work, but I still don't understand what the code does.
It left-shifts 1 by 9, then subtracts 1, which gives 111111111 in binary (nine ones). Then that value is bitwise-ANDed with x.
Why does this code work? How is this checking for byte alignment?
In systems programming it is common to need a memory address to be aligned to a certain number of bytes -- that is, several lowest-order bits are zero.
Basically, IS_ALIGNED(addr, 1 << 9) checks whether addr is on a 512-byte (2^9) boundary, i.e. whether its last 9 bits are zero; the ! in the code above triggers the error path when it is not. This is a common requirement when erasing flash locations because flash memory is split into large blocks which must be erased or written as a single unit.
Another application of this that I ran into: I was working with a certain DMA controller which has a modulo feature. Basically, that means you can allow it to change only the last several bits of an address (the destination address in this case). This is useful for protecting memory from mistakes in the way you use a DMA controller. Problem is, I initially forgot to tell the compiler to align the DMA destination buffer to the modulo value. This caused some incredibly interesting bugs (random variables that have nothing to do with the thing using the DMA controller being overwritten... sometimes).
As far as "how does the macro code work?", if you subtract 1 from a number that ends with all zeroes, you will get a number that ends with all ones. For example, 0b00010000 - 0b1 = 0b00001111. This is a way of creating a binary mask from the integer number of required-alignment bytes. This mask has ones only in the bits we are interested in checking for zero-value. After we AND the address with the mask containing ones in the lowest-order bits, we get 0 if and only if the lowest 9 (in this case) bits are zero.
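The same arithmetic can be demonstrated outside the kernel; here is a small Java sketch (the kernel macro itself is C, this only shows the mask trick), with addresses chosen purely for illustration:

public class AlignmentCheck {
    // Same arithmetic as the IS_ALIGNED macro: an address is aligned to
    // 'alignment' bytes iff its low log2(alignment) bits are all zero.
    // 'alignment' must be a power of two for this to work.
    static boolean isAligned(long addr, long alignment) {
        return (addr & (alignment - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isAligned(0x1000, 1 << 9));  // true:  0x1000 = 8 * 512
        System.out.println(isAligned(0x1200, 1 << 9));  // true:  0x1200 = 9 * 512
        System.out.println(isAligned(0x1234, 1 << 9));  // false: low 9 bits are not zero
    }
}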
"Why does it need to be aligned?": This comes down to the internal makeup of flash memory. Erasing and writing flash is a much less straightforward process then reading it, and typically it requires higher-than-logic-level voltages to be supplied to the memory cells. The circuitry required to make write and erase operations possible with a one-byte granularity would waste a great deal of silicon real estate only to be used rarely. Basically, designing a flash chip is a statistics and tradeoff game (like anything else in engineering) and the statistics work out such that writing and erasing in groups gives the best bang for the buck.
At no extra charge, I will tell you that you will be seeing a lot of this type of thing if you are reading driver and kernel code. It may be helpful to familiarize yourself with the contents of this article (or at least keep it around as a reference): https://graphics.stanford.edu/~seander/bithacks.html

What do xfrm_replay_state_esn fields mean?

I'm trying to understand a little bit more about Linux kernel IPSec networking by looking at the kernel source. I understand conceptually that IPSec prevents replay attacks with a sequence number and a replay window, i.e. if a recipient receives a packet with a sequence number that is not within the replay window, or that it has received before, then it drops that packet and increments the replay counter.
I'm trying to correlate this to the structure xfrm_replay_state_esn which is defined as such:
struct xfrm_replay_state_esn {
        unsigned int    bmp_len;
        __u32           oseq;
        __u32           seq;
        __u32           oseq_hi;
        __u32           seq_hi;
        __u32           replay_window;
        __u32           bmp[0];
};
I've tried searching for documentation, but it's scant and I haven't been able to find a man page for the various functions and structures, so I don't understand what the individual fields relate to.
XFRM is an IPSec implementation for the Linux kernel. The name XFRM stands for "transform" referencing the transformation of IP packets as per the IPSec protocol.
The following RFCs are relevant for IPSec:
RFC4301: Definition of the IPSec protocol.
RFC4302: Definition of the Authentication Header (AH) sub-protocol for ensuring authenticity of IP packets.
RFC4303: Definition of the Encapsulating Security Payload (ESP) sub-protocol for ensuring authenticity and secrecy of IP packets.
The IPSec protocol allows for sequence numbers of size 32 bits or 64 bits. The 64 bit sequence numbers are referred to as Extended Sequence Numbers (ESN).
The anti-replay mechanism is defined in the RFCs for both AH and ESP. The mechanism keeps a window of acceptable sequence numbers of incoming packets. The window extends back from the highest sequence number received so far, defining a lower bound for the acceptable sequence numbers. When receiving a sequence number below that bound, it is rejected. When receiving a sequence number higher than the current highest sequence number, the window is shifted forward. When receiving a sequence number within the window, the mechanism will mark this sequence number in a checklist for ensuring that each sequence number in the window is only received once. If the sequence number has already been marked, it is rejected.
This checklist can be implemented as a bitmap, where each sequence number in the window is represented by a single bit, with 0 meaning this sequence number has not been received yet, and 1 meaning it has already been received.
Based on this information, the meaning of the fields in the xfrm_replay_state_esn struct can be given as follows.
The struct holds the state of the anti-replay mechanism with extended sequence numbers (64 bits):
The highest sequence number received so far is represented by seq and seq_hi. Each is a 32 bit integer, so together they can represent a 64 bit number, with seq holding the lower 32 bits and seq_hi holding the higher 32 bits. The reason for splitting the 64 bit value into two 32 bit values, instead of representing it as a single 64 bit variable, is that the IPSec protocol mandates an optimization where only the lower 32 bits of the sequence number are included in the packet. For this reason, it is more convenient to have the lower 32 bits as a separate variable in the struct, so that they can be accessed directly without resorting to bit operations.
The sequence number counter for outgoing packets is tracked in oseq and oseq_hi. As before, the 64 bit number is represented by two 32 bit variables.
The size of the window is represented by replay_window. The smallest acceptable sequence number is given by the sequence number expressed by seq and seq_hi, minus replay_window, plus one.
The bitmap for checking off received sequence numbers within the window is represented by bmp. It is defined as a zero-sized array, but when the memory for the struct is allocated, additional memory is reserved after the struct, which can then be accessed e.g. with bmp[i] (which is of course just syntactic sugar for *(bmp+i)). The size of the bitmap is held in bmp_len. It is of course related to the window size, i.e. window size divided by 8*sizeof(u32), rounded up. I would speculate that it is stored explicitly to avoid having to recalculate this value frequently.
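To illustrate the bookkeeping described above, here is a rough Java sketch of a window-plus-bitmap check. It is not the kernel's actual algorithm (window shifting and clearing of stale bits are omitted) and all names are made up, but it shows how bmp_len relates to replay_window and how a sequence number maps to a bit:

public class ReplayWindow {
    // Rough sketch only: one bit per sequence number inside the window,
    // 32 sequence numbers per int, like bmp above.
    final int replayWindow;   // window size, like replay_window
    final int[] bmp;          // bitmap; bmp.length plays the role of bmp_len
    long seq;                 // highest sequence number seen so far

    ReplayWindow(int replayWindow) {
        this.replayWindow = replayWindow;
        this.bmp = new int[(replayWindow + 31) / 32];       // window size / 32, rounded up
    }

    // Returns true if the sequence number is acceptable, and marks it as seen.
    boolean testAndSet(long seqNum) {
        if (seqNum + replayWindow <= seq) return false;      // below the window: reject
        int offset = (int) (seqNum % (long) (bmp.length * 32));
        int word = offset / 32, bit = offset % 32;
        if ((bmp[word] & (1 << bit)) != 0) return false;     // already marked: replay, reject
        bmp[word] |= (1 << bit);                             // mark as received
        if (seqNum > seq) seq = seqNum;                      // window moves forward
        return true;
    }
    // Note: clearing stale bits when the window slides is omitted in this sketch.
}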

When will CPSR GE[3:0] bits be modified

I read in ARM docs that:
GE[3:0], bits[19:16]
The instructions described in Parallel addition and subtraction instructions on page A4-171 update these flags to indicate the results from individual bytes or halfwords of the operation. These flags can control a later SEL instruction.
So apparently GE[3:0] stands for "eq/lt/gt"?
I ran into a couple of strange issues which I don't yet have a clue about, but they all have CPSR value xxxf0030, so the GE bits are 0b1111? What does that stand for? Is it normal for these GE bits?
Thanks in advance!
In the ARMv7 ARM (which matches that text), the details of how the GE flags get set are only in the operation pseudocode of the parallel instructions themselves. Sadly, they seem to have removed this nice prose description which was in the ARMv6 ARM:
Instructions that operate on halfwords:
set or clear GE[3:2] together, based on the result of the top halfword calculation
set or clear GE[1:0] together, based on the result of the bottom halfword calculation.
Instructions that operate on bytes:
set or clear GE[3] according to the result of the top byte calculation
set or clear GE[2] according to the result of the second byte calculation
set or clear GE[1] according to the result of the third byte calculation
set or clear GE[0] according to the result of the bottom byte calculation.
Each bit is set (otherwise cleared) if the results of the corresponding calculation are as follows:
for unsigned byte addition, if the result is greater than or equal to 2^8
for unsigned halfword addition, if the result is greater than or equal to 2^16
for unsigned subtraction, if the result is greater than or equal to zero
for signed arithmetic, if the result is greater than or equal to zero.
As arithmetic flags, they could have any old value (undefined at reset, and can be freely written to via APSR), so until you've specifically used one of the instructions which sets them, they're pretty much meaningless and can be ignored.
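As a concrete illustration of the rule quoted above for unsigned byte addition (e.g. UADD8), here is a small Java sketch (ARM assembly would be the real thing): GE[i] is set when byte lane i carries out, i.e. its sum is greater than or equal to 2^8.

public class GeFlagsDemo {
    // GE flags for unsigned byte addition, per the ARMv6 rules quoted above:
    // GE[i] is set when byte lane i of the sum is >= 2^8 (the lane carried out).
    static int geFlagsUadd8(int a, int b) {
        int ge = 0;
        for (int lane = 0; lane < 4; lane++) {
            int x = (a >>> (8 * lane)) & 0xFF;
            int y = (b >>> (8 * lane)) & 0xFF;
            if (x + y >= 256) ge |= (1 << lane);   // GE[0] = bottom byte, GE[3] = top byte
        }
        return ge;
    }

    public static void main(String[] args) {
        // every lane carries (0xFF + 0x01) -> GE = 0b1111
        System.out.println(Integer.toBinaryString(geFlagsUadd8(0xFFFFFFFF, 0x01010101)));
        // only the bottom lane carries -> GE = 0b0001
        System.out.println(Integer.toBinaryString(geFlagsUadd8(0x000000FF, 0x00000001)));
    }
}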

Why is string manipulation more expensive?

I've heard this so many times that I have taken it for granted. But thinking back on it, can someone help me understand why string manipulation, say comparison, is more expensive than comparing, say, an integer or some other primitive?
8-bit example:
1 bit can be 1 or 0. With 2 bits you can represent 0, 1, 2, and 3. And so on.
With a byte you have 2^8 possibilities, from 0 to 255.
In a string a single letter is stored in a byte, so "Hello world" is 11 bytes.
If I want to compute 100 + 100, each 100 is stored in 1 byte of memory, so I need only two bytes to hold the two numbers being summed. The result, 200, again fits in 1 byte.
Now let's try it with strings: "100" + "100" is 3 bytes plus 3 bytes, and the result, "100100", needs 6 bytes to be stored.
This is over-simplified, but more or less it works in this way.
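For what it's worth, the same arithmetic in Java (Java chars are not literally one byte each, but the proportions are the same):

public class SizeDemo {
    public static void main(String[] args) {
        int a = 100, b = 100;
        int sum = a + b;                       // 200 still fits in a single byte
        String s = "100" + "100";              // concatenation produces a new, longer string
        System.out.println(sum);               // 200
        System.out.println(s + " -> " + s.length() + " chars");   // 100100 -> 6 chars
    }
}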
The int data type in C# was carefully selected to be a good match with processor design. The processor can store an int in a CPU register, a storage location that's easily a factor of 3 faster than memory, and it takes a single CPU instruction to compare values of type int. The CMP instruction runs in less than a single CPU cycle, a fraction of a nanosecond.
That doesn't work nearly as well for a string: it is a variable-length data type, and every single char in the string must be compared to test for equality. So it is automatically proportionally slower by the size of the string. Furthermore, string comparison is afflicted by culture-dependent comparison rules, the kind that make "ss" and "ß" equal in German and "Aa" and "Å" equal in Danish. Nothing subtle to deal with; it is taken care of by highly optimized table-driven code inside the CLR, but it can't beat CMP.
I've always thought it was because of the immutability of strings. That is, every time you make a change to the string, it requires allocating memory for a whole new string (rather than modifying the original in place).
Probably a woefully naive understanding but perhaps someone else can expound further.
There are several things to consider when looking at the "cost" of manipulating strings.
There is the cost in terms of memory usage, there is the cost in terms of CPU cycles used, and there is a cost associated with the complexity of the code involved.
Integer manipulation (add, subtract, multiply, divide, compare) is most often done by the CPU at the hardware level, in a few (or even one) instructions. When the manipulation is done, the answer fits back into the same size chunk of memory.
Strings are stored in blocks of memory, which have to be manipulated a byte or word at a time. Comparing two 100 character long strings may require 100 separate comparison operations.
Any manipulation that makes a string longer will require either moving the string to a bigger block of memory or moving other stuff around in memory to allow the existing block to grow.
Any manipulation that leaves the string the same size, or smaller, could be done in place, if the language allows for it. If not, then again a new block of memory has to be allocated and the contents moved.
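A small Java sketch of the difference: comparing two ints is a single operation, while string equality has to walk the characters. The loop below is only an illustration of the idea, not the actual library code:

public class CompareCost {
    // Comparing strings character by character: one comparison per character
    // until a difference is found, versus a single comparison for two ints.
    static boolean sameString(String a, String b) {
        if (a.length() != b.length()) return false;
        for (int i = 0; i < a.length(); i++) {
            if (a.charAt(i) != b.charAt(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int x = 100, y = 100;
        System.out.println(x == y);                           // one comparison
        System.out.println(sameString("100100", "100100"));   // up to length-many comparisons
    }
}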

Resources