How to find the number of bits of OS/390 or z/OS? - mainframe

What is the command for finding the number of bits of an OS/390 or a z/OS system?

Since there didn't seem to be a "real" answer on this thread, I thought I'd provide one just in case anyone needs the information...
The definitive source of whether you're running in 64-bit mode is the STORE FACILITY LIST (STFL, or the extended STFLE) hardware instruction. It sets two different bits - one to indicate that the 64-bit z/Architecture facility is installed, and one to indicate that the 64-bit z/Architecture facility is active (it was once possible to run in 31-bit mode on 64-bit hardware, so this would give you the "installed, but not active" case).
The operating system generously issues STFL/STFLE during IPL, saving the response in the PSA (that's low memory, starting at location 0). This is handy, since STFL/STFLE are privileged instructions, but testing low storage doesn't require anything special. You can check the value at absolute address 0xc8 (decimal 200) for the 0x20 bit to tell that the system is active in 64-bit mode, otherwise it's 31-bit mode.
Although I doubt there are any pre-MVS/XA systems alive anymore (that is, 24-bit), for completeness you can also test the CVTDCB.CVTMVSE bit - if this bit is not set, then you have a pre-MVS/XA 24-bit mode system. Finding this bit is simple - but left as an exercise for the reader... :)
If you're not able to write a program to test the above, then there are a variety of ways to display storage, such as TSO TEST or any mainframe debugger, as well as by looking at a dump, etc.

While I was not able to find commands to give this information, I think below is what you're looking for:
According to this: https://en.wikipedia.org/wiki/OS/390
z/OS is OS/390 with various extensions including support for 64-bit architecture.
So if you're on a zSeries processor with z/OS, you're on 64-bit.
According to this: https://en.wikipedia.org/wiki/IBM_ESA/390
OS/390 was installed on ESA/390 computers, which were 32-bit computers, but were 31-bit addressable.

For either z/OS or OS/390, I believe you can do a D IPLINFO and look for ARCHLEVEL. ARCHLEVEL 1 = 31-bit, ARCHLEVEL 2 = 64-bit. But it's been a very long time since I've been on an OS/390 system.


Delphi 11.2: CreateWindowEx fails thread on x64

I'm using Peter Below's PBThreadedSplashForm to display a splash window during application startup.
This component worked great for 10 years, but, since updating my Delphi to 11.2, I get an AV on the CreateWindowEx call.
This happens on the Win64 platform only; there are no problems on Win32.
Anyone who knows what can be the cause of this?
This is one of the many issues that have surfaced in 11.2 due to the new default ASLR settings in the compiler and linker.
After a very quick glance at the source code I see this:
SetWindowLong( wnd, GWL_WNDPROC, Integer( thread.FCallstub ));
thread.FCallstub is defined as Pointer.
Just as I thought.
You see, pointers are of native size, so in 32-bit applications, pointers are 32 bits wide, while in 64-bit applications, pointers are 64 bits wide.
It was very common in the 32-bit world that pointer values were temporarily saved in Integers. This worked because a 32-bit pointer fits in a 32-bit Integer.
But in a 64-bit application, this is an obvious bug, since a 64-bit pointer doesn't fit in a 32-bit Integer. It's like taking a phone number like 5362417812 and truncating it to 17812, hoping that it will still "work".
Of course, in general, this causes bugs such as AVs and memory corruption.
However, until recently, there was a rather high probability that a pointer in a 64-bit Delphi application by "chance" didn't use its 32 upper bits (so it was like maybe $0000000000A3BE41, and so truncating it to $00A3BE41 didn't have any effect). So it seemed to work most of the time, but only by accident.
Now, recent versions of the Delphi compiler and linker enable ASLR by default, making such accidents much less likely.
And this is a good thing: If you have a serious bug in your code, it is better if you discover it right away and not "randomly" at your customers.
So, to fix the issue, you need to go through the code and make sure you never store a pointer in a 32-bit Integer. Instead, use a native-sized NativeInt, Pointer, LPARAM, or whatever is semantically appropriate. For the line above, that also means calling SetWindowLongPtr instead of SetWindowLong, since the latter only handles 32-bit values.
(Disabling ASLR will also make it work in "many" cases by accident again, but this is a very bad approach. Your software still has a very serious bug that may manifest itself at any time.)
In your code, there is also
Integer( Pchar( FStatusMessage ))
Integer( Pchar( msg ))

How to use sysenter in 64-bit userland programs linked against System V libraries?

Is it possible to use sysenter in a 64-bit program on Linux? Or is it impossible to adapt the use of sysenter to the System V calling convention without making other dynamically linked libraries crash? (I know the 32-bit way won't work, but I just want to know if it's possible to work around this, like with int 0x80.)
There is very little documentation on using sysenter even in 32-bit programs, so I couldn't find anything for 64-bit.
I know this is not recommended, but it's the only opcode I can use to trigger a system call, as part of a bug-bounty exploit where the program needs to exit using a special function that can be triggered only from normal execution.
It is possible to use it, but it goes through the 32-bit entry point of the kernel (check the code for more).
The actual location (and code) of this entry point depends on your kernel version.
For versions 4.2 and newer it is entry_SYSENTER_32.
For versions 4.1 and older it is ia32_sysenter_target.
Finally, SYSRET is not available from userspace (it can only be executed from ring 0). Check the Intel manual's description of the instruction.

Xen binary rewriting method

In full virtualization, what is the CPL of the guest OS?
In paravirtualization, the CPL of the guest OS is 1 (ring 1).
Is it the same in full virtualization?
I've also heard that some of the x86 privileged instructions are
not easily handled, and thus a "binary rewriting" method is required...
How does this "binary rewriting" happen?
I understand that in virtualization, the CPU is not emulated.
So how can the hypervisor change the binary instruction codes before
the CPU executes them? Does it predict the next instruction in memory and
update the memory contents before the CPU gets there?
If this is true, I think the hypervisor code (performing binary rewriting)
needs to intercept the CPU every time before an instruction of the guest OS is
executed. That seems absurd to me.
A specific explanation would be appreciated.
Thank you in advance!
If by full virtualization you mean hardware-supported virtualization, then the CPL of the guest is the same as if it were running on bare metal.
Xen never rewrites the binary.
This is something that VMware does (as far as I understand). To the best of my understanding (but I have never seen the VMware source code), the method consists of runtime patching of code that needs to run differently: typically, an existing opcode is replaced with something else - either something that causes a trap to the hypervisor, or a replacement sequence of code that "does the right thing". As I understand how this works in VMware, the hypervisor "learns" the code by single-stepping through a block, and either applies binary patches or marks the section as "clean" (doesn't need changing). The next time this code gets executed, it has already been patched or is clean, so it can run at "full speed".
In Xen, using paravirtualization (ring compression), then the code in the OS has been modified to be aware of the virtualized environment, and as such is "trusted" to understand certain things. But the hypervisor will still trap for example writes to the page-table (otherwise someone could write a malicious kernel module that modifies the page-table to map in another guest's memory, or some such).
The HVM method does intercept CERTAIN instructions - but the rest of the code runs at normal full speed, thanks to the hardware support in modern processors, such as SVM in AMD and VMX in Intel processors. ARM has a similar technology in the latest models of their processors, but I'm not sure what the name of it is.
I'm not sure if I've answered quite all of your questions; if I've missed something, or it's not clear enough, feel free to ask...

glibc fnmatch vulnerability: how to expose the vulnerability?

I have to validate a vulnerability on one of our 64-bit systems, which is running glibc 2.9.
http://scarybeastsecurity.blogspot.in/2011/02/i-got-accidental-code-execution-via.html
The above link gives a script which when passed a magic number apparently leads to arbitrary code execution. But when I tried it on my system, nothing seems to be happening.
Am I doing something wrong? Does the system crash if the vulnerability exists? How do I detect if it's accidental code execution?
If you're running on a 64-bit machine then the original circumstances of the bug don't apply. As you can see in Chris' blog, he's using a 32-bit Ubuntu 9.04 system. The exploit relies on causing the stack pointer to wrap about the 32-bit address space, leading to stack corruption.
I gave it a quick try on a 64-bit system with glibc 2.5, but saw malloc() failures instead of crashes.
$ ./a.out 3000000000
a.out: malloc() failed.
You asked how to identify accidental code execution; with the toy program here, which doesn't carry an exploit / payload, we'd expect to see either a SIGSEGV, SIGILL, or SIGBUS as the CPU tried to "execute" junk parts of the stack, showing up as the respective error message from the shell.
If you were to run into the problem on a 64-bit machine, you'd have to mimic the original code but provide a number that wraps the stack on a 64-bit machine. The original number provided was:
1073741796
$ bc
z=1073741796
z+28
1073741824
(z+28)*4
4294967296
2^32
4294967296
quit
$
So, one way of describing the input number is 2^32 / 4 - 28.
The analogous number for a 64-bit machine is 4611686018427387876:
$ bc
x=2^64
x
18446744073709551616
y=x/4
y
4611686018427387904
y-28
4611686018427387876
quit
$
However, to stand a chance of this working, you'd have to modify the reported code to use strtoull() or something similar; atoi() is normally limited to 32-bit integers and would be no use on the 64-bit numbers above. The code also contains:
num_as = atoi(argv[1]);
if (num_as < 5) {
    errx(1, "Need 5.");
}
p = malloc(num_as);
Where num_as is a size_t and p is a char *. So, you'd have to be able to malloc() a gargantuan amount of space (almost 4 EiB). Most people don't have enough virtual memory on their machines, even with disk space for backing, to do that. Now, maybe, just maybe, Linux would allow you to over-commit (and let the OOM Killer swoop in later), but the malloc() would more likely fail.
There were other factors that were relevant and that affect 32-bit systems in a way that they cannot affect 64-bit systems (yet).
If you're going to stand a chance of reproducing it on a 64-bit machine, you probably have to do a 32-bit compilation. Then, if the wind is behind you and you have appropriately old versions of the relevant software, perhaps you can reproduce it.

What was the single byte change to port WordStar from CP/M to DOS?

I was re-reading Joel's Strategy Letter II: Chicken and Egg problems and came across this fun quote:
In fact, WordStar was ported to DOS
by changing one single byte in the
code. (Real Programmers can tell you
what that byte was, I've long since
forgotten).
I couldn't find any other references to this with a quick Google search. Is this true or just a figure of speech? In the interest of my quest to become a "Real Programmer", what was the single byte change?
Sounds a bit exaggerated; I found some WordStar history here:
WordStar 3.0 for MS-DOS
Apr 1982
In one single all-night session Jim Fox patched the CP/M-86 version of WordStar to make it run under MS-DOS on the IBM PC so that it could be demonstrated to Rubenstein. The actual port was done by a group of Irish programmers using Intel development systems, which ran the ISIS II operating system. The software build was done on 8" floppies and the binary (executable) files were then transferred to the IBM PC by serial cable.
But...Joel maybe meant MS-DOS 1.0 / QDOS
MS-DOS 1.0 was actually a renamed version of QDOS (Quick and Dirty Operating System), which Microsoft bought from a Seattle company, appropriately named Seattle Computer Products, in July 1981. QDOS had been developed as a clone of the CP/M eight-bit operating system in order to provide compatibility with the popular business applications of the day such as WordStar and dBase. CP/M (Control Program for Microcomputers) was written by Gary Kildall of Digital Research several years earlier and had become the first operating system for microcomputers in general use.
You'd have to change more than one byte. CP/M-86 executables (.CMD) all have a 128-byte header, which isn't anything like the .EXE header.
If you restrict all your API calls to the common subset of CP/M and DOS, then you can use conditional assembly to build CP/M and DOS versions from the same source:
bdos:
    if CPM86
        int 0E0h
    else
        mov ah, cl
        int 21h
    endif
This Wikipedia entry claims that CP/M and MS-DOS share binary formats. It goes on to say:
Although the file format is the same
in MS-DOS and CP/M, this does not mean
that CP/M programs can be directly
executed under MS-DOS or vice versa;
MS-DOS COM files contain x86
instructions, while CP/M COM files
contain 8080, 8085 or Z80
instructions.
Under CP/M 3, if the first byte of a
COM file is 0xC9 then this indicates
the presence of a 256-byte header;
since 0xC9 corresponds to the 8080
instruction RET, this means that the
COM file will immediately terminate if
run on an earlier version of CP/M that
does not support this extension.
This implies that perhaps the fix/port was changing this first instruction into something else that allowed the rest to execute. Not sure though; that seems to imply that the binary must have been "fat", which seems unreasonable for a legacy binary.
WordStar was written in 8080 assembler, and there were tools back then to convert 8080 to 8086 assembler (the 8086 instruction set was designed to allow this) if all the code could fit into a single segment, so this is quite possible.
I first used WordStar in 1979, on a Z80 CP/M box. People today might not realise how lucky they are - how many MS Word users would be prepared as the first task on installing their word processor to have to write a couple of small assembler routines (in hex!) to interface the word processor efficiently (you could use the CP/M routines but they were dog slow and didn't work properly) with the screen and keyboard? Happy days...
"In fact, WordStar was ported to DOS by changing one single byte in the code. (Real Programmers can tell you what that byte was, I've long since forgotten)."
Mon 6/08/2009 6:27 pm. My assumption was Spolsky was talking about 8080 CP/M to 8086 MSDOS, and that the story is probably bogus. 8086 CP/M was never a big item -- I mean, it was crushed utterly by MSDOS -- and someone may have converted WordStar from 8080-CPM to 8086-CPM -- by reassembling it, as others have noted, using a special 8080-to-8086 translator thingey -- and then perhaps only a single byte had to be changed.
I'm not sure whether Joel's statement is accurate or not. Perhaps he meant the demonstration version that Jim Fox made?
See http://www.wordstar.org/wordstar/history/history.htm
I'll quote the pertinent section:
WordStar 3.0 for MS-DOS
Apr 1982
In one single all-night session Jim
Fox patched the CP/M-86 version of
WordStar to make it run under MS-DOS
on the IBM PC so that it could be
demonstrated to Rubenstein. The actual
port was done by a group of Irish
programmers using Intel development
systems, which ran the ISIS II
operating system. The software build
was done on 8" floppies and the binary
(executable) files were then
transferred to the IBM PC by serial
cable.
(Edit: Oops, too late. Someone else already found the exact same thing :-/ Feel free to ignore me.)
An important thing to understand is that at the time, 16-bit 8086 machines were just coming out to replace the existing 8-bit machines, on which the CP/M operating system was the Windows of the day. Everything with a disk drive intended for work ran CP/M. That version was later called CP/M-80 to differentiate it from CP/M-86 for the 8086 processor.
Unfortunately CP/M-86 took so long to get to market that QDOS was written to have SOMETHING to run programs on, and that was essentially a quick reimplementation of the CP/M functions (but with a different call syntax). QDOS was later bought by Microsoft and made into MS-DOS. Hence MS-DOS effectively has a CP/M-compatible core deep inside, and therefore the amount of work needed to get a CP/M-86 program to run under MS-DOS was limited (not a single byte, but manageable).
I had the pleasure of working for a few years with CCP/M-86, which allowed multitasking very similar to what Linux in text mode (with virtual consoles) allows today. Unfortunately it never caught on. Oh well, we have Linux :)
