"Can we read Indexed file using JCL?" - mainframe

I am looking to read an indexed file using JCL. Is there any possibility of doing that? There is one KSDS file, and we have to read that file using its indexes and print the selected records to the console using only JCL, with no usage of COBOL.

I believe the program you are looking to execute with your JCL is IDCAMS, and that you want to use the PRINT command with the FROMKEY() and TOKEY() parameters.
The PRINT command is described in the IBM Documentation, a comprehensive set of documentation for z/OS and many of its components. Other IBM products such as Enterprise COBOL, CICS, DB2, and MQ have their own Documentation sites. If you're going to be using an IBM mainframe, it's a good idea to bookmark the sites for the products you use and become familiar with them.
This will not display output on the console, but it will write output to the SYSPRINT DD. I'm not sure there's a way to display this output on the console (which is the interface used by mainframe operators); typically that's where messages essential to system health and continued functioning are displayed. If you displayed the output you requested on the console, I suspect you'd get a request to stop doing that right quick.
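For illustration, a minimal job might look something like this. It's only a sketch: the job card, dataset name, and key values are made up.

//PRINTJOB JOB (ACCT),'PRINT KSDS',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  PRINT INDATASET(YOUR.KSDS.CLUSTER) -
        FROMKEY(00001000) -
        TOKEY(00001999) -
        CHARACTER
/*

CHARACTER formats the records as character data; DUMP or HEX are the alternatives if the records contain packed or binary fields. The selected records appear in the SYSPRINT output, not on the console.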
@NicC is quite correct in saying that the JCL is not doing anything other than requesting that the IDCAMS program (in this particular case) be executed. If you're a Linux person, think of it this way:
Suppose you have a shell script...
#! /bin/bash
# sort the file named by the first argument
sort < "$1"
...would you say the script is doing the work, or the sort program?
JCL has no looping constructs and no way to programmatically alter variables. JCL allows you to request that programs be executed by the operating system, and it gives you a way to specify their inputs and outputs.


Detecting if the code read the specified input file

I am writing some automated tests that check code and provide feedback to the programmer.
One of the requirements is to detect whether the code has successfully read the specified input file; if not, we need to provide feedback to the user accordingly. One way to detect this was the atime timestamp, but since our server's drive is mounted with the relatime option, we do not get atime updates for every file read. Changing this option to record every atime is not feasible, as it slows down our I/O operations significantly.
Is there any other alternative we can use to detect whether the given code indeed reads the specified input file?
Here's a wild idea: intercept the read call at some point. One possible approach goes more or less like this:
The program does all its reading through an abstraction. For example, MyFileUtils.read(filename) (custom) instead of File.read(filename) (stdlib).
During normal operation, MyFileUtils simply delegates the work to File (or whatever built-in libraries/calls your system provides).
But under test, MyFileUtils is replaced with a special test version which, along with delegating, also reports usage to the framework.
Note that in some environments/languages it might be possible to inject code into File directly, in which case the abstraction is not needed.
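If you cannot change the code at all, a blunter flavor of the same interception idea on Linux is to trace the program's system calls from outside with strace. A minimal sketch (the program and file names are hypothetical, and note that seeing an open is still not proof of a successful read):

#!/bin/bash
# run the code under test, logging the open/openat syscalls it makes
strace -f -e trace=open,openat -o /tmp/trace.log ./program_under_test input.txt

# did it at least open the expected input file?
if grep -q 'input.txt' /tmp/trace.log; then
    echo "input file was opened"
fi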
I agree with Sergio: touching a file doesn't mean that it was read successfully. If you want to be really sure, the programs under test have to send some sort of indication back, and of course there are many options for getting that.
A pragmatic way could be: assuming that the programs under test create log files, your "test monitor" could check that the log files contain fixed entries such as "reading xyz PASSED" or something similar.
If your "code under test" doesn't create log files; maybe: consider changing that.

Sensitive Data in Command Line Interfaces

I know it's frowned upon to use passwords in command line interfaces like in this example:
./commandforsomething -u username -p plaintextpassword
My understanding is that the reason for that (on Unix systems at least) is that the password can be read in the scrollback as well as in the .bash_history file (or the equivalent for whatever shell you use).
HOWEVER, I was wondering whether it is safe to use that sort of interface with sensitive data programmatically. For example, in Perl you can execute a command using backticks, the exec function, or the system function (I'm not 100% sure of the differences between these, apart from backticks returning the executed command's output rather than a return value... but that's a question for another post, I guess).
So, my question is this: Is it safe to do things LIKE
system("command", "userarg", "passwordarg");
as it essentially does the same thing, just without getting recorded in scrollback or history? (Note that I only use Perl as an example; I don't care about the Perl-specific answer, just the generally accepted principle.)
It's not only about shell history.
ps shows all arguments passed to a program. The reason passing secrets as arguments is bad is that any user could potentially see other users' passwords just by running ps in a loop. The cited code won't change much, as it essentially does the same thing.
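You can see this for yourself in two lines. A small demonstration (the "password" is obviously fake, and perl just stands in for any long-running program):

# start a long-running process with a secret in its argument list
perl -e 'sleep 60' hunter2 &

# any user on the machine can now read it from the process table
ps -ww -o pid,args -p $!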
You can try to pass secrets via the environment: if a user doesn't have access to the given process, its environment won't be shown. This is better, but still a pretty bad solution (e.g., if the program fails and dumps core, all the passwords get written to disk).
If you use environment variables, test with ps e (or ps -E on some systems), which shows a process's environment variables. Run it as a different user than the one executing the program; basically, simulate the attacker and see whether you can snoop the password. On a properly configured system you shouldn't be able to.
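A sketch of that simulation (MYAPP_PASSWORD is a made-up name, and <pid> is the process ID you noted in the first shell):

# pass the secret through the environment instead of argv
MYAPP_PASSWORD=s3cret perl -e 'sleep 60' &
echo $!                # note the PID

# then, logged in as a different user, try to snoop it:
ps e -ww -p <pid>      # Linux: BSD-style "e" appends the environment
ps -E -p <pid>         # some systems (e.g. macOS) use -E instead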

Unix and Linux /proc PID system

For my intro to operating systems class we were introduced to the /proc directory and many of the features that can be used to access the data stored under the process IDs available in /proc.
While trying out some commands I had learned (and a few I looked up) on the UNIX server hosted by my school, I noticed that some files present under the directory of a process I created were reported as a file type called "TeX font metric data" (a .tfm file). I figured that was the type reported for the files my professor had shown us how to get data from, like status and map.
When I entered the command cat /proc/(PID)/status to look into the status file, I got a random assortment of characters and whitespace. When I tried the same command on a process I created on my school's Linux server, I saw the information I expected in the status and map files.
My question is:
Why did the Unix server produce random characters from my process's /proc/(PID)/status file, while the Linux server gave me the data I expected from the same command? Also, is there a way to access the Unix /proc data through the /proc directory?
The Linux procfs you are familiar with, aka /proc, is not a POSIX thing. It's OS-specific, and multiple OSes just happen to implement similar things also called /proc.
Because no formal standard covers it, it's allowed to be (and is going to be) different on any *nix-like system that implements it.
My guess about /proc/(PID)/status is that your UNIX is dumping the process status in binary form instead of easy-to-read plain text.
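If the box does turn out to be Solaris, the files under /proc/<pid> there are binary C structs meant to be read by tools, not by cat. A quick comparison ($$ is just the current shell's PID):

# Linux: the status file is plain text
cat /proc/$$/status

# Solaris: use the proc tools instead of cat
pflags $$      # process flags and state
pargs $$       # arguments (add -e for the environment)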
See also:
Knowing the process status using procfs/<pid>/status
If you can determine WHAT Unix you're on (odds are Solaris, since there's a free variant), you should be able to find a more specific answer.

Can you load a tree structure in memory with Linux shell?

I want to create an application with a Linux shell script like this — but can it be done?
This application will create a tree containing data. The tree should be loaded in memory, and the in-memory tree should be readable from any other external Linux script.
Is it possible to do it with a Linux shell?
If yes, how can you do it?
And are there any simple examples for that?
There are a large number of misconceptions on display in the question.
Each process normally has its own memory; there's no trivial way to load 'the tree' into one process's memory and make it available to all other processes. You might devise a system of related programs that know about a shared memory segment (somehow — there's a problem right there) that contains the tree, but that's about it. They'd be special programs, not general shell scripts. That doesn't meet your 'any other external Linux script' requirement.
What you're seeking is simply not available in the Linux shell infrastructure. That answers your first question; the other two are moot given the answer to the first.
There is a related discussion here. They use the shared-memory filesystem /dev/shm and, ostensibly, it works for multiple users. At least, it's worth a try:
http://www.linuxquestions.org/questions/linux-newbie-8/bash-is-it-possible-to-write-to-memory-rather-than-a-file-671891/
Edit: just tried it with two users on Ubuntu - it looks like a normal directory and REALLY WORKS with the right chmod.
See also:
http://www.cyberciti.biz/tips/what-is-devshm-and-its-practical-usage.html
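A minimal sketch of the approach (the path names are made up; /dev/shm is a tmpfs, so everything under it lives in RAM):

# writer script: store the tree as directories and files under tmpfs
mkdir -p /dev/shm/mytree/branch1
echo 42 > /dev/shm/mytree/branch1/leaf1
chmod -R a+rX /dev/shm/mytree    # let other users read it

# any other script (or user) can then read the "in-memory" tree:
cat /dev/shm/mytree/branch1/leaf1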
I don't think there is a way to do this if you want to keep all of these requirements:
Building this as a shell script
In-memory
Usable across terminals / from external scripts
You would have to give up at least one requirement:
Give up the shell-script requirement - build this in C to run as a Linux process. I only understand this well enough to say that it would be non-trivial.
Give up the in-memory requirement - serialize the tree and keep the data in a temp file, as sketched below. This works as long as the file is small and the performance bottleneck isn't access to the tree. The good news is that you can use the data across terminals and from external scripts.
Give up the usability-from-external-scripts requirement - you can technically build a script and run it by sourcing it, which adds many (read: a mess of) variables representing the tree into your current shell session.
None of these alternatives is great, but if you had to go with one, number 2 is probably the least problematic.
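As a minimal sketch of option 2 (the file name and layout are invented for illustration), the tree can be flattened into path=value lines so any external script can query it with standard tools:

# serialize the tree to a temp file, one node per line
cat > /tmp/tree.txt <<'EOF'
root/branch1/leaf1=10
root/branch1/leaf2=20
root/branch2/leaf3=30
EOF

# an external script looks up a node like this:
grep '^root/branch1/leaf2=' /tmp/tree.txt | cut -d= -f2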

Share data between bash and php

We are trying to create a project that will run on Linux, but we want to see the results from a browser. We want to use PHP for that, but we are not really sure how to share data between those two environments. We don't want to use MySQL or any other DBMS; it's not worth the RAM just for one or two values.
So the question is: "We want to share one, or at most two, values between bash and PHP. How can we do this without a third-party application or server?"
Thanks for any answers,
Baris
I assume you have a bash script and you want to run it from PHP and handle the output in some way. PHP has several functions to accommodate that.
The backtick operator example in the PHP docs is:
<?php
$output = `ls -al`;
echo "<pre>$output</pre>";
?>
It's not entirely clear from your question what you're trying to do... but one possible solution would be to write your data to a file or two. Either PHP or your CLI environment can read/write the file to get/set the data. Note that if both environments need to write the data, you would have to implement some sort of locking mechanism.
You could also use something like memcached, which would provide you with a memory-based key/value store without the "overhead" of a complete RDBMS solution.
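For what it's worth, the PHP side of that is only a few lines, assuming the PECL memcached extension and a memcached daemon on localhost (which is, admittedly, a third-party server):

<?php
// a minimal sketch using the Memcached extension
$m = new Memcached();
$m->addServer('localhost', 11211);  // default memcached port
$m->set('shared_value', 42);        // written here, readable by any client
echo $m->get('shared_value');
?>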
This is really a Stack Exchange question, but PHP runs fine in CLI mode and can do whatever bash would do without much extra effort. So I would use PHP for the background or CLI process (optionally calling a bash shell script as needed, if you want to use PHP as a wrapper only), then use PHP for the web part as well. The two parts can share PHP code if required, particularly the part that reads and writes the shared file.
Locking this file will be the main challenge, as mentioned - creating a lock directory with mkdir (since it's atomic) would be one way, but you'd also need to ensure stale locks can be cleaned up.
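For illustration, the mkdir-lock pattern on both sides might look like this (the file and lock names are made up):

#!/bin/bash
# bash writer: take the lock, write the value, release the lock
while ! mkdir /tmp/shared.lock 2>/dev/null; do sleep 0.1; done
echo "42" > /tmp/shared.data
rmdir /tmp/shared.lock

<?php
// PHP reader: same lock protocol
while (!@mkdir('/tmp/shared.lock')) { usleep(100000); }
$value = file_get_contents('/tmp/shared.data');
rmdir('/tmp/shared.lock');
echo $value;
?>

A crashed process can leave the lock directory behind, which is exactly the cleanup problem mentioned above.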
UPDATE: this approach would also work if you want to write to shared RAM instead of a file - you could use memcached or similar from PHP, but there is no way to do that from bash.
