Download a file from an FTP server and rename it on the host - mainframe

I have one file delivered to an FTP server daily. This file doesn't have the same name every day; the name includes the date and time of creation. For example, today the file is named 20130814_XX_YY_20130814152345 (created at 15:23:45), and tomorrow the file might be named 20130815_XX_YY_20130815152421. The _XX_YY_ part is always the same, but the date and time change every day.
I want to create a host JCL that gets this file with the variable name and renames it to a host file. How can I do this?
Thank you
Regards
Chuchito

STEP1: You can use LS in FTP and have its output written to disk, so you end up with a file containing the remote file name; that listing is what the next step works from.
STEP2: Process the contents of that file to generate the FTP control cards (at least for the GET). The generated GET will be of the form GET 20130814_XX_YY_20130814152345 'HLQ.MAINFRAM.DATASET', where the server-side file name comes from the listing obtained in STEP1 and the local (mainframe) dataset can be hard-coded, or supplied to the generation if flexibility is required (a sketch of the generation is shown below).
STEP3: Run FTP again with the control card(s) generated.
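On z/OS the generation in STEP2 would typically be done with Rexx, DFSORT/ICETOOL, or a small COBOL program; the following is only a sketch, in Python, of the string handling involved. The local file names and the assumption that the listing holds one entry per line with the file name as the last token are illustrative, not part of the original answer.

listing = "listing.txt"    # local copy of the FTP LS output (hypothetical name)
cards = "ftpcards.txt"     # generated FTP control cards for the second FTP run

with open(listing) as src, open(cards, "w") as out:
    for line in src:
        tokens = line.split()
        if not tokens:
            continue
        name = tokens[-1]                      # assume the file name is the last token
        if "_XX_YY_" in name:                  # the fixed middle part identifies our file
            out.write(f"GET {name} 'HLQ.MAINFRAM.DATASET'\n")
            break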
Isn't there anything in the Spec?

Sometimes we create complexities where an "out of the box" solution simplifies life considerably.
After the post was updated, I now understand the problem a bit better.
If the name really is required to be that specific, then the other suggested solution (if I understand it) is to have a fixed file name on the server that contains a list of the file names to be transferred.
In fact, the server could create a fixed-name file that is really the JCL to run on the mainframe! This file would include the //SYSIN DD * and the GET commands. The mainframe transfers this file over and submits it as-is to the internal reader, and the job then runs on the mainframe. The last step of this job (created by the server, but run on the mainframe) FTPs an empty file back to the server; in this way the server "knows" that the mainframe has picked up the files.
Alternatively, why does the non-z/OS system need to put the time in the file name? If the mainframe processes the file daily, then the date should be sufficient.
With this change the mainframe can reliably predict the file name for the day, generate the appropriate GET command and run.
With a job scheduler it would be easy to run the upload to the mainframe twice a day. This might address any concerns that are expressed in the desire to include a time in the file's name.
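If the time really is dropped from the name, generating the day's GET command becomes trivial, because the name is fully predictable from the date. Sketched here in Python only to show the idea (on the mainframe it could just as well be a Rexx one-liner or a SORT symbol substitution); the local dataset name is reused from the earlier example.

from datetime import date

today = date.today().strftime("%Y%m%d")
remote = f"{today}_XX_YY_{today}"                  # predictable, date-only remote name
print(f"GET {remote} 'HLQ.MAINFRAM.DATASET'")      # the day's FTP control card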

Run a Rexx step via a background TSO step.
You can then run a LISTCAT to get all the files. You could either write the LISTCAT output to a file and read it in, or trap the output via the ADDRESS command or the OUTTRAP function.
Then use the standard TSO RENAME command.
Alternatively, you could run the Rexx program under background ISPF and use the ISPF equivalents to get the file name.

(1) The real solution to this should be a scheduling tool for mainframe jobs. These tools typically provide date/variable substitution that can take care of naming patterns like the one you described.
(2) Alternatives: REXX and COBOL.
(3) If you would rather not use REXX, here is a brief outline of how you could create the JCL dynamically using COBOL:
Write a COBOL program that reads a "template" JCL.
Using INSPECT ... REPLACING, substitute the placeholders with a string populated with the date of your choice (you could supply this as a simple SYSIN parm too, if you want the COBOL code to be flexible about the date selection).
Now that your formatted JCL is ready, write it to the output stream:
//OUTFILE DD SYSOUT=(,INTRDR)
or, with an explicit output class,
//OUTFILE DD SYSOUT=(A,INTRDR)
Anything written to INTRDR (the internal reader) goes straight to JES, which submits your job!
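The same template-substitution idea, sketched in Python rather than COBOL purely to show the flow; on the mainframe the substitution would be done with INSPECT ... REPLACING as described above, and the output would go to the OUTFILE DD pointed at the internal reader. The placeholder string and file names are made up.

from datetime import date

with open("template.jcl") as f:                # skeleton JCL containing the literal text YYYYMMDD
    skeleton = f.read()

jcl = skeleton.replace("YYYYMMDD", date.today().strftime("%Y%m%d"))

with open("outfile.jcl", "w") as out:          # on z/OS this would be the OUTFILE DD
    out.write(jcl)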
Hope this helps.

Related

Rotate logfiles on an hourly basis by appending date and hour

I want to implement log rotation on Linux. I have a *.trc file where all the logs are written, and I want a new log file to be created every hour.
From some analysis I learned about the logrotate option, where the rotation details for a specific file are configured in the logrotate.conf file.
I would like to know whether there is an option that does not use logrotate: rotating the log files on an hourly basis, by appending the date and hour to the log file name and creating new files based on the current hour.
I'm looking for suggestions on how to implement log rotation using this second option.
Any details on the above would be really helpful.
If you have control over the process that creates the logs, you could simply timestamp the file at the moment of creation. This removes the need to rename the log.
Before you write each line, check the time; if an hour has passed since the current file was created, close it and open a new one with a new timestamp.
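A minimal sketch of that "check the clock before each write" approach, in Python; the file-name pattern and directory are made up for illustration.

import os
import time

class HourlyLog:
    """Writes lines to <prefix>.YYYYMMDD_HH.trc, switching to a new file each hour."""

    def __init__(self, directory=".", prefix="app"):
        self.directory = directory
        self.prefix = prefix
        self.current_hour = None
        self.fh = None

    def _open_for(self, hour_stamp):
        if self.fh:
            self.fh.close()                          # close the file for the previous hour
        path = os.path.join(self.directory, f"{self.prefix}.{hour_stamp}.trc")
        self.fh = open(path, "a")
        self.current_hour = hour_stamp

    def write(self, line):
        hour_stamp = time.strftime("%Y%m%d_%H")      # changes once per hour
        if hour_stamp != self.current_hour:          # hour rolled over: start a new file
            self._open_for(hour_stamp)
        self.fh.write(line + "\n")
        self.fh.flush()

log = HourlyLog()
log.write("something happened")

If the process is already written in Python, the standard library's logging.handlers.TimedRotatingFileHandler(when="H") does essentially the same thing for you.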
If you do not have control over the process, you can pipe its output (stdout, stderr) to multilog, a binary that is part of the daemontools package in most Linux distros.
https://cr.yp.to/daemontools/multilog.html

How to reference the most current Physical Sequential (PS) file in JCL

I want to create a job that takes the latest available file as its input file.
The file name format is: FILE1.TEST.TYYMMDD
Is there any way to identify the latest file, based on the date present in the file name, via JCL?
P.S. GDG versions are not created in the existing process; only a PS file is created.
Thank you
I wanted to create a job where I need to consider the latest file available as input file. File [name] format is as below: FILE1.TEST.TYYMMDD is there any way to identify latest file based on date present in file name via JCL.
No.
You indicate that GDGs are not created in the existing process. GDGs would be the best way to accomplish your goal. Absent GDGs, you must write code.
You could accomplish your goal by writing (C, clist, COBOL, PL/I, Rexx) code using the LMDINIT and LMDLIST ISPF services. Then you would execute your code by running ISPF in batch. Many mainframe shops have a cataloged procedure to execute ISPF in batch.
Agree with @cschneid that there is no platform-provided way to handle this. However, I want to point out that GDGs are the platform way of managing PS files for access in relative form.
Your comment, "GDG versions are not created in existing process. Only PS file is created."
That statement didn't make sense to me. GDGs are not a file type like physical sequential (PS) or partitioned (PO); a GDG is a convention that allows relative references to files created over time, which sounds like what you want. I've only seen GDGs used with PS files.
Putting the date in the file name can have its uses, but to z/OS it is only part of the file name, not metadata that it operates on (like the G0000V00 suffixes in GDGs).

Import data to OMNET++

I am trying to model a network in OMNET++. What I have is a text file (it can also be in Excel format) with node names, lists of interfaces, and interface connections. What I would like to do is write a program (perhaps a plugin) that feeds this file to OMNET++ and automatically creates the .ned and .cc files based on it. The rationale is that the list of nodes/interfaces is long, which makes it difficult to do manually, and a change in the connections would make it difficult to recreate unless it is done automatically. Could you point me to some links/websites/documents so that I can learn how to write a plugin that reads the information and creates the nodes and their connections automatically? Obviously, the node types and characteristics could be modified in the plugin later as necessary.
An example is like:
(some other information there)...
cr1.atl-cr1.hst cr1.atl cr1.hst 2488
cr1.kcy-cr1.wdc cr1.kcy cr1.wdc 2488
cr1.atl-cr2.atl cr1.atl cr2.atl 10000
cr2.atl-cr1.wdc cr1.wdc cr2.atl 2488
...
where the second column is the source node, the third column is the destination node, and the first column is the link (firstNode-secondNode). The 4th column is the capacity/delay or other information of the link.
If you want this to be as flexible as possible, I would recommend writing a small Python script that reads a .csv file and renders .ned files as needed.
You might even consider using a templating engine like Mako. Quoting from its website, Mako is pretty straightforward to use:
from mako.template import Template
print(Template("hello ${data}!").render(data="world"))
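The same idea without a templating engine: a short Python sketch that reads the whitespace-separated link list from the question (link name, source node, destination node, capacity) and writes a .ned file. The module type ("StandardHost"), the gate name ("pppg++"), the datarate unit, and the file names are placeholders you would adapt to your own model.

# reads links.txt (hypothetical name) with lines like:
#   cr1.atl-cr1.hst cr1.atl cr1.hst 2488
# and writes generated.ned with one submodule per node and one connection per link
# (a real model would also need the NED import for whatever module type you pick)

def sanitize(name):
    # "cr1.atl" is not a valid NED identifier, so replace dots and dashes
    return name.replace(".", "_").replace("-", "_")

nodes = set()
links = []
with open("links.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 4:
            continue                      # skip blank or malformed lines
        _link, src, dst, capacity = parts[:4]
        nodes.update([src, dst])
        links.append((src, dst, capacity))

with open("generated.ned", "w") as out:
    out.write("network Generated\n{\n    submodules:\n")
    for n in sorted(nodes):
        out.write(f"        {sanitize(n)}: StandardHost;\n")
    out.write("    connections:\n")
    for src, dst, cap in links:
        out.write(f"        {sanitize(src)}.pppg++ <--> {{ datarate = {cap}Mbps; }} "
                  f"<--> {sanitize(dst)}.pppg++;\n")
    out.write("}\n")

With a templating engine like Mako you would move the .ned text into a template file and pass the nodes and links to render(), which keeps the NED layout readable as the model grows.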

How to call a bash script automatically when directory contents change

My goal is to run a bash script automatically whenever any new file is added to a particular directory or any subdirectory of that particular directory.
Detailed scenario:
I am creating an automated process for file submission between teachers and students, in both directions. The sender uploads a file, which is stored inside the Uploads directory on the LAMP server with a name in the format "name_course-name_filename.pdf". I want some mechanism so that whenever a file is stored inside the Uploads folder, a script is called at that moment and sends the file to the list of receivers.
From the database I can find the list of receivers for that particular course and student.
My only concern is how to call a script automatically, and have it act on each individual file, whenever the contents of the directory change. Cron would do this at intervals, but that is not real-time.
Linux provides a nice mechanism for this purpose called inotify. inotify is mostly available as a C API, but shell utilities have been developed as well. You should use inotifywait from inotify-tools (the package name in Debian) for this. Here is a basic example:
#!/bin/bash
directory="/tmp"   # or whatever directory you are interested in

# -m keeps inotifywait running instead of exiting after the first event;
# -e create limits the reports to file-creation events
inotifywait -m -e create "$directory" |
while read -r folder eventlist eventfile
do
    echo "the following events happened in folder $folder:"
    echo "$eventlist $eventfile"
done
Update:
If the problem gets more complicated, for example if you have to monitor recursive, dynamic directory structures, you should have a look at incron. It is a cron-like daemon which executes scripts on certain events, but the events are file-system events rather than timer events.
'inotifywait' has another option worth noting:
-d --daemon
Same as --monitor, except run in the background logging events to a file
that must be specified by --outfile. Implies --syslog.
For completeness:
-m --monitor
Instead of exiting after receiving a single event, execute indefinitely.
The default behaviour is to exit after the first event occurs.
Within the do-done block of your 'while' statement, you might parse each event report for interesting details and then use 'case-esac' to take action based on each event that you care about.
For something that you plan to rely on in operations, you might also consider replacing the hard-coded '$directory' with some sort of configuration file. Such a file might include the path and file name, the interesting events for that path and file, and a script to run when those events happen.
The script might take the list of events as parameters and then 'case-esac' again.
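As an alternative to a shell 'case-esac' inside the loop, the inotifywait event stream can be piped into a small dispatcher. Here is a rough sketch in Python; it assumes the default output format of 'inotifywait -m' (watched directory, comma-separated event list, file name) and uses a made-up handler for the upload scenario from the question.

#!/usr/bin/env python3
# usage:  inotifywait -m -e create,moved_to /path/to/Uploads | python3 dispatch.py
import os
import sys

def handle_new_upload(path):
    # placeholder: look up the receivers in the database and send the file to them
    print(f"new upload detected: {path}")

for line in sys.stdin:
    parts = line.split()
    if len(parts) < 3:
        continue                                   # ignore lines without a file name
    folder, events, filename = parts[0], parts[1], " ".join(parts[2:])
    if "CREATE" in events or "MOVED_TO" in events:
        handle_new_upload(os.path.join(folder, filename))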
Just one man's ramblins,
~~~ 8d;-Dan

Executing a script file from an Azure blob and writing its results to a file

I'll explain the task I've been asked to do:
I have two containers in Azure, one called "data" and one called "script". In the "data" container there is a txt file with data, and in the "script" container there is a script file.
Now, I need to programmatically (from a WorkerRole) execute the script file with the content of the data file as its parameters (for example: a script that accepts a string s and prints "Hello, s", where s is the string stored in the data file), and save the result of the run into another file, which needs to be stored in another container called "result".
How do I do all of this? I've already uploaded the files and created the blobs programmatically, but I can't figure out how to execute the script file or how to save its result to another file.
Can I please have some help?
Thanks in advance
Here are the steps in pseudo code:
1. Retrieve the script from the blob (using DownloadToStream()).
2. Compile the script (I will leave this to you, as I have no idea what format your script is in).
3. Load the parameters from the blob (same as step 1).
4. Execute the script with those parameters.
If your scripts can be written as lambda expressions, this becomes a lot easier, as you can turn them into Actions.
Edit based on your questions:
DownloadText() is no longer included in Azure Storage 2.0; you only have access to DownloadToStream(). Even if you are using an older version (say 1.7), I would recommend using DownloadToStream() in case you ever upgrade in the future. This will save you from having to refactor your code.
In terms of executing your script, it depends on what type of script it is. If it is C# code, you can use this example: Is it possible to dynamically compile and execute C# code fragments?. If you need to execute a different type of script, you would need to run it using Process.Start, and you can look at this example: http://www.dotnetperls.com/process-start
I do not have much experience with point number 2, but those are the approaches I have heard of and seen used.
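The answer above is written in terms of the .NET storage client (DownloadToStream()). Purely as an illustration of the overall flow, download the script and the data, run the script with the data as a parameter, and upload the captured output to the "result" container, here is a rough sketch using the Python SDK (azure-storage-blob) and subprocess; the blob names, the connection string, and the assumption that the script is a directly runnable Python file are all made up for the example.

from azure.storage.blob import BlobServiceClient
import subprocess

svc = BlobServiceClient.from_connection_string("<storage connection string>")

# 1. pull the script and the data down from their containers
script = svc.get_blob_client(container="script", blob="hello.py").download_blob().readall()
data = svc.get_blob_client(container="data", blob="input.txt").download_blob().readall()

with open("hello.py", "wb") as f:
    f.write(script)

# 2. run the script with the data file's content as its parameter and capture stdout
result = subprocess.run(
    ["python", "hello.py", data.decode("utf-8").strip()],
    capture_output=True,
    text=True,
)

# 3. save the captured output into the "result" container
svc.get_blob_client(container="result", blob="output.txt").upload_blob(
    result.stdout, overwrite=True
)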
