Printing records from a VSAM KSDS shows characters with periods - mainframe

I am trying to view the contents of a VSAM dataset using the following JCL:
//REPRO2 JOB REPROJCL
//IDCAMS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN DD *,SYMBOLS=EXECSYS
 PRINT -
   INDATASET(ZXP.TEST.GAS.PRODPROC) -
   CHAR
/*
However, the data prints with periods in it, for example:
0KEY OF RECORD - 94...-...6-594
94...-...6-594*Y..1ZS0.UGV...==
0KEY OF RECORD - 94...-...4-521
94...-...4-521*Y...Y2..LVJ1Y...
0KEY OF RECORD - 97...-...0-101
How can I view the data without the periods? Does printing the records in hex and then converting to ASCII/EBCDIC work?

You should use the DUMP option on the PRINT statement instead of CHAR. CHAR substitutes a period for every byte that has no printable EBCDIC graphic (packed-decimal and binary fields, for example), which is what you are seeing. DUMP prints each record in both hexadecimal and character form, so you can see the actual binary values alongside the readable text.
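If you want a feel for what CHAR is doing, here is a rough Python sketch (the record bytes are invented and cp037 is just one common EBCDIC code page): any byte without a printable graphic comes out as a period, which is why packed-decimal and binary fields show up as runs of dots.
import string

# Characters PRINT CHAR would show as-is; everything else becomes a period
PRINTABLE = set(string.ascii_letters + string.digits + string.punctuation + ' ')

def char_view(record, codepage='cp037'):
    # Decode the EBCDIC bytes, then mask anything unprintable
    text = record.decode(codepage)
    return ''.join(c if c in PRINTABLE else '.' for c in text)

# '94' in EBCDIC (X'F9F4') followed by the packed-decimal value 12345 (X'12345C')
print(char_view(bytes.fromhex('F9F412345C')))   # prints 94..*  (the X'5C' sign byte happens to be '*')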

Related

Python efficient way to search for a pattern in text file

I need to find a pattern in a text file, which isn't big.
Therefore loading the entire file into RAM isn't a concern for me - as advised here:
I tried to do it in two ways:
with open(inputFile, 'r') as file:
    for line in file.readlines():
        for date in dateList:
            if re.search('{} \d* 1'.format(date), line):
OR
with open(inputFile, 'r') as file:
    contents = file.read()
    for date in dateList:
        if re.search('{} \d* 1'.format(date), contents):
The second one proved to be much faster.
Is there an explanation for this, other than the fact that I am using one less loop with the second approach?
As pointed out in the comments, the two snippets are not equivalent: the second one only looks for the first match of each date in the whole file. Besides this, the first is also more expensive because the (relatively expensive) format call is executed for every date on every line. Storing the regexps and precompiling them should help a lot. Even better: you can generate one regexp that matches all the dates at once, using something like:
regexp = r'({}) \d* 1'.format('|'.join('{}'.format(date) for date in dateList))
with open(inputFile, 'r') as file:
    contents = file.read()
    # Search for the first matching date in dateList
    if re.search(regexp, contents):
Note that you can use findall if you want all of them.
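For instance, a minimal sketch along those lines (dateList and inputFile as in the question; re.escape is just a precaution in case the dates contain metacharacters such as '.'):
import re

# One compiled pattern that matches any of the dates
pattern = re.compile(r'({}) \d* 1'.format('|'.join(map(re.escape, dateList))))

with open(inputFile, 'r') as file:
    contents = file.read()

first_hit = pattern.search(contents)     # first matching date, or None
all_dates = pattern.findall(contents)    # every matching date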

nodejs best way to convert an input file to json format

I have an input file.txt that takes the following format
Order: Order1
from customerA to customerB
orderItemA
orderItemB
Order: Order2
from customerC to customerD
orderItemC
orderItemD
I want to convert this whole input into JSON and then start processing it. At the moment, all I can think of is to read it line by line and then process it, but is there any Node package or an alternative way in which I can read the whole file in one go and convert it to JSON format? I found the following, but it did not work: https://www.npmjs.com/package/plain-text-data-to-json
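The parsing itself is simple enough to sketch. Here is a rough line-by-line version in Python (field names such as 'items' are made up); the same logic ports directly to a Node readline or split('\n') loop:
import json

def parse_orders(text):
    # Group the lines under each "Order:" header into one record
    orders = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('Order:'):
            orders.append({'order': line.split(':', 1)[1].strip(),
                           'from': None, 'to': None, 'items': []})
        elif line.startswith('from '):
            _, src, _, dst = line.split()   # "from customerA to customerB"
            orders[-1]['from'], orders[-1]['to'] = src, dst
        else:
            orders[-1]['items'].append(line)
    return json.dumps(orders, indent=2)

with open('file.txt') as f:
    print(parse_orders(f.read()))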

Can you remove a random string of commas and replace with one for exporting to CSV

I am using Netmiko to extract some data from Cisco switches and routers, and I would like to put that data into a spreadsheet. For example, show cdp neighbour gives me a string with random white space in it:
Port Name Status Vlan Duplex Speed Type
Et0/0 connected 1 auto auto unknown
Et0/1 connected 1 auto auto unknown
Et0/2 connected routed auto auto unknown
Et0/3 connected 1 auto auto unknown
I thought I could remove the white space and replace it with a comma, but I get this:
Port,,,,,,Name,,,,,,,,,,,,,,,Status,,,,,,,Vlan,,,,,,,Duplex,,Speed,Type
Et0/0,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,1,,,,,,,,,,,,auto,,,auto,unknown
Et0/1,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,1,,,,,,,,,,,,auto,,,auto,unknown
Et0/2,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,routed,,,,,,,auto,,,auto,unknown
Et0/3,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,1,,,,,,,,,,,,auto,,,auto,unknown
Is there any way of extracting data like the above? Ideally it would go straight into a structured table in Excel (cells and rows), or alternatively, how can I replace the repeating commas with just one, so I can export to CSV and then import it into Excel? I may be the most long-winded person you have ever seen, because I am so new to programming :)
I'd go with regex matches, which are more flexible; you can adapt this to your needs. I put the data in a file for testing, but you could process one line at a time instead.
Here's the file (called mydata.txt)
Et0/0,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,1,,,,,,,,,,,,auto,,,auto,unknown
Et0/1,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,1,,,,,,,,,,,,auto,,,auto,unknown
Et0/2,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,routed,,,,,,,auto,,,auto,unknown
Et0/3,,,,,,,,,,,,,,,,,,,,,,,,connected,,,,1,,,,,,,,,,,,auto,,,auto,unknown
Here's how to read it and write the result to a csv file (mydata.csv)
import re
_re = re.compile('[^,]+')   # one or more characters that are not commas
newfile = open(r'mydata.csv', 'w')
with open(r'mydata.txt') as data:
    for line in data.readlines():
        newfile.write(','.join(_re.findall(line.strip())) + '\n')
newfile.close()
And here is the output
Et0/0,connected,1,auto,auto,unknown
Et0/1,connected,1,auto,auto,unknown
Et0/2,connected,routed,auto,auto,unknown
Et0/3,connected,1,auto,auto,unknown
Explanation:
The re library allows the use of regular expressions for parsing text, so the first line imports it.
The second line compiles the regular expression that matches anything that is not a comma; it only specifies the pattern and doesn't actually do any extraction yet.
The third line opens the output file, with 'w' specifying that we can write to it. The with statement opens the input file, which is referenced by the name data.
The for loop reads each line of the input file one at a time.
The write line is an all-at-once operation: it strips the trailing newline, separates out the non-comma parts of the line, joins them back together separated by single commas, and writes the resulting string to the output file.
The last line closes the output file.
I hope I didn't misunderstand you. To turn the repeating commas into one single comma, just run this code with your string s:
while ",," in s:
    s = s.replace(",,", ",")
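If you already have re imported, a regular expression does the same collapse in a single pass, for example:
import re
s = "Et0/0,,,,,,,,connected,,,,1"
s = re.sub(',{2,}', ',', s)   # -> "Et0/0,connected,1"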

Automated appending of csv files in linux with timestamp and id

I am trying to append data from File1 to File2 (both CSVs) in Ubuntu Linux.
The data is to be appended every minute, together with a timestamp and an auto-incrementing id number (to be used as the primary key in a MySQL DB).
I have set up a crontab job:
* * * * * cat File1>>File2
It works perfectly; however, I am a bit stuck on adding the two new columns to File2 that are to be auto-populated.
I am a bit of a novice at Linux and would appreciate some help.
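One way to get the two extra columns is to have cron run a small script instead of cat. Here is a rough Python sketch (the paths and the way the next id is derived are assumptions; in practice the primary key is better left to MySQL's AUTO_INCREMENT):
#!/usr/bin/env python3
# Append File1 to File2, prefixing each row with an id and a timestamp
import csv, datetime, pathlib

src = pathlib.Path('/home/user/File1')   # hypothetical paths
dst = pathlib.Path('/home/user/File2')

# Next id = number of rows already in File2 + 1 (simplistic)
next_id = sum(1 for _ in dst.open()) + 1 if dst.exists() else 1
stamp = datetime.datetime.now().isoformat(sep=' ', timespec='seconds')

with src.open(newline='') as fin, dst.open('a', newline='') as fout:
    writer = csv.writer(fout)
    for row in csv.reader(fin):
        writer.writerow([next_id, stamp] + row)
        next_id += 1
The crontab entry would then be something like * * * * * /usr/bin/python3 /home/user/append.py.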

Azure Machine learning - Strip top X rows from dataset

I have a plain-text CSV file which I am trying to read in Azure ML Studio - the file format is pretty much like this:
Geolife trajectory
WGS 84
Altitude is in Feet
Reserved 3
0,2,255,My Track,0,0,2,8421376
0
39.984702,116.318417,0,492,39744.1201851852,2008-10-23,02:53:04
39.984683,116.31845,0,492,39744.1202546296,2008-10-23,02:53:10
39.984686,116.318417,0,492,39744.1203125,2008-10-23,02:53:15
39.984688,116.318385,0,492,39744.1203703704,2008-10-23,02:53:20
39.984655,116.318263,0,492,39744.1204282407,2008-10-23,02:53:25
39.984611,116.318026,0,493,39744.1204861111,2008-10-23,02:53:30
The real data starts from line 7. How could I strip the header off? These files need to be downloaded on the fly, so I would rather not strip the header out with a separate piece of code.
What is your source location - SQL, Blob, or HTTP?
If SQL, then you can use a query to skip the first 6 rows.
If Blob/HTTP, I would suggest reading the file as single-column TSV, then using a simple R/Python script to drop the first 6 rows and convert it to CSV.
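For the Blob/HTTP case, the Python step could look something like this sketch (the column names and the file name are only illustrative):
import io
import pandas as pd

def strip_header(raw_text, header_lines=6):
    # Drop the fixed-size header, then parse the remainder as CSV
    body = '\n'.join(raw_text.splitlines()[header_lines:])
    cols = ['lat', 'lon', 'flag', 'alt_ft', 'days', 'date', 'time']
    return pd.read_csv(io.StringIO(body), names=cols)

with open('trajectory.txt') as f:
    df = strip_header(f.read())
print(df.head())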
