Is there a way using REXX to edit a ps dataset and insert a string after a particular line? - mainframe

I am writing a REXX program that will update a PS dataset. I can edit a particular line using my REXX code, but I also need to insert a particular string after a particular line.
For example: my PS dataset has 100 lines. I want to insert the text "ABCDE" after the 44th line (as the new 45th line), which will increase the file's total to 101 lines. The remaining lines should remain unchanged. Is this possible using REXX?

Independent of REXX, you effectively need to read the old dataset, write it out to a new file, add your new record (string) to the output file at the insertion point, and then write the rest. There is no way to "insert" a record into a Physical Sequential (PS) dataset in place. At the end you would delete the old dataset and rename the newly created file to the old name.
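For illustration, here is a minimal sketch of that copy-with-insert pattern, shown in Python only for brevity (in REXX the same logic would use EXECIO DISKR against the old dataset and EXECIO DISKW against the new one); the file names are hypothetical:

import os

# Copy every record, inserting the new one after record 44.
with open("old.data", "r") as src, open("new.data", "w") as dst:
    for lineno, line in enumerate(src, start=1):
        dst.write(line)
        if lineno == 44:          # after the 44th record...
            dst.write("ABCDE\n")  # ...the new text becomes record 45

# Finally, replace the old file with the new one
# (on z/OS this would be the delete-and-rename step).
os.replace("new.data", "old.data")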
Another option would be to use a generation data group (GDG) and read the current generation (0) while creating the new one (+1) as the output. This way you are still referring to the same dataset name for others to reference.

What @Hogstrom suggests is a good solution to the problem you describe. In the interest of completeness, here is a solution that may be necessary under extreme circumstances.
Create an edit macro...
/* REXX - ISPF edit macro: insert the text 'ABCDE' after line 44 */
ADDRESS ISREDIT 'MACRO NOPROCESS'
aLine = 'ABCDE'
ADDRESS ISREDIT 'LINE_AFTER 44 = DATALINE (ALINE)'
ADDRESS ISREDIT 'SAVE'  /* write the change back to the dataset */
ADDRESS ISREDIT 'END'   /* end the edit session (needed when run in batch) */
...and run ISPF edit in batch, executing this macro.
The JCL to run ISPF in batch is shop-specific, but many shops have created a cataloged procedure to do so.
If you are willing to copy your dataset to the z/OS UNIX file system, you could also use sed or awk to make your changes.
I'm not recommending any of this; I'm just pointing out that it can be done if @Hogstrom's solution won't work for you for some reason.

Related

Edit Input CSV file (or copy of it) as each row processed

Long story short: after a crash course in Python/BeautifulSoup, I managed to create a script that takes an input text file containing a list of URLs (one per line), scrapes each URL, and writes the output to a database. There are some cases where I want an error to exit the script (both trapped and unexpected errors), and since the list of URLs to scrape is pretty large, it would be handy if I could edit the input text file (or create a copy and edit that) to remove each URL as it is successfully processed. The idea is that if the script exits (by trap or crash), I'd have a list of the URLs left to be processed. Is something like this possible? I can find code samples that edit a text file, but I'm stuck on how to take out just the row that was processed.
I finally came across the post here that achieves this, though I'm not positive it's the most efficient way, since it reads and rewrites the entire file each time; that may be the best that can be done in Python. In my case the file is in the 1200-line range, so it easily fits into memory.
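For reference, a minimal sketch of that read-all/rewrite approach; the file name urls.txt and the scrape function are stand-ins for the real script:

import urllib.request  # stand-in; the real script uses BeautifulSoup

def scrape(url):
    # Placeholder for the real scrape-and-write-to-database logic.
    urllib.request.urlopen(url).read()

def remove_processed(url, path="urls.txt"):
    # Read every remaining URL, drop the one just processed, and rewrite
    # the whole file (this is the read-and-save-each-time cost noted above).
    with open(path) as f:
        remaining = [line for line in f if line.strip() != url]
    with open(path, "w") as f:
        f.writelines(remaining)

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    scrape(url)
    remove_processed(url)  # after a crash, urls.txt holds only unprocessed URLs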

How to delete null line in file using sqlldr, ctl

How can I delete null (blank) lines when loading a file using sqlldr and a control file?
I also want to remove the last two lines of the file: the trailing one or two lines are null, and I don't know the last line number in advance.
You need to either pre-process the file and remove blank lines before running sqlldr via a wrapper script, or, more commonly, just load all rows from the file into a staging table and then call a PL/SQL script to load them from there into the main table.
Pre-processing alters the main file, so it is usually not a good idea unless you make an archive copy first.
Using a staging table is more common, because that way all rows from the file are available and you can select the rows you want, transforming and validating the data on the way into the main table.
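If you do go the pre-processing route, here is a minimal sketch in Python (file names are hypothetical; remember to make the archive copy first):

# Strip null (blank) lines, including the trailing ones,
# before handing the file to sqlldr.
with open("data.dat") as src:
    rows = [line for line in src if line.strip()]

with open("data_clean.dat", "w") as dst:
    dst.writelines(rows)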

How to remove the nth occurrence of a substring from each line on four 100GB files

I have 4 100GB csv files where two fields need to be concatenated. Luckily the two fields are next to each other.
My thought is to remove the 41st occurrence of "," from each line; then my two fields will be properly united and ready to be uploaded to the analytical tool that I use.
The development machine is a Windows 10 machine with 4 x 3.6 GHz cores and 64 GB RAM, and I push the file to a server on a CentOS 7 system with 40 x 2.4 GHz cores and 512 GB RAM. I have sudo access on the server and can technically change the file there if someone has a solution that depends on Linux tools. The idea is to accomplish the task in the fastest/easiest way possible. I have to repeat this task monthly and would be ecstatic to automate it.
My original way of accomplishing this was to load the csv to MySQL, concat the fields and remove the old fields. Export the table as a csv again and push to the server. This takes two days and is laborious.
Right now I'm torn between learning to use sed or using something I'm more familiar with, like node.js, to stream the files line by line into a new file and then push those to the server.
If you recommend using sed, I've read here and here but don't know how to remove the nth occurrence from each line.
Edit: Cyrus asked for a sample input/output.
Input file formatted thusly:
"field1","field2",".........","field41","field42","......
Output file formatted like so:
"field1","field2",".........","field41field42","......
If you want to remove the 41st occurrence of "," (the quoted-field separator) from each line, you can try:
sed -i 's/","//41' file

How to manually edit the key of a KSDS VSAM file?

I have a KSDS file. I want to change the key of the file for testing purposes, but I am unable to edit the key in File-AID. Is there any way to do that?
I have searched multiple forums but have been unable to find an answer.
- IDCAMS REPRO the KSDS to a flat file.
- Edit the flat file with ISPF Edit.
- Use your shop's SORT utility to ensure the edited file is in key order.
- IDCAMS REPRO the sorted file back into a VSAM KSDS.
This method does not depend on third-party tools; not every shop has File-AID.
My recollection is that File-AID doesn't allow updating keys; you have to insert a new record with the new key and delete the old record. Again, my recollection may be off, but I think you can do that trivially in File-AID interactively. If you want to do it in batch, the other suggestions to unload from the KSDS, alter, then reload make sense.
You can change it with the following procedure:
- Copy your KSDS dataset to an ESDS dataset in File-AID.
- Edit the former key portion in the ESDS copy; it is no longer a protected key there.
- Copy the edited ESDS file to another KSDS, allocating it with the matching key length and index.
This has worked for me. Please suggest if there is a better approach.
Edit
An alternate method in File-AID:
- Open the KSDS file in edit mode in File-AID.
- Use the repeat command R on the record you want to edit, or RR on a block of records.
- On the newly created duplicate record(s), edit the key area as you wish.
- After editing the new record(s), delete the original record(s).
- Use the SORT command on the command line to re-sort the keys. (This avoids the key-sequence errors you would otherwise have to work around by copying to a PS file or ESDS file.)
- Use the SAVE command to save the edited VSAM.

Writing updated data to an existing file in Python

The data file I am working with takes the format:
6345 Alfonso Chavez 98745.35
2315 Terry Kulakowski 234.0
4455 Yu Chen 78000.0
What I am trying to do is replace the balance (the last item in each line) with an updated balance that I have generated in my code. I'm not sure how to do this with an existing file without wiping the entire thing first, which is obviously not what I want. I was thinking of a for loop to iterate over the lines and split each into separate list elements, but that would update every user's balance instead of the specific person's. Any help is appreciated.
If this is a text file, there is no great way of doing this; in general you cannot save a change to a text file without rewriting the whole file. Instead, focus on the fact that you need O(n) time to loop through the entire file looking for the specific person anyway.
Having said all that, the Python standard-library module fileinput seems like a good way to do this. You can set inplace=True to make it seem like you are changing just that single line in place.
But this is still O(n). It's just secretly rewriting the whole file for you behind your back.
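A minimal sketch of the fileinput approach (the account number, new balance, file name, and single-space layout are assumptions based on the sample data):

import fileinput

ACCOUNT = "2315"        # hypothetical: the account whose balance changed
NEW_BALANCE = "1500.0"  # hypothetical: the updated balance from your code

# inplace=True redirects print() back into the file, so every line we
# print becomes the new file content; the whole file is rewritten.
for line in fileinput.input("accounts.txt", inplace=True):
    parts = line.split()
    if parts and parts[0] == ACCOUNT:
        parts[-1] = NEW_BALANCE  # replace the trailing balance field
    print(" ".join(parts))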
Some other solutions have also been discussed here previously.
