How to read RPGLE source code into a variable in another program - rpgle

How can I read RPG source code into a variable in another RPG program? This is so I can analyze the code to edit it.

If your source member is in a source physical file (FILE/MEMBER), you can open and read it with SQL. You have to create an alias first, because SQL can't select directly from a specific member of a multi-member file.
CREATE ALIAS lib/youralias FOR lib/filesource (sourcemember);
Then use SQLRPGLE with a cursor to read it line by line:
SELECT * FROM lib/youralias;
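For example, a minimal SQLRPGLE sketch (the cursor and variable names are just examples) could look like this:
**free
// Read each line of the aliased source member into a host variable.
// SRCDTA is the source-data column of a source physical file; its
// length depends on the file's record length (100 here).
dcl-s srcLine char(100);

exec sql declare srcCur cursor for
         select srcdta from lib/youralias;
exec sql open srcCur;
exec sql fetch srcCur into :srcLine;
dow sqlcode = 0;
   // ... analyze or collect srcLine here ...
   exec sql fetch srcCur into :srcLine;
enddo;
exec sql close srcCur;
*inlr = *on;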
If your source is in the IFS, you can also read it with SQL and a cursor:
SELECT * FROM TABLE(QSYS2.IFS_READ('/home/dir/yoursource.rpg'))

If you don't want to use SQL, you can also read a source file like any other database file in RPG. To read a specific member, you can use an override (OVRDBF), or you can use the EXTMBR() keyword on the F-spec, as in the sketch below.
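A minimal free-format sketch of that approach (file and member names are examples, and the source file is assumed to be on the library list at compile time):
**free
// RENAME is needed because a source file's record format usually has
// the same name as the file itself.
dcl-f qrpglesrc usage(*input) extmbr('MYMEMBER') rename(qrpglesrc:srcrec);
dcl-s line char(100);

read srcrec;
dow not %eof(qrpglesrc);
   line = srcdta;          // SRCDTA holds the text of the source line
   // ... analyze the line here ...
   read srcrec;
enddo;
*inlr = *on;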
Here is a link to the docs

Related

Copy CSV File with Multiline Attribute with Azure Synapse Pipeline

I have a CSV File in the Following format which want to copy from an external share to my datalake:
Test; Text
"1"; "This is a text
which goes on on a second line
and on on a third line"
"2"; "Another Test"
I now want to load it with a Copy Data task in an Azure Synapse pipeline. The result is the following:
Test; Text
"1";" \"This is a text"
"which goes on on a second line";
"and on on a third line\"";
"2";" \"Another Test\""
So, as you can see, it is not handling the multi-line text correctly. I also do not see an option to handle multiline text within a Copy Data task. Unfortunately, I'm not able to use a Data Flow task, because it does not allow running on the external Azure runtime that I'm forced to use for security reasons.
Of course, I'm not talking about just this single test file; I have thousands of such files.
My settings for the CSV file look as follows:
CSV Connector Settings
Can someone tell me how to handle this kind of multiline data correctly?
Do I have any other options within Synapse (apart from Data Flows)?
Thanks a lot for your help.
Well, it turns out this is not possible with a CSV dataset.
The pragmatic solution is to use binary datasets instead to transfer the CSV files, and only load and transform them later on with a Python notebook in Synapse; a minimal sketch of that notebook step follows below.
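For illustration, assuming pandas is available in the Synapse notebook and using an example path, such a step could look like this (pandas handles quoted fields that span multiple lines, which the Copy Data task could not):
import pandas as pd

# Example path to a file that was transferred via the binary dataset.
df = pd.read_csv(
    "/synfs/workspace/landing/test.csv",
    sep=";",
    quotechar='"',
    skipinitialspace=True,   # the sample data has a space after each ';'
    engine="python",
)
print(df.head())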
You can achieve this in Azure Data Factory by iterating through all lines and checking for the delimiter in each line, and then using string manipulation functions with Set Variable activities to convert the multi-line data to a single line.
Look at the following example. I have a Set Variable activity with an empty value (taken from a parameter) for the req variable.
In a Lookup activity, create a dataset with the following configuration for the multiline CSV:
In the ForEach activity, I iterate over each row by setting the items value to @range(0,sub(activity('Lookup1').output.count,1)). Inside the ForEach, I have an If activity with the following condition:
@contains(activity('Lookup1').output.value[item()]['Prop_0'],';')
If this is true, then I concatenate the current row onto the req variable using 2 Set Variable activities.
temp: @if(contains(activity('Lookup1').output.value[add(item(),1)]['Prop_0'],';'),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],decodeUriComponent('%0D%0A')),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' '))
actual (req variable): @variables('val')
For false, I have handled the concatenation in the following way:
temp1: @concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' ')
actual1 (req variable): @variables('val2')
Now, I have used a final variable to handle the last line of the file. I have used the following dynamic content for that:
@if(contains(last(activity('Lookup1').output.value)['Prop_0'],';'),concat(variables('req'),decodeUriComponent('%0D%0A'),last(activity('Lookup1').output.value)['Prop_0']),concat(variables('req'),last(activity('Lookup1').output.value)['Prop_0']))
Finally, I have used a Copy Data activity with a sample source file containing 1 column and 1 row (this is used to copy our actual data).
Now, configure the source file as shown below:
Create an additional column whose value is the final variable:
Create a sink with the following configuration and select mapping only for the column created above:
When I run the pipeline, I get the data as required. The following is an output image for reference.

Azure Data Factory removing spaces from column names of csv file

I'm a bit new to Azure Data Factory, so apologies if I'm missing anything obvious. I've done several searches and I can't find anything that quite fits.
So the situation is that we have an existing pipeline that takes the path to a CSV file and passes this in as a delimited dataset. As a sink it uses a Parquet dataset. This is a generic process that we can pass any delimited file into, and it will output it as Parquet.
This has been working well, but now we have started receiving files with spaces and special characters in the header, which causes the output to Parquet to fail. Unfortunately, we don't have control over the format of the files we receive, so I can't handle this at the source.
What I would like to do is, on ingestion of the file, replace any spaces and other special characters in the header with an underscore. If I were doing this on-premises, I could quickly create a PowerShell script to do it. I had thought about creating a custom task in ADF to call a PowerShell script to do this in the blob storage, but that seems more complicated than it should be. Is there something else I can do to get this process working while keeping it generic?
As @Joel Cochran mentioned, you can use the expression below in a Select transformation to replace spaces and special characters in the header.
regexReplace($$,'[^a-zA-Z]','_')
Source:
In the Select transformation, remove the auto mappings and add a new rule-based mapping that uses this expression. Note that the pattern '[^a-zA-Z]' also replaces digits; use '[^a-zA-Z0-9]' if digits should be kept.
preview:
You cannot change the output filename directly in the Copy activity, assuming you are using this activity.
The workaround is to use a parameter for the output filename that you can clean up.
You can use the Get Metadata activity to get all filenames from the source csv files.
Then loop over these files with a ForEach activity.
Within the ForEach activity you can set the output filename to the new, cleaned value.
The function could look like this:
@replace(item().name, ' ', '_')
More information on the replace function

How to read values from property file using innosetup?

I'm trying to read values from a demo.properties file using Inno Setup. Here is my demo.properties file:
hibernate.connection.username=James
hibernate.connection.password=Jack
hibernate.connection.url=jdbc:jtds:sqlserver://8080/clientDB
hibernate.connection.driver_class=net.sourceforge.jtds.jdbc.Driver
I want to read this file and show the values James, Jack, and 8080 in the user interface.
Can anybody guide me on how to get only those particular values?
Here's a guide (a short sketch combining these steps follows below):
Use Pascal Script in the [Code] section
Work with LoadStringsFromFile() (http://www.jrsoftware.org/ishelp/index.php?topic=isxfunc_loadstringsfromfile) to get the file content.
Iterate over the lines using a for loop
Use Pos() (http://www.jrsoftware.org/ishelp/topic_isxfunc_pos.htm) to find the position of the = sign
Use Copy() (http://www.jrsoftware.org/ishelp/topic_isxfunc_copy.htm) to extract the key and value from the current line using the position of =
In case you need more functions take a look here: http://www.jrsoftware.org/ishelp/index.php?topic=scriptfunctions
A problem close to yours has been solved here: Find and read specific string from config file with Pascal Script in Inno Setup
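Putting those steps together, a minimal [Code] section sketch might look like this (the function name and comparison logic are just an example, not a finished implementation):

[Code]
function GetPropertyValue(const FileName, Key: string): string;
var
  Lines: TArrayOfString;
  I, P: Integer;
begin
  Result := '';
  if LoadStringsFromFile(FileName, Lines) then
    for I := 0 to GetArrayLength(Lines) - 1 do
    begin
      P := Pos('=', Lines[I]);
      { compare the part before '=' with the requested key }
      if (P > 0) and (Copy(Lines[I], 1, P - 1) = Key) then
      begin
        Result := Copy(Lines[I], P + 1, Length(Lines[I]) - P);
        Exit;
      end;
    end;
end;

You could then call, for example, GetPropertyValue(ExpandConstant('{src}\demo.properties'), 'hibernate.connection.username') to get James, and parse the port out of the returned URL value yourself.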

Passing data into perl script from command line

I have a Perl script that creates a report based on an XML definition. Currently these definitions all exist as .xml files.
So I have the script run-report.pl, which can take a path to a definition file and create the report.
Now I want to create run-reports-from-db.pl, which will generate the report definition based on some database entries. I don't want to create temp files to pass to run-report.pl; I would just like to pass in the definition somehow.
So instead of saying:
run-report.pl -def=./path/to/def.xml
I want to be able to say:
run-report.pl --stream
And have the report definition available in <STDIN>
I am sure there is a pretty trivial way to do this?
If I understand your question correctly, all you need is one | (pipe).
./generate-xml-from-db.pl | ./run-report.pl --stream
Anything the first process in the pipeline prints to stdout will appear in the second process's stdin.
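Inside run-report.pl, the --stream branch could then slurp the whole definition from standard input into a variable. A minimal sketch (option handling is assumed to happen elsewhere):
#!/usr/bin/perl
use strict;
use warnings;

# Slurp the entire XML definition from STDIN into one scalar;
# local $/ turns off the line separator for this read.
my $definition = do { local $/; <STDIN> };
# ... build the report from $definition ...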
As long as you read from STDIN, you have it available. Notice what happens when you take the code below, name it something like echo.pl, run it at the command line, and paste reams of text.
#!/usr/bin/perl -w
use 5.010;
use strict;
use warnings;
while (<>) {
    say;
}
<> is the Perl shorthand for "read from the files named on the command line, or from STDIN if none are given".
As long as the method you're using to launch the process has a way to get hold of its standard input and output, you can just write to that handle. You have to use the means that are available to you. In Java, for example, you'd have to get the input stream of the process; in a batch command you have to pipe it; at a GUI terminal you can cut and paste.

How to Create new Excel file out from seleniumRC result?

I'm creating a test case using Selenium RC.
The general flow of the process is to get data from an Excel file.
Then run Selenium RC.
To get the result I place a printout command, so all output is visible in the console (using Eclipse).
What I need to do next is to store all this data in a new Excel file. Can anyone help me with how to write my output to an Excel file?
Store the results in variables and then write them into Excel using the Java Excel API.
Use the JExcel API. This link may be useful to you:
http://r4r.co.in/java/apis/jexcel/
A simple way I would suggest is to append all the printout statements to a storage variable such as a StringBuffer (Java), with a delimiter like the | symbol at the end of each line, and then write this information to a text file (splitting it back into lines with the help of the | symbols).
If you want to store this information in Excel, you will have to use the Apache POI project to write the data into Excel sheets (http://poi.apache.org/); a short sketch of that approach follows below.
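For example, a minimal Apache POI sketch (class, sheet, and file names are just examples; poi and poi-ooxml are assumed to be on the classpath):
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ResultWriter {
    public static void main(String[] args) throws Exception {
        // Values captured from the test run instead of System.out printouts.
        String[] results = { "login ok", "search ok", "logout ok" };

        try (Workbook workbook = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream("results.xlsx")) {
            Sheet sheet = workbook.createSheet("Results");
            for (int i = 0; i < results.length; i++) {
                Row row = sheet.createRow(i);               // one row per result
                row.createCell(0).setCellValue(results[i]);
            }
            workbook.write(out);                            // save the workbook
        }
    }
}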
