Record count and Filename from VSAM Source EBCDIC encoding in Informatica PowerCenter - mainframe

I need to get the filename and then count the number of records in a VSAM source file in Informatica PowerCenter 10. I have already done something similar with other flat files, where I can get the file name from the source instance by checking the "CurrentlyProcessedFileName" option. For a VSAM source, however, there is no such option.
Please see the screenshots below for reference.
UPDATE (ROW COUNT):

Related

Getting an error when storing the filename in the Azure Dataflow?

I am getting an Excel file in the data lake and exporting it into an Azure SQL database using a Dataflow in ADF.
I need to store the filename as a column in my data. I am following the steps below:
I give the column name "filename" in the "Column to store file name" section.
I can see all of the columns, including my new "filename" column, in the projection and inspect sections. However, when I try to preview the data, I get the below error.
I am not sure what the issue is. I changed the column name, but with no success. Could anyone advise what the issue is?
As the error message states, there must already be a column named Filename in the source file schema.
Your settings look correct. If you are still facing the same error after changing the column name, try refreshing your dataset, or remove and re-add the source dataset; this refreshes your source schema to the latest version.
If the file sits inside a subfolder, the filename column extracts the full file path, which includes the subfolders and the filename (e.g. /subfolder/filename).
In this case, extract only the filename from the filename column using a derived column transformation.
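For illustration, here is the equivalent extraction logic as a small Python sketch (the path value and variable names are hypothetical; in ADF itself this would be a derived column expression rather than Python):

    import os

    # Hypothetical full path as captured by "Column to store file name",
    # e.g. a file sitting inside a subfolder
    full_path = "/subfolder/data.xlsx"

    # Keep only the last path segment, i.e. the bare filename
    file_name = os.path.basename(full_path)
    print(file_name)  # -> data.xlsx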

Split the file by transaction date through ADF

Using ADF, we unloaded data from an on-premises SQL Server to a data lake folder as a single Parquet file for the full load.
For the delta load, we keep each day's data in the current day's folder using a yyyy/mm/dd structure going forward.
But I want the full-load file to be separated into folders by transaction day as well. For example: the full-load file holds 3 years of data; I want it split by transaction day into separate folders, like 2019/01/01, 2019/01/02, ..., 2020/01/01, instead of a single file.
Is there a way to achieve this in ADF, or can we get this folder structure for the full load while unloading itself?
Hi @Kumar AK, after a period of exploration I found the answer. I think we need to use an Azure data flow to achieve that.
My source file is a csv file, which contains a transaction_date column.
Set this csv as the source in the data flow.
In the DerivedColumn1 activity, generate a new column FolderName from the transaction_date column. FolderName will be used as the folder structure.
In the sink1 activity, choose "Name file as column data" for the File name option, and select the FolderName column as the Column data.
That's all. The rows of the csv file will be split into files in different folders.
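If it helps to see the idea end to end, here is a minimal Python/pandas sketch of the same split, assuming a full_load.csv with a transaction_date column (the column name is from my example; the file name and everything else is hypothetical):

    import os
    import pandas as pd

    # Hypothetical full-load extract with a transaction_date column
    df = pd.read_csv("full_load.csv", parse_dates=["transaction_date"])

    # Equivalent of DerivedColumn1: a yyyy/mm/dd folder name per row
    df["FolderName"] = df["transaction_date"].dt.strftime("%Y/%m/%d")

    # Equivalent of the sink writing one file per FolderName value
    for folder, part in df.groupby("FolderName"):
        os.makedirs(folder, exist_ok=True)
        part.drop(columns="FolderName").to_csv(
            os.path.join(folder, "data.csv"), index=False)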

Power BI getting new data from a file: how could this be done better?

I have a Power BI file in my local folder.
My inputs are Excel files.
When I have a new input with new data, in the form of a new Excel file, I only care about this new data; I do not care about any data in the past file.
Currently, when I receive a new Excel input:
I change the name of the Excel file, giving it the same name as the file it will replace.
I delete the old Excel file in the folder where it is stored.
I replace it with the new Excel file, bearing the name of the old one, in the folder of the old Excel file.
To provide a concrete example, here is my folder with two files and my power bi:
If I have a new file corresponding to "Rabbit employed by Batman", I delete that Excel file from the folder.
I change the name of my new Excel input, calling it "Rabbit employed by Batman".
I place it in the folder with the Power BI file.
I feel that this might not be very clever, and I wonder if there is a better way to proceed.
In Power Query, use "Excel from folder", sort by date, and choose the latest one. If you have more than one file with the same name, create an index, rank over filename sorted descending, and use number 1 as your binary source.
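For comparison, the same "keep only the newest file" logic looks like this as a Python sketch (the folder path and extension are hypothetical; in Power BI you would express this in M inside Power Query):

    from pathlib import Path

    # Hypothetical folder holding the Excel inputs next to the Power BI file
    folder = Path("C:/reports/inputs")

    # "Sort by date and choose the latest one": newest modification time first
    files = sorted(folder.glob("*.xlsx"),
                   key=lambda f: f.stat().st_mtime, reverse=True)

    # "Use the number 1 as your binary source": keep only the newest file
    latest = files[0]
    print(latest)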

Excel: Getting data from a daily replaced excel file

I have a location in OneDrive for Business where an .xls file is replaced daily via Flow automation. The data structure and columns stay the same. What I want is to create an Excel Online workbook that gets its data from that daily-replaced .xls. I tried once, but as soon as the source file was replaced and I clicked "Refresh all" under Data, the operation ended in an error. Any ideas?
You can use Power Query in that scenario. Depending on the exact circumstances, you could:
Get data from Folder
Filter the folder to show only files that contain '.xls' in the file name
If after that you still have more than one file, sort them by date modified and keep only the newest one.
Then process that one remaining file.
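Outside of Power Query, the same filter-and-keep-newest steps can be sketched in Python like this (the OneDrive path is hypothetical):

    from pathlib import Path

    # Hypothetical OneDrive for Business folder receiving the daily .xls drop
    folder = Path("C:/Users/me/OneDrive - Contoso/daily")

    # Keep only files that contain '.xls' in the file name
    candidates = [f for f in folder.iterdir() if ".xls" in f.name]

    # If more than one remains, keep the one modified most recently
    newest = max(candidates, key=lambda f: f.stat().st_mtime)
    print(newest)  # the single file you would then process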

Compare the data between two variables and generate a report into a text file

My requirement is to transfer data from a source database to a target database.
Job 1: source database: Oracle. Targets:
table1 -> target1.lst
table2 -> table2.lst
table3 -> table3.lst
This part I have done successfully.
Job 2: count the number of records in the source database and in the target database.
This part is also done successfully.
Job 3: ........... (this is the only part I am lacking)
I kept the record counts for source and target in variables as well as in text files.
Now tell me how to compare the values in a variable or in a text file (these values are found using select count(*) from table and wc -l $filename), so that I can tell whether the loading process completed successfully. I also want to maintain a log file to generate a report in a text file.
It's not very clear where these text files come from and why you want to compare against them. Why don't you store the counts in the database in the first place (instead of, or in addition to, writing them to a file)?
"text file or a variable"
What variable? In Oracle PL/SQL you can compare variables using =, !=, is null, is not null, etc. In any other programming language there are comparison operators too.
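As a concrete illustration, here is a minimal Python sketch that compares the two counts and appends the result to a log file, assuming each count was written as the first token of a line in a text file (all file names here are hypothetical):

    import datetime

    def read_count(path):
        # Works for both `select count(*)` spool output and `wc -l $filename`
        # output, since the count is the first whitespace-separated token
        with open(path) as f:
            return int(f.read().split()[0])

    src = read_count("source_count.txt")
    tgt = read_count("target_count.txt")
    status = "SUCCESS" if src == tgt else "MISMATCH"

    # Append one report line per run to the log file
    with open("load_report.log", "a") as log:
        log.write(f"{datetime.datetime.now():%Y-%m-%d %H:%M:%S} "
                  f"source={src} target={tgt} status={status}\n")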
