Logic Apps (from Azure Data Lake to SharePoint)

I need to copy a CSV file from an Azure Data Lake blob container to a SharePoint document library.
I've tried to build an Azure Logic App, but I can't select columns from a variable to create the CSV. Do you have an idea how to select columns, or another way to copy a CSV file from ADLS to SharePoint?

Since I'm not clear on the details of your requirement, I can only offer suggestions based on your description. You mentioned selecting columns from a variable to create a CSV, so please refer to my sample of selecting columns from an array variable to create a CSV.
In my logic app, I have a variable that stores an array, as shown in the screenshot below:
If I want to create a CSV with only the name column and drop the mail column, we can use a "Select" action to pick the name column, as shown in the screenshot below.
Then use the "Create CSV table" action to create the CSV.

Related

Cannot convert Excel to CSV: Azure Synapse Analytics

I want to convert an Excel file to CSV in Azure Synapse Analytics, but I get an error.
The error message is "Invalid excel header with empty value".
The Excel file I want to convert looks like this (created for the question), and I need to remove the blank column A when converting to CSV.
I have never used ADF before, so I don't know how to do this.
Can someone please tell me how?
Any help would be appreciated.
sample.excel
You have to use data flows to do that in ADF.
First, create a linked service for your source dataset.
Then create a linked service for your target folder.
My input looks like this (taken from your attached sheet):
Go to the Author tab of Data Factory and select New data flow.
The source settings should look like this:
Source options: point to the location where you have stored the Excel sheet and also select the sheet name; in my case it is sheet1 (for this example I have used Azure Blob Storage).
Keep the rest of the tabs at their defaults and add a sink to your data flow.
The sink settings should look like this:
Point to the target location where you want to store your CSV file (I have used Azure Blob Storage). Keep the rest of the settings at their defaults.
Create a new pipeline, drag the Data Flow activity onto the canvas, and trigger your data flow.
And my output in CSV looks like this:
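As a rough sketch (not taken from the original post), the two datasets behind this data flow could look like the JSON below. The linked service name AzureBlobStorage1, the container names, and the file name sample.xlsx are placeholders; the relevant parts are the Excel dataset type with sheetName and firstRowAsHeader for the source, and the DelimitedText type for the CSV sink.

{
  "name": "ExcelSource",
  "properties": {
    "type": "Excel",
    "linkedServiceName": {
      "referenceName": "AzureBlobStorage1",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "input",
        "fileName": "sample.xlsx"
      },
      "sheetName": "sheet1",
      "firstRowAsHeader": true
    }
  }
}
{
  "name": "CsvSink",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": {
      "referenceName": "AzureBlobStorage1",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "output"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}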

Add file name to Copy activity in Azure Data Factory

I want to copy data from a CSV file (source) on Blob Storage to an Azure SQL Database table (sink) via a regular Copy activity, but I also want to copy the file name alongside every entry into the table. I am new to ADF, so the solution is probably easy, but I have not been able to find the answer in the documentation or anywhere else on the internet so far.
My mapping currently looks like this (I have created an output table with a file name column, but this data is not explicitly defined at the column level in the CSV file, so I need to extract it from the metadata and map it to that column):
At first I thought I would put dynamic content in there and solve the problem that way, but there is no option to use dynamic content in each individual mapping box, so I do not know how to implement that. My next thought was to use a pre-copy script, but I have not seen how it could be used for this purpose. What is the best way to solve this issue?
In the column mapping of the copy activity you cannot add dynamic content from metadata.
First give the source CSV dataset to a Get Metadata activity, then connect it to the copy activity as shown below.
You can add the file name column via Additional columns in the copy activity source itself, by supplying dynamic content from the Get Metadata activity (after giving it the same source CSV dataset):
@activity('Get Metadata1').output.itemName
If you are sure about the data types of your data, there is no need to configure the mapping; you can execute your pipeline.
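A minimal sketch of how this looks in the pipeline JSON, assuming the activities are named Get Metadata1 and Copy data1 and the sink is an Azure SQL table (the dataset reference names are placeholders):

{
  "name": "Get Metadata1",
  "type": "GetMetadata",
  "typeProperties": {
    "dataset": {
      "referenceName": "SourceCsvDataset",
      "type": "DatasetReference"
    },
    "fieldList": [ "itemName" ]
  }
},
{
  "name": "Copy data1",
  "type": "Copy",
  "dependsOn": [
    { "activity": "Get Metadata1", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "source": {
      "type": "DelimitedTextSource",
      "additionalColumns": [
        {
          "name": "FileName",
          "value": {
            "value": "@activity('Get Metadata1').output.itemName",
            "type": "Expression"
          }
        }
      ]
    },
    "sink": {
      "type": "AzureSqlSink"
    }
  },
  "inputs": [ { "referenceName": "SourceCsvDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SqlOutputTable", "type": "DatasetReference" } ]
}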
Here I am copying the contents of the samplecsv.csv file to a SQL table named output.
My output for your reference:

Column names are incorrectly mapped

I was trying to load data from an on-premises data lake to Azure Data Lake using Azure Data Factory.
I was just providing a query to pull all the columns. My sink is Azure Data Lake Storage Gen2.
But my column names are coming out wrong in the source and sink.
My column names in the on-prem data lake are like user_id, lst_nm, etc., but in Azure they come out as user_tbl.user_id, user_tbl.lst_nm, etc., where user_tbl is my table name.
I don't want the table name added to the column names.
Azure won't add the table name to the columns by itself. Check the output of the SELECT query that you are sending to the source using Preview data in ADF; that will show you the actual column names ADF is getting from the source. If they don't have the table name prefixed, then check whether your ADLS Gen2 destination folder already contains a file; if it does, remove the file and try running the pipeline again.
Instead of using a Copy activity, use a data flow transformation, which allows you to change the column names at the destination dynamically.
Or you can use the Move and transform activities, which also allow you to change column names. Refer to the official tutorial: Dynamically set column names in data flows.
Also check ADF Mapping Data Flows: Create rules to modify column names.
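As a sketch of the data-flow approach from those tutorials: add a Select transformation with a rule-based mapping that matches the prefixed columns and strips the prefix. Assuming the prefix is literally user_tbl., the two expressions would be roughly:

Matching condition:  startsWith(name, 'user_tbl.')
Name as:             replace($$, 'user_tbl.', '')

Here name refers to each incoming column's name and $$ refers to the name of the matched column in the rule, so every column that starts with the table prefix is renamed without it.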

How to parse each row of an Excel file using Azure Data Factory

Here is my requirement:
I have an Excel file with a few columns and a few rows of data.
I have uploaded this Excel file to Azure Blob Storage.
Using ADF, I need to read this Excel file, parse its records one by one, and create dynamic folders in Azure Blob Storage.
This needs to be done for every record present in the Excel file.
Each record in the Excel file has information that will help me create the folders dynamically.
Could someone help me choose the right set of activities or data flow in ADF to do this?
Thanks in advance!
This is my Excel file as the source.
I have created folders in Blob Storage based on the Country column.
I have used a Data Flow activity.
As shown in the screenshot below, go to the Optimize tab of the sink configuration.
Now set the Partition option to Set Partitioning.
Set the Partition type to Key.
And set Unique value per partition to the Country column.
Now run the pipeline.
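For reference, the pipeline activity that runs such a data flow looks roughly like the JSON below; the data flow name SplitByCountry and the compute sizing are placeholders, and the key-partition settings themselves live inside the data flow's sink Optimize tab as described above.

{
  "name": "Data flow1",
  "type": "ExecuteDataFlow",
  "typeProperties": {
    "dataFlow": {
      "referenceName": "SplitByCountry",
      "type": "DataFlowReference"
    },
    "compute": {
      "coreCount": 8,
      "computeType": "General"
    }
  }
}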
Expected output:
Inside these folders you will find files with the corresponding data.

Create a list of files in Azure Storage and send it to a SQL table using ADF

I need to get the file names of the Excel files that are stored as blobs in my Azure Storage account and then put these names into a SQL Server table using ADF. The file path would also work as the file name, but the hard part is that in the dataset that takes all the files from one specific folder I have to select a sheet name, and the sheet names are different for each file, so it returns an error. Is there a way to create a collective dataset without specifying the sheet name?
So, if I understand your question correctly, you are looking for a way to write all Excel file names to a SQL database using ADF.
You can use the generic Get Metadata activity with a binary dataset as the source. Select Child items as the field to retrieve; this will return all files in the folder. Then add a filter to select only the Excel file types.
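A minimal sketch of that part of the pipeline, assuming the binary dataset is named BinaryBlobFolder and the files end in .xlsx (both placeholders):

{
  "name": "Get Metadata1",
  "type": "GetMetadata",
  "typeProperties": {
    "dataset": {
      "referenceName": "BinaryBlobFolder",
      "type": "DatasetReference"
    },
    "fieldList": [ "childItems" ]
  }
},
{
  "name": "Filter Excel files",
  "type": "Filter",
  "dependsOn": [
    { "activity": "Get Metadata1", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('Get Metadata1').output.childItems",
      "type": "Expression"
    },
    "condition": {
      "value": "@endswith(item().name, '.xlsx')",
      "type": "Expression"
    }
  }
}

The output of the Filter activity then gives you the list of Excel file names to write to the SQL table.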
Hope that this gets you on the right track.
