Copy information from CSV files to SQL Database (Azure Data Factory)

I'm new to Data Factory and what I want to do is copy the information from several CSV files (in storage accounts) to a SQL Server database, into the respective tables that are already created. If, for example, I have 4 CSV files, there should be 4 tables.
I have been testing some activities, for example "Copy Data", but that would force me to create the same number of datasets, and if, for example, there were 15 tables, that would be too many datasets.
I want to make it dynamic but I can't figure out how to do it.
How do you suggest I do this? Any example please, thanks.

You can either use a wildcard to read together all the files that are under the same blob container. Once you have read them, you can add an identifier in the mapping to determine which rows belong to which table, and use it to import the data into the respective tables (a rough SQL sketch of this routing step follows below). Or you can use a ForEach loop to read all the files from the blob container and import the data into the respective table based on the file name.
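As a rough illustration of the first option only (all table, column, and file names below are assumptions, not from the original answer): if the wildcard copy lands everything in one staging table and the copy activity adds the source file path as an extra column (for example through its "Additional columns" setting), the routing step in SQL could look like this:
-- Assumed staging table filled by the wildcard copy; SourceFile carries the
-- originating file path added by the copy activity.
-- CREATE TABLE stg.AllRows (Col1 NVARCHAR(MAX), Col2 NVARCHAR(MAX), SourceFile NVARCHAR(500));

-- Route rows into their respective target tables based on the file they came from.
INSERT INTO dbo.Customers (Col1, Col2)
SELECT Col1, Col2 FROM stg.AllRows WHERE SourceFile LIKE '%customers%.csv';

INSERT INTO dbo.Orders (Col1, Col2)
SELECT Col1, Col2 FROM stg.AllRows WHERE SourceFile LIKE '%orders%.csv';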
I created the article below on copying data from a SQL database to a SQL database for beginners; you can refer to it for the initial setup.
Azure Data Factory (ADF) Overview For Beginners

Related

Is there any easy way to move multiple tables in Oracle, which hold around 5TB of data, to ADLS

I have a requirement where I need to move data from multiple tables in Oracle to ADLS.
The size of the data is around 5TB. I might use these files in ADLS in the future to connect to Power BI.
Is there any easy and efficient way to do this?
Thanks in advance!
You can do this by using the Lookup and ForEach activities in Azure Data Factory.
Create a table or file to store the list of table names that need to be extracted (a SQL sketch of such a control table follows these steps).
Use a Lookup activity to get the table list.
Pass the list to a ForEach activity and, by looping over each table, copy the current item() from Oracle to ADLS.
In the ForEach activity's Settings > Items, add the following code via Add Dynamic Content:
@activity('Get-Tables').output.value
Add a Copy activity inside the ForEach activity.
In the Copy data activity's source, choose Query and input the following code:
SELECT * FROM @{item().Table_Name}
Now add the sink dataset (ADLS) and execute your pipeline.
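As a sketch, the control table that drives the Lookup could look like the following (the table and column names are examples; the column name must match the item().Table_Name reference used above):
-- Example control table listing the Oracle tables to extract.
CREATE TABLE dbo.TablesToExtract
(
    Table_Name VARCHAR(128) NOT NULL
);

INSERT INTO dbo.TablesToExtract (Table_Name)
VALUES ('SALES.ORDERS'), ('SALES.ORDER_ITEMS'), ('HR.EMPLOYEES');

-- Query used as the Lookup activity's source, with "First row only" unchecked:
SELECT Table_Name FROM dbo.TablesToExtract;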
Please refer to the Microsoft documentation to learn about the creation of linked services for Oracle.
Please go through this article by Sean Forgatch in MODERN DATA ENGINEERING if you face any issues in the process.

How to bulk copy multiple csv files from blob to multiple sql database tables with data factory

I am trying to copy different csv files in blob storage into their very own sql tables (I want to auto-create these tables). I've seen a lot of questions but I haven't seen any that answer this.
Currently I have a Get Metadata activity that grabs a list of child items to get the names of the files, and a ForEach loop, but from there I don't know how to send them to a different table per file.
Updated:
When I run it for a 2nd time, it will add new rows into the table.
I created a simple test with a csv file stored in Azure Data Lake, and it works well.
Then we can use a pipeline to copy this csv file into an Azure SQL table (auto-creating the tables).
1. At the Get Metadata1 activity, we can set the dataset to the folder containing the csv files, and select First row as header on the dataset.
2. At the ForEach1 activity, we can loop over the file list via the expression @activity('Get Metadata1').output.childItems.
3. Inside the ForEach1 activity, we can use a Copy data1 activity with the same data source as the Get Metadata1 activity. At the source tab, we can type in the dynamic content @item().name to get the file name.
At the sink tab, we should select Auto create table.
In the Azure SQL dataset, we should type in the schema name and the dynamic content @replace(item().name,'.csv','') as the table name, because this information is needed to create the table dynamically.
The debug run completes successfully and one table is created per csv file.
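If you want to double-check the outcome on the SQL side, a simple query like the one below (plain T-SQL, nothing ADF-specific) lists the tables that the pipeline auto-created:
-- List the user tables in the target database to confirm one table per csv file.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;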

Ingesting a CSV file through Polybase without knowing the sequence of columns

I am trying to ingest a few CSV files from Azure Data Lake into Azure Synapse using Polybase.
There is a fixed set of columns in each CSV file and the column names are given on the first line. However, the columns can come in a different order from file to file.
In Polybase, I need to declare an external table, for which I need to know the exact sequence of columns at design time, and hence I cannot create the external table. Are there other ways to ingest the CSV files?
I don't believe you can do this directly with Polybase because, as you noted, the CREATE EXTERNAL TABLE statement requires the column declarations up front. At runtime, the CSV fields are then mapped to those columns positionally, not by header name.
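For illustration, a minimal external table over a delimited file might look like the sketch below (file format, data source, and column names are all hypothetical); the declared column list is fixed at design time and the file's fields are bound to it left to right:
-- Hypothetical file format skipping the header row.
CREATE EXTERNAL FILE FORMAT csv_format
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2));

-- The column order declared here must match the physical order of fields in the CSV.
CREATE EXTERNAL TABLE ext.Events
(
    EventId   INT,
    EventName NVARCHAR(100),
    EventDate DATE
)
WITH (LOCATION = '/events/',          -- example folder in the data lake
      DATA_SOURCE = my_adls_source,   -- assumed, previously created external data source
      FILE_FORMAT = csv_format);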
You could accomplish this easily with Azure Data Factory and Data Flow (which uses Polybase under the covers to move the data to Synapse) by allowing the Data Flow to generate the table. This works because the table is generated after the data has been read rather than before as with EXTERNAL.
For the sink Data Set, create it with a parameterized table name (and optionally schema).
In the Sink activity, specify "Recreate table".
Pass the desired table name to the sink Data Set from the Pipeline.
Be aware that all string-based columns will be defined as VARCHAR(MAX).

Copy Blob Data To Sql Database in Azure Data Factory with Conditions

I am running a trigger-based pipeline to copy data from blob storage to a SQL database. In every blob file there is a bunch of JSONs, of which I need to copy just a few, and I can differentiate them based on a key-value pair present in every JSON.
So how do I filter the JSONs containing that value for a common key?
While the copy activity is happening, it should filter the data according to the Event-Name: "..." value.
Data Factory in general only moves data, it doesn't modify it. What you are trying to do might be done using a staging table in the sink SQL database.
You should first load the JSON values as-is from blob storage into the staging table, then copy them from the staging table to the real table where you need them, applying your filter logic in the SQL command used to extract them (a sketch follows below).
Remember that SQL databases have built-in functions to work with JSON values: https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-2017
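As a rough sketch of that staging-to-target step (the table, column, and key names are assumptions based on the question, and 'DesiredEvent' is a placeholder value):
-- Assumed staging table: one row per JSON document loaded as-is by the copy activity.
-- CREATE TABLE dbo.StagingEvents (JsonPayload NVARCHAR(MAX));

-- Copy only the documents whose "Event-Name" matches the value you care about.
INSERT INTO dbo.Events (EventName, EventPayload)
SELECT JSON_VALUE(JsonPayload, '$."Event-Name"') AS EventName,
       JsonPayload AS EventPayload
FROM dbo.StagingEvents
WHERE JSON_VALUE(JsonPayload, '$."Event-Name"') = 'DesiredEvent';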
Hope this helped!
At this time we do not have an option for the copy activity to filter the content (with the exception of a SQL source).
In your scenario it looks like you already know which values need to be omitted. One way to go would be to have a "Stored Procedure" activity after the copy activity that simply deletes the values you don't want from the table; this should be easy to implement, but depending on the volume of data it may lead to performance issues (a sketch of such a procedure follows below). The other option is to have the JSON file cleaned on the storage side before it is ingested.
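A minimal sketch of that cleanup stored procedure, assuming the copied rows land in a table with an EventName column (all names and the filter value are hypothetical):
CREATE PROCEDURE dbo.RemoveUnwantedEvents
AS
BEGIN
    SET NOCOUNT ON;
    -- Delete every row whose event name is not the one we want to keep.
    DELETE FROM dbo.Events
    WHERE EventName <> 'DesiredEvent';
END;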

What is the difference between using a COPY DATA activity to a SQL table vs using CREATE EXTERNAL TABLE?

I have a bunch of U-SQL activities that manipulate and transform data in an Azure Data Lake. Out of this, I get a csv file that contains all my events.
Next I would just use a Copy Data activity to copy the csv file from the Data Lake directly into an Azure SQL Data Warehouse table.
I extract the information from a bunch of JSON files stored in the Data Lake and create a staging .csv file;
I grab the staging .csv file and a production .csv file, inject the latest changes (and avoid duplicates), and save the production .csv file;
Copy the .csv production file directly to the Warehouse table.
I realized that my table contains duplicated rows and, after having tested the U-SQL scripts, I assume that the Copy Data activity somehow merges the content of the csv file into the table.
Question
I am not convinced I am doing the right thing here. Should I define my warehouse table as an EXTERNAL table that would get its data from the .csv production file? Or should I change my U-SQL to only include the latest changes?
Whether you want to use external tables depends on your use case. If you want the data to be stored inside SQL DW for better performance, you have to copy it at some point, e.g. via a stored procedure. You could then just call the stored procedure from ADF, for instance (a sketch follows below).
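For that option, such a stored procedure could be as simple as the sketch below (ext.Events and dbo.Events are assumed names; the external table would be declared over the production .csv, as in the previous question):
CREATE PROCEDURE dbo.LoadEventsFromExternal
AS
BEGIN
    -- Copy the current contents of the external table (the .csv in the lake)
    -- into the internal warehouse table.
    INSERT INTO dbo.Events (EventId, EventName, EventDate)
    SELECT EventId, EventName, EventDate
    FROM ext.Events;
END;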
Or, if you don't want to / cannot filter out data beforehand, you could also implement an "Upsert" stored procedure in your SQL DW and call it to insert your data instead of using the copy activity. A rough sketch of such a procedure follows below.
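A minimal upsert sketch, assuming the new rows are first loaded into a staging table stg.Events and that EventId is the key (all names are hypothetical); it uses a delete-then-insert pattern, which is a common upsert approach in SQL DW:
CREATE PROCEDURE dbo.UpsertEvents
AS
BEGIN
    -- Remove target rows that are about to be replaced by newer staged rows.
    DELETE FROM dbo.Events
    WHERE EventId IN (SELECT EventId FROM stg.Events);

    -- Insert all staged rows (new and replacements).
    INSERT INTO dbo.Events (EventId, EventName, EventDate)
    SELECT EventId, EventName, EventDate
    FROM stg.Events;
END;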

Resources