Speed up copy task in Azure Data Factory

I have a copy job that should copy 100 GB of Excel files between two Azure Data Lake stores.
"properties": {
"activities": [
{
"name": "Copy Data1",
"type": "Copy",
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"typeProperties": {
"source": {
"type": "AzureDataLakeStoreSource",
"recursive": true,
"maxConcurrentConnections": 256
},
"sink": {
"type": "AzureDataLakeStoreSink",
"maxConcurrentConnections": 256
},
"enableStaging": false,
"parallelCopies": 32,
"dataIntegrationUnits": 256
},
"inputs": [
{
"referenceName": "SourceLake",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "DestLake",
"type": "DatasetReference"
}
]
}
],
My throughput is about 4 MB/s. As I read here, it should be 56 MB/s. What should I do to reach that throughput?

You can use the copy activity performance tuning guidance to help you tune the performance of the copy activity in your Azure Data Factory.
Summary:
Take these steps:
Establish a baseline. During the development phase, test your pipeline by using the copy activity against a representative data sample. Collect execution details and performance characteristics following copy activity monitoring.
Diagnose and optimize performance. If the performance you observe doesn't meet your expectations, identify performance bottlenecks. Then, optimize performance to remove or reduce the effect of bottlenecks.
In some cases, when you run a copy activity in Azure Data Factory, you see a "Performance tuning tips" message on top of the copy activity monitoring page, as shown in the following example. The message tells you the bottleneck that was identified for the given copy run. It also guides you on what to change to boost copy throughput.
Your files total about 100 GB, but the test files used for the file-based-store benchmark are multiple files of 10 GB each, so your performance may differ.
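If the tuning tips point at the source or sink rather than at DIUs, one experiment worth trying (a minimal sketch only; whether it helps depends on your file layout, and the exact effect of these settings is not guaranteed) is to drop the explicit parallelCopies/dataIntegrationUnits/maxConcurrentConnections values and let the service choose them per run, then compare the monitored throughput against your current configuration:
"typeProperties": {
    "source": {
        "type": "AzureDataLakeStoreSource",
        "recursive": true
    },
    "sink": {
        "type": "AzureDataLakeStoreSink"
    },
    "enableStaging": false
    /// parallelCopies, dataIntegrationUnits and maxConcurrentConnections
    /// omitted so the service picks values for this source/sink pair
}
Also keep in mind that copying many small files tends to be limited by per-file overhead rather than bandwidth, so the files-read and throughput figures on the monitoring page are the place to confirm where the time is actually going.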
Hope this helps.

Related

Azure Data Factory V2 - Input and Output

I'm trying to reproduce the architecture described in the following GitHub repo: https://github.com/Azure/cortana-intelligence-price-optimization
The problem is the part related to ADF, since the guide uses the old version of ADF: I don't know how to map the "input" and "output" properties of a single activity in ADF v2 so that they point to a dataset.
The pipeline runs a Spark activity that does nothing more than execute a Python script, which I think should then write data into the dataset I have already defined.
Here is the JSON of the ADF v1 pipeline from the guide, which I cannot replicate:
"activities": [
{
"type": "HDInsightSpark",
"typeProperties": {
"rootPath": "adflibs",
"entryFilePath": "Sales_Data_Aggregation_2.0_blob.py",
"arguments": [ "modelsample" ],
"getDebugInfo": "Always"
},
"outputs": [
{
"name": "BlobStoreAggOutput"
}
],
"policy": {
"timeout": "00:30:00",
"concurrency": 1,
"retry": 1
},
"scheduler": {
"frequency": "Hour",
"interval": 1
},
"name": "AggDataSparkJob",
"description": "Submits a Spark Job",
"linkedServiceName": "HDInsightLinkedService"
},
The Spark activity in a Data Factory pipeline executes a Spark program on your own or on-demand HDInsight cluster. This article builds on the data transformation activities article, which presents a general overview of data transformation and the supported transformation activities. When you use an on-demand Spark linked service, Data Factory automatically creates a Spark cluster for you just-in-time to process the data and then deletes the cluster once the processing is complete.
Upload "Sales_Data_Aggregation_2.0_blob.py" to storage account attached to the HDInsight cluster and the modify the sample definition of a spark activity and create a schedule trigger and run the code:
Here is the sample JSON definition of a Spark activity:
{
    "name": "Spark Activity",
    "description": "Description",
    "type": "HDInsightSpark",
    "linkedServiceName": {
        "referenceName": "MyHDInsightLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "sparkJobLinkedService": {
            "referenceName": "MyAzureStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "rootPath": "adfspark",
        "entryFilePath": "test.py",
        "sparkConfig": {
            "ConfigItem1": "Value"
        },
        "getDebugInfo": "Failure",
        "arguments": [
            "SampleHadoopJobArgument1"
        ]
    }
}
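One point worth adding for the original question (a sketch of the v2 chaining model; the downstream activity name is a placeholder): in ADF v2 a Spark activity does not take the v1-style "inputs"/"outputs" dataset arrays at all. The script reads and writes storage itself, scheduling comes from a trigger on the pipeline rather than dataset availability, and you chain a downstream activity to it with "dependsOn":
{
    "name": "DownstreamActivity",
    "type": "Copy",
    "dependsOn": [
        {
            "activity": "AggDataSparkJob",
            "dependencyConditions": [ "Succeeded" ]
        }
    ]
    /// dataset references, where needed (for example on a Copy activity),
    /// go in that activity's own "inputs"/"outputs" as DatasetReference objects
}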
Hope this helps.

How to use Azure Data Factory for importing data incrementally from MySQL to Azure Data Warehouse?

I'm using Azure Data Factory to periodically import data from MySQL to Azure SQL Data Warehouse.
The data goes through staging blob storage on an Azure storage account, but when I run the pipeline it fails because it can't separate the blob text back into columns. Each row that the pipeline tries to insert into the destination becomes one long string containing all the column values delimited by a "⯑" character.
I used Data Factory before, without the incremental mechanism, and it worked fine. I don't see a reason it would cause such behavior, but I'm probably missing something.
I'm attaching the JSON that describes the pipeline, with some minor naming changes; please let me know if you see anything that can explain this.
Thanks!
EDIT: Adding exception message:
Failed execution Database operation failed. Error message from database execution:
ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.
(/f4ae80d1-4560-4af9-9e74-05de941725ac/Data.8665812f-fba1-407a-9e04-2ee5f3ca5a7e.txt)
Column ordinal: 27, Expected data type: VARCHAR(45) collate SQL_Latin1_General_CP1_CI_AS, Offending value: *ROW OF VALUES* (Tokenization failed), Error: Not enough columns in this line.,},],'.
{
    "name": "CopyPipeline-move_incremental_test",
    "properties": {
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "RelationalSource",
                        "query": "$$Text.Format('select * from [table] where InsertTime >= \\'{0:yyyy-MM-dd HH:mm}\\' AND InsertTime < \\'{1:yyyy-MM-dd HH:mm}\\'', WindowStart, WindowEnd)"
                    },
                    "sink": {
                        "type": "SqlDWSink",
                        "sqlWriterCleanupScript": "$$Text.Format('delete [schema].[table] where [InsertTime] >= \\'{0:yyyy-MM-dd HH:mm}\\' AND [InsertTime] <\\'{1:yyyy-MM-dd HH:mm}\\'', WindowStart, WindowEnd)",
                        "allowPolyBase": true,
                        "polyBaseSettings": {
                            "rejectType": "Value",
                            "rejectValue": 0,
                            "useTypeDefault": true
                        },
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                    },
                    "translator": {
                        "type": "TabularTranslator",
                        "columnMappings": "column1:column1,column2:column2,column3:column3"
                    },
                    "enableStaging": true,
                    "stagingSettings": {
                        "linkedServiceName": "StagingStorage-somename",
                        "path": "somepath"
                    }
                },
                "inputs": [
                    {
                        "name": "InputDataset-input"
                    }
                ],
                "outputs": [
                    {
                        "name": "OutputDataset-output"
                    }
                ],
                "policy": {
                    "timeout": "1.00:00:00",
                    "concurrency": 10,
                    "style": "StartOfInterval",
                    "retry": 3,
                    "longRetry": 0,
                    "longRetryInterval": "00:00:00"
                },
                "scheduler": {
                    "frequency": "Hour",
                    "interval": 1
                },
                "name": "Activity-0-_Custom query_->[schema]_[table]"
            }
        ],
        "start": "2017-06-01T05:29:12.567Z",
        "end": "2099-12-30T22:00:00Z",
        "isPaused": false,
        "hubName": "datafactory_hub",
        "pipelineMode": "Scheduled"
    }
}
It sounds like what you're doing is right, but the data is poorly formed (a common problem with non-UTF-8 encoding), so ADF can't parse the structure as you require. When I encounter this I often have to add a custom activity to the pipeline that cleans and prepares the data so it can then be used in a structured way by downstream activities. Unfortunately this is a big overhead in the development of the solution and will require you to write a C# class to deal with the data transformation.
Also remember that ADF has no compute of its own; it only invokes other services, so you'll also need an Azure Batch service to execute the compiled code.
Sadly there is no magic fix here. ADF is great for extracting and loading your perfectly structured data, but in the real world we need other services to do the transform or cleaning, meaning we need a pipeline that can ETL, or as I prefer, ECTL.
Here's a link on creating ADF custom activities to get you started: https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
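For orientation, a custom activity slots into the same v1 pipeline JSON as the copy activity; this is only a rough sketch, with the activity, assembly, package and dataset names invented for illustration, and it assumes you have an Azure Batch linked service as described above:
{
    "name": "CleanStagedData",
    "type": "DotNetActivity",
    "linkedServiceName": "AzureBatchLinkedService",
    "typeProperties": {
        "assemblyName": "DataCleaner.dll",
        "entryPoint": "DataCleaner.CleanRowsActivity",
        "packageLinkedService": "AzureStorageLinkedService",
        "packageFile": "customactivities/DataCleaner.zip"
    },
    "inputs": [
        {
            "name": "InputDataset-input"
        }
    ],
    "outputs": [
        {
            "name": "CleanedDataset"
        }
    ],
    "policy": {
        "timeout": "01:00:00",
        "concurrency": 1,
        "retry": 3
    },
    "scheduler": {
        "frequency": "Hour",
        "interval": 1
    }
}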
Hope this helps.
I've been struggling with much the same message when importing from Azure SQL DB to Azure SQL Data Warehouse using Data Factory v2 with staging (which implies PolyBase). I've learned that PolyBase will fail with error messages related to incorrect data types, etc. The message I received is very similar to the one mentioned here, even though I'm not using PolyBase directly from SQL but via Data Factory.
Anyway, the solution for me was to avoid NULL values for columns of decimal or numeric type, e.g. ISNULL(mynumericCol, 0) AS mynumericCol.
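If the offending columns are known, one way to apply that idea in this pipeline (a sketch only; the column names are placeholders, and COALESCE is used here because it works in both MySQL and SQL Server) is to handle the NULLs in the source query itself:
"source": {
    "type": "RelationalSource",
    "query": "$$Text.Format('select column1, column2, COALESCE(column3, 0) as column3 from [table] where InsertTime >= \\'{0:yyyy-MM-dd HH:mm}\\' AND InsertTime < \\'{1:yyyy-MM-dd HH:mm}\\'', WindowStart, WindowEnd)"
}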

Execute U-SQL script in ADL storage from Data Factory in Azure

I have a U-SQL script stored in my ADL store and I am trying to execute it. The script file is quite big, about 250 MB.
So far I have a Data Factory, I have created a linked service, and I am trying to create a Data Lake Analytics U-SQL activity.
The code for my U-SQL Activity looks like this:
{
    "name": "RunUSQLScript1",
    "properties": {
        "description": "Runs the USQL Script",
        "activities": [
            {
                "name": "DataLakeAnalyticsUSqlActivityTemplate",
                "type": "DataLakeAnalyticsU-SQL",
                "linkedServiceName": "AzureDataLakeStoreLinkedService",
                "typeProperties": {
                    "scriptPath": "/Output/dynamic.usql",
                    "scriptLinkedService": "AzureDataLakeStoreLinkedService",
                    "degreeOfParallelism": 3,
                    "priority": 1000
                },
                "policy": {
                    "concurrency": 1,
                    "executionPriorityOrder": "OldestFirst",
                    "retry": 3,
                    "timeout": "01:00:00"
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                }
            }
        ],
        "start": "2017-05-02T00:00:00Z",
        "end": "2017-05-02T00:00:00Z"
    }
}
However, I get the following error:
Error
Activity 'DataLakeAnalyticsUSqlActivityTemplate' from pipeline 'RunUSQLScript1' has no output(s) and no schedule. Please add an output dataset or define activity schedule.
What I would like is to have this activity run on demand, i.e. I do not want it scheduled at all, and I also do not understand what inputs and outputs would be in my case. The U-SQL script I am trying to run operates on millions of files in my ADL storage and saves them back after some modification of the content.
Currently ADF does not support running a U-SQL script stored in ADLS for a U-SQL activity, i.e. the "scriptLinkedService" under "typeProperties" has to be an Azure Blob Storage linked service. We will update the documentation for the U-SQL activity to make this clearer.
Supporting U-SQL scripts stored in ADLS is on our product backlog, but we don't have a committed date for it yet.
Shirley Wang
Currently ADF does not support executing the activity on demand; it needs to be configured with a schedule. You will need at least one output to drive the scheduled execution of the activity. The output can be a dummy Azure Storage dataset that never actually has data written to it; ADF just leverages its availability properties to drive the schedule. For example:
{
    "name": "OutputDataset",
    "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "fileName": "dummyoutput.txt",
            "folderPath": "adf/output",
            "format": {
                "type": "TextFormat",
                "columnDelimiter": "\t"
            }
        },
        "availability": {
            "frequency": "Day",
            "interval": 1
        }
    }
}
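With a dummy dataset like the one above in place, the U-SQL activity just needs to reference it so the scheduler has an output slice to drive (a sketch reusing the names from the snippets above):
"outputs": [
    {
        "name": "OutputDataset"
    }
],
"scheduler": {
    "frequency": "Day",
    "interval": 1
}
Note that the activity's scheduler frequency should match the output dataset's availability, otherwise the slices won't line up.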

How to create a periodic copy of data from OData to Azure DocumentDB

I am trying to make a periodic copy of all the data returned by an OData query into a DocumentDB collection, on a daily basis.
The copy works fine using the copy wizard, which is A REALLY GREAT option for simple tasks. Thanks for that.
What isn't working for me though: the copy just adds data each time, and I can see NO WAY with a DocumentDB sink to "pre-delete" the data in the collection (compare to the SQL sink, which has sqlWriterCleanupScript, which I could set to something like DELETE FROM [table]).
I know I can create an Azure Batch custom activity and do what I need, but at this point I'm not sure it isn't better to write a function and forgo Azure Data Factory (ADF) for this move. I'm using ADF for replicating on-prem SQL stuff just fine, because it has the writer cleanup script.
At this point, I'd like to just use DocumentDB but I don't see a way to do it given the way my data works.
Here's a look at my pipeline:
{
    "name": "R-------ProjectToDocDB",
    "properties": {
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "RelationalSource",
                        "query": " "
                    },
                    "sink": {
                        "type": "DocumentDbCollectionSink",
                        "nestingSeparator": ".",
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                        /// this is where a cleanup script would be great.
                    },
                    "translator": {
                        "type": "TabularTranslator",
                        "columnMappings": "ProjectId:ProjectId,.....:CostClassification"
                    }
                },
                "inputs": [
                    {
                        "name": "InputDataset-shc"
                    }
                ],
                "outputs": [
                    {
                        "name": "OutputDataset-shc"
                    }
                ],
                "policy": {
                    "timeout": "1.00:00:00",
                    "concurrency": 1,
                    "executionPriorityOrder": "NewestFirst",
                    "style": "StartOfInterval",
                    "retry": 3,
                    "longRetry": 0,
                    "longRetryInterval": "00:00:00"
                },
                "scheduler": {
                    "frequency": "Day",
                    "interval": 1
                },
                "name": "Activity-0-_Custom query_->---Project"
            }
        ],
        "start": "2017-04-26T20:13:27.683Z",
        "end": "2099-12-31T05:00:00Z",
        "isPaused": false,
        "hubName": "r-----datafactory01_hub",
        "pipelineMode": "Scheduled"
    }
}
Perhaps there's an update to the pipeline coming that creates parity between the SQL sink and the DocumentDB sink?
Azure Data Factory does not support a cleanup script for DocumentDB today. It's something in our backlog. If you can describe the end-to-end scenario in a bit more detail, it could help us prioritize. For example, why doesn't appending to the same collection work? Is that because there's no way to identify the incremental records after each run? For the cleanup requirement, will it always be a full delete, or might it be based on a timestamp, etc.? Thanks. Until support for a cleanup script is there, a custom activity is the only workaround for now, sorry.
You could use a Logic App that runs on a Timer Trigger.

Azure Data Factory: specify multiple source types in an activity

I have two datasets, one "FileShare" dataset DS1 and another "BlobSource" dataset DS2. I define a pipeline with one copy activity, which needs to copy the files from DS1 to DS3 (BlobSource), with the dependency specified as DS2. The activity is specified below:
{
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "FileShare"
        },
        "sink": {
            "type": "BlobSource"
        }
    },
    "inputs": [
        {
            "name": "FoodGroupDescriptionsFileSystem"
        },
        {
            "name": "FoodGroupDescriptionsInputBlob"
        }
    ],
    "outputs": [
        {
            "name": "FoodGroupDescriptionsAzureBlob"
        }
    ],
    "policy": {
        "timeout": "01:00:00",
        "concurrency": 1,
        "executionPriorityOrder": "NewestFirst"
    },
    "scheduler": {
        "frequency": "Minute",
        "interval": 15
    },
    "name": "FoodGroupDescriptions",
    "description": "#1 Bulk Import FoodGroupDescriptions"
}
How can I specify multiple source types here (both FileShare and BlobSource)? It throws an error when I try to pass them as a list.
The copy activity doesn't like multiple inputs or outputs. It can only perform a 1-to-1 copy... It won't even change the filename for you in the output dataset, never mind merging files!
This is probably intentional so Microsoft can charge you more for additional activities. But let's not digress into that one.
I suggest having one pipeline copy both files into some sort of Azure storage using separate activities (one per file), as in the sketch below. Then have a second downstream pipeline with a custom activity that reads and merges/concatenates the files to produce a single output.
Remember that ADF isn't an ETL tool like SSIS. It's just there to invoke other Azure services. Copying is about as complex as it gets.
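A rough sketch of the first step (the source/sink type names and the second output dataset name are assumptions based on the datasets above, not a verified configuration): each file gets its own 1-to-1 copy activity inside one pipeline:
"activities": [
    {
        "name": "CopyDescriptionsFromFileShare",
        "type": "Copy",
        "typeProperties": {
            "source": { "type": "FileSystemSource" },
            "sink": { "type": "BlobSink" }
        },
        "inputs": [ { "name": "FoodGroupDescriptionsFileSystem" } ],
        "outputs": [ { "name": "FoodGroupDescriptionsAzureBlob" } ]
    },
    {
        "name": "CopyDescriptionsFromBlob",
        "type": "Copy",
        "typeProperties": {
            "source": { "type": "BlobSource" },
            "sink": { "type": "BlobSink" }
        },
        "inputs": [ { "name": "FoodGroupDescriptionsInputBlob" } ],
        "outputs": [ { "name": "FoodGroupDescriptionsAzureBlobCopy" } ]
    }
]
The merge/concatenate step would then live in the downstream custom activity described above.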
