Exception Handling for Copy Activity in Azure Data Factory

I was using the copy activity to update rows in Azure Table Storage. Currently the pipeline fails if there are any errors in updating any of the rows/batches.
Is there a way to gracefully handle the failed rows and continue with the copy activity for the rest of the data?
I have already tried the Fault Tolerance option that the copy activity provides, but it does not solve this case.
FaultTolerance Page

I reproduced the same scenario and got the same error when mapping a column containing special-character data to the RowKey column in Table Storage.
Source dataset
Fault tolerance settings
Error Message:
In the copy activity, it is not possible to skip incompatible rows other than by using fault tolerance. The workaround is to use a Data Flow activity to separate the compatible rows from the incompatible rows, and then copy the compatible data with a copy activity. Below is the approach.
The source is taken as in the image below.
Since col4 needs to be checked before loading to Table Storage, a Conditional Split transformation is added after the source transformation and a condition is applied to the col4 data. The condition is given as:
FalseStream:
like(col4,'%#%')||like(col4,'%$%')||like(col4,'%/%')||like(col4,'%\\%')
Sample characters are given in the above condition.
The true stream will contain the rows that do not match the above condition.
The false and true streams are written to sink1 and sink2 respectively to copy the data to Blob Storage.
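As a rough sketch, the false-stream condition can also be widened to cover every character that Table Storage rejects in PartitionKey/RowKey values (forward slash, backslash, hash and question mark); col4 here stands for whichever column is mapped to RowKey:
like(col4,'%/%')||like(col4,'%\\%')||like(col4,'%#%')||like(col4,'%?%')
Any row matching this condition is then routed to the incompatible-rows sink.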
Output:
False Stream:
True Stream Data:
Once the compatible data has been copied to Blob Storage, it can be copied to Table Storage using a copy activity.

Related

Azure Data Factory Flow : Data Flow Sink Mapping

In the Data Flow sink mapping, on the Mapping tab of the sink, I get the following warning:
"At least one incoming column is mapped to a column in the sink dataset schema with a conflicting type, which can cause NULL values or runtime errors."
Is there some way of actually finding out what the offending column is?
Visual inspection confirms that the types in the outgoing transformation and the Snowflake table are identical.
If you are loading from ADF into Snowflake, I would recommend loading all columns of data into a single "variant" column in Snowflake. Then, once the data is safe and sound, you parse the variant column into multiple columns (complete with data quality checks and transformations).

Skip null rows while reading Azure Data Factory

I am new to Azure Data Factory, and I currently have the following setup for a pipeline.
Azure Data Factory Pipeline
Inside the for each
The pipeline does the following:
Reads files from a directory every day
Filters the children in the directory based on file type [only selects TSV files]
Iterates over each file and copies the data to Azure Data Explorer if it has the correct schema, which I have defined in the mapping for the copy activity.
The copied files are then moved to a different directory and deleted from the original directory so that they aren't copied again.
[Question]: I want to delete or skip the rows which have a null value in any one of the attributes.
I was looking into using data flow, but I am not sure how to use data flows to read multiple tsv files and validate their schema before applying transformations to delete the null records.
Please let me know if there is a solution where I can skip the null values in the for each loop or if I can use data flow to do the same.
If I can use data flow, how do I read multiple files and validate their column names (schema) before applying row transformations?
Any suggestions that would help me delete or skip those null values will be hugely helpful
Thanks!
Ok, inside the ForEach activity, you only need to add a dataflow activity.
The main idea is to do the filter/assert step and then write to multiple sinks.
ADF dataflow :
Source:
Add your TSV file as the source, and make sure to select After completion -> Delete source files; this will save you from adding a separate Delete activity.
Filter activity:
Now, depending on your use case: do you want to filter out rows with null values, or do you want to validate that you don't have null values?
If you want to filter, just add a Filter activity and, in the filter settings -> Filter on, add your condition.
If you need to validate rows and make the data flow fail, use the Assert activity.
Filter condition: !isNull(columnName)
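As a hedged sketch (col1, col2 and col3 are placeholders for your own TSV columns), a filter that keeps only rows with no nulls in any required column could look like:
!isNull(col1) && !isNull(col2) && !isNull(col3)
The same expression can be used in an Assert transformation if you would rather have the data flow fail on null rows than silently drop them.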
Sink:
I added 2 sinks: one for Azure Data Explorer and one for the new directory.
You can read more about it here:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-assert
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-filter
https://microsoft-bitools.blogspot.com/2019/05/azure-incremental-load-using-adf-data.html
Please consider the incremental load and change the data flow accordingly.

Cannot copy data from azure data lake gen2 to azure sql db using data factory pipeline

Failure happened on 'Sink' side. ErrorCode=UserErrorInvalidColumnName,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The column Prop_0 is not found in target side,Source=Microsoft.DataTransfer.ClientLibrary.
All the part files in ADLS Gen2 have 8 columns, the sink table also has 8 columns, and there is no column called Prop_0 in the part files.
Inputs are part files saved in ADLS GEN2 -
Content of one of the part file -
Mapping on ADF -
Output of sql query executed on Azure query editor -
You get this error when your source files don't have a header (or when they do have a header but the 'First row as header' option is not enabled) and you have not configured column mapping. Prop_0, Prop_1, etc. act as the column names when the source file has no header (or the option is not enabled).
In this case, when the column mapping is left empty (cleared or skipped), the copy activity tries to insert source columns by name, which only works when they match your sink table. In the following image, I have not imported the schema (skipped it), and it throws the same error when I run the pipeline.
Since your destination does not have a Prop_0 column, it throws the following error:
Follow the steps specified below to rectify this error:
First, identify whether your source part files have a header. Then edit your source dataset by checking/unchecking the 'First row as header' option accordingly. Publish and preview this data in the source tab of your pipeline.
Move to the Mapping section and click Import schemas (clear and import again if required). Make changes in the mapping if necessary, according to your requirements.
Changes have to be made in this mapping because the source columns and the destination columns don't match. From the part file sample you have given, the appropriate mapping would be as shown below:
Now run the pipeline. The pipeline will run successfully and the SQL table will reflect the inserted rows.

Why won't my rows get deleted in Data Factory?

I am trying to do some data transformations on a dataset in Data Factory. I wanted to delete a set of rows based on certain conditions. This is the data flow so far:
So in AlterRow1 I deleted the rows I wanted, and this is the result when I click on data preview:
As you can see, 6 rows get deleted, exactly what I wanted. However, in sink1 this is the data preview I'm getting:
The rows I wanted to delete are back and won't get deleted when I run this pipeline. I'll add that the source is an Excel file in blob storage and the sink is a CSV file in my blob storage.
What am I doing wrong?
EDIT:
There are no settings in the sink to allow deletion.
Although you are able to see the markers in the data preview, the Alter Row transformation can only result in rows being inserted, updated, deleted, or upserted (DML actions) against a database sink.
See Alter row transformation in mapping data flow.
I did try to repro your exact scenario and I see the same behavior: in the Alter Row transformation's data preview I can see the rows marked with an X to be deleted, but the sink's data preview doesn't honor this and all the rows from the source are shown.
I could not find any documented details on this behavior; you can reach out here and here for an official response.
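If the goal is simply to keep those rows out of the CSV output, one hedged alternative is to replace the Alter Row transformation with a Filter transformation that negates the delete condition; deleteCondition below is a placeholder for whatever expression you used in AlterRow1:
!(deleteCondition)
Rows matching the original delete condition then never reach the file sink.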

Filter csv file according to null values using azure data factory

I have a CSV file in blob storage and I want to push that CSV file into a SQL table using Azure Data Factory. What I want is a check condition on the CSV data: if any cell in a row has a null value, that row should be copied into an error table. For example, I have ID, name and contact columns in the CSV; if for some record the contact is null (1, 'Gaurav', NULL), then that row should be inserted into the error table, and if there is no null in the row, the row should go into the master table.
Note: as the sink is SQL on a VM, we can't create anything over there; we have to handle this at the Data Factory level only.
This can be done using a mapping data flow in ADF. One way of doing it is to use a Derived Column with an expression that does the null check, for example with the isNull() function. That way you can populate a new column with a value for the different cases, which you can then use in a Conditional Split to redirect the different streams to different sinks.
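A minimal sketch of that approach, using the ID, name and contact columns from the question and an illustrative flag column named hasNull: in the Derived Column transformation define
hasNull = isNull(ID) || isNull(name) || isNull(contact)
and then use hasNull as the Conditional Split condition, sending the matching stream to the error-table sink and the default stream to the master-table sink.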
