How to pass a Data Flow Parameter as the Key Column in a Sink Transformation while updating data? - azure

I am implementing SCD Type 2 through a Data Flow. I have created a parameter in it to which I will pass a column name, and I am using this parameter as the Key Column in the Sink transformation.
Passing a parameter in Key Column in Data Flow
I selected Add Dynamic Content and then Parameter, after which I selected the parameter I created in the Data Flow. It then shows as "$Key_col".
But when I run the pipeline, it gives me this error:
{"message":"at Sink 'sink1'(Line 56/Col 6): Column operands are not allowed in literal expressions. Details:at Sink 'sink1'(Line 56/Col 6): Column operands are not allowed in literal expressions","failureType":"UserError","target":"Update_Existing_Records","errorCode":"DFExecutorUserError"}
Can anyone please tell me how to resolve this error, or suggest a workaround for this problem?

Yes, this works. You just need to put single quotes around the parameter value, like this:
"'$Key_col'"
I'm using double quotes for string interpolation in this solution, so paste it into your expression exactly as shown.

The key column doesn't support being set with a parameter. You can only choose an existing column in the sink.
The column name that you pick as the key here will be used by ADF as part of the subsequent update, upsert, or delete. Therefore, you must pick a column that exists in the sink mapping. If you do not wish to write the value to this key column, click "Skip writing key columns".
Please reference: Mapping data flow properties.
The parameter Key_col does not exist in the sink, even if it has the same name as a sink column.
Update:
Data Flow parameter:
If we want to use update, we must add an Alter Row transformation:
In the sink, choose the existing column 'name' as the key column:
The pipeline runs successfully:
Hope this helps.
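For reference, here is a trimmed sketch of what the underlying data flow script for this pattern can look like (schemas and most options elided; transformation names are illustrative, not taken from the original pipeline):

    source1 alterRow(updateIf(true())) ~> AlterRow1
    AlterRow1 sink(updateable: true,
        insertable: false,
        keys: ['name']) ~> sink1

Note that keys takes literal column names that exist in the sink, which is why a parameter such as $Key_col cannot be used there directly.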

Related

Azure Data Factory: Data Flow: Display a particular derived column's type

I am trying to determine the outgoing type of a derived column.
I am using the expression:
toBinary(sha2(256,'my data')), expecting a BINARY type; its target is a Snowflake BINARY(64) column.
I am getting a DF-Snowflake-InvalidDataT error from the pipeline run.
Is there a way of actually dumping out the derived type?
Thanks
Stephen
I am trying to determine the outgoing type of a derived column.
You can see the data type in the Inspect tab of the derived column transformation.
Here I have added a column and used the same expression.
Here is the Inspect tab, where you can see the data types of all columns, including the new column.
You can see the same in the sink's Inspect tab as well.
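For reference, the expression from the question written as a derived column in data flow script (a sketch; the transformation and column names are illustrative):

    source1 derive(hashedData = toBinary(sha2(256, 'my data'))) ~> DerivedColumn1

With this in place, the Inspect tab should report hashedData as binary.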

Azure DF Parameterizing

I am trying to parameterize a mapping data flow in the Azure DF UI. I declare a variable in the pipeline which takes its value from a lookup - the current timestamp of a table. In the mapping data flow I define a parameter, which should take the value of the variable declared in the pipeline. The variable is of type string and holds the current timestamp value.
This is the pipeline in debug.
The data flow parameter takes the value of the variable.
I am setting an empty string as the default value - I also tried setting it to take the variable value.
This is the filter expression to filter based on this timestamp.
So I guess I have to add a default value to the parameter inside the Data Flow. How should this be done using the expression language? Thank you in advance!
When you pass the Data Flow parameter value from the pipeline, the default value is not required at all. You can leave it empty.
When you debug the pipeline, the value is taken from the pipeline only and not from the default value.
For the data preview of each transformation, you can provide a temporary hardcoded value by going through "click to see parameters below". This helps check whether the data flow preview gives correct results.
Please go through my repro for your reference:
This is my sample data
After creating the myparam parameter in the Data Flow, I passed my variable myname to it from the pipeline. Here, I have given the value of the myname variable as @string('Rakesh') to filter on this name.
Don't check that checkbox.
The condition in the Filter is Name==$myparam.
Result:
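As a minimal sketch using the names from this repro: in the Execute Data Flow activity's Parameters tab, the value is set with pipeline dynamic content, leaving the expression checkbox unchecked:

    myparam = @variables('myname')

The expression is evaluated in the pipeline and the resulting string is handed to the data flow.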
The best way to add a default value is to put in an old timestamp as a placeholder. It will not actually be used, since the parameter is set dynamically, so don't worry; just place it there. It allows debug to preview data. This worked in my case, as I also get the max date from a lookup and use it in a dynamic query.
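For example (an arbitrary placeholder, not from the original post):

    1900-01-01 00:00:00

Anything older than your data works, since the real value always arrives from the pipeline at run time.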

ADF Mapping Data Flow - Dirty / Label Replacement / Sink cache and derived column

I am trying to build an ADF mapping data flow for generically adding labels - its purpose is to look at a value in a particular column and replace it with a label. I already have my dataset that looks like this (Table B):
The goal is to replace the values with the label ones. Since my label dataset mapping file is in a Cached Sink (Table B), I thought that I could use a Derived Column transformation, along with cached lookups, to find the clean value, given the current column name and current (dirty) value as keys. I did a rule-based mapping expression to get just the columns that need cleaning:
I tested the derived column transformation using the pattern: Each column that matches: libCached#lookup(name).Column_Name
This part allows me to identify the column names that need to be replaced by labels, and that's working fine.
I need help making the replacement work - I tried several formulas and it still doesn't work. I don't know whether it's achievable or not.
Thanks a lot
To replace the actual values in the derived column, you'll need to use the lookup formula with the key that you've set in the cached sink, so that ADF can match on that value. In your screenshot, you are only checking for null and are not actually returning the lookup value.
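As a rough sketch of that idea (the label column name Label is an assumption; adjust it to the schema of your cached sink), a pattern expression that returns the label when a match exists and otherwise keeps the original value could be:

    iif(isNull(libCached#lookup(name, toString($$)).Label),
        toString($$),
        libCached#lookup(name, toString($$)).Label)

Here name and $$ are the matched column's name and value, and the lookup arguments must line up, in order, with the key columns defined on the cached sink.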

Passing the Dataflow Parameter to the Sink Key Column in Azure Data Factory

I wanted to implement SCD Type 2 logic, but using dynamic tables and dynamic key fields from a config table. My challenge is passing the Data Flow parameter as the sink key column for my Alter Row activity: it does not take the parameter value and always gives the error "invalid key column name". I tried picking the Data Flow parameter in the expression builder at the sink key column, passing the value from the Alter Row transformation, and I have also named the field with the parameter in the select statement. Any help or suggestion is highly appreciated.
Please see the images below.
Sample of how I wanted to pass dynamic values in the sink mapping.
Trying to give the dynamic value to the key column.
You have "List of columns" selected, so ADF is looking for a column in your target table that is literally called "$TargetPK1Parameter".
Change the selector to "Custom expression" and enter a string array parameter. The parameter can be an array of strings that represent names of key columns in your target table.
It should look something like this:
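For example (a sketch using the parameter named in the question), if $TargetPK1Parameter is a string holding a single key column name, the custom expression can simply wrap it in an array:

    array($TargetPK1Parameter)

The next answer shows the same idea extended to a comma-separated composite key.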
I encountered a similar problem when trying to pass a composite key, parameterized, as part of the update method to the sink. This approach now allows me to fully parameterise my dataflow, and it handles both composite keys and single-column keys.
Here's how the data looks in my config table:
UpsertKeyColumn = DOMNAME,DDLANGUAGE,AS4LOCAL,VALPOS,AS4VERS
A parameter value is set in the dataflow:
Upsert_Key_Column = @item().UpsertKeyColumn
Finally, in the Sink settings, Custom Expression is selected for Key columns and the following expression is entered - split($upsert_key_column,',')

Azure Data Factory - Exists transformation in Data Flow with generic dataset

I'm having issues using the Exists Transformation within a Data Flow with a generic dataset.
I have two sources (one from a staging table "sourceStg", one from a DWH table "sourceDwh") and want to check whether the UniqueIdentifier column in the staging table exists in the UniqueIdentifier column of the DWH table. For that I have a generic dataset which I query with a SQL statement containing parameters.
When I open the "Exists settings" I cannot choose any column from the source in the conditions, since the source is generic and has no projection until I run the data flow. However, I have a parameter from the parent pipeline which provides the name of the column containing the UniqueIdentifier (the column names in staging and DWH are the same).
I tried adding the statement byName($UniqueIdentifier) in both the left and right column fields, but the engine resolves both as the sourceStg column, since the prefix of the source transformation is missing and it defaults to the first one. What I am basically trying to achieve is a statement like the following, specifying the correct source transformation and the column containing the unique identifier via a parameter:
exists(sourceStg#$UniqueIdentifier == sourceDwh#$UniqueIdentifier)
But either the expression cannot be parsed, or the result does not retrieve the actual UniqueIdentifier value from the column but instead writes the statement itself (e.g. sourceStg#$UniqueIdentifier) as the column value.
The only workaround I have found so far is to use two derived columns: one adds a suffix to the UniqueIdentifier column in one source, and a new parameter $UniqueIdentifierDwh is populated with the value of $UniqueIdentifier plus the same suffix used in the derived column.
Any Azure Data Factory experts out there to help?
Thanks in advance!
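A sketch of the workaround described above (the '_dwh' suffix is illustrative): in the DWH source, a derived column pattern matching name == $UniqueIdentifier copies each key column into one named with the fixed suffix, and $UniqueIdentifierDwh is passed from the pipeline as the suffixed name. The Exists condition then becomes:

    byName($UniqueIdentifier) == byName($UniqueIdentifierDwh)

Because the suffixed column only exists in the DWH stream, byName no longer resolves both sides to sourceStg.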
