Azure Data Factory Calculate Percent Revenue

I need to calculate the billing percentage with respect to the total in an Azure Data Factory data flow. I used the expression:
Total_amount/sum(Total_amount)
But it doesn't work. How can I calculate percentages using the aggregate transformation inside a data flow?

Create a data flow with 2 sources; in your example, they can both be the same source. The first stream will be Source -> Aggregate [sum(Total_amount)] -> Sink (cached). The second stream will be Source -> Derived Column (Total_amount divided by a lookup against the cached sink above). My example did this exact same thing with loan data; the formula in the derived column looks like this:
loan_amnt / sink1#output().total_amount
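A minimal sketch of the same pattern using the column names from the question (the sink name sink1 and the aggregate column name sum_total are assumptions):
Aggregate (first stream): sum_total = sum(Total_amount)
Derived column (second stream): pct_of_total = multiply(divide(Total_amount, sink1#output().sum_total), 100)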

You can try using the window transformation in a data flow.
Take the source as-is.
Add a dummy column using a derived column transformation and assign it a constant value; partitioning the window over this dummy column puts every row in a single partition, so the sum covers the whole dataset.
Using the window transformation, compute each row's percentage of the total amount.
Expression in the window column:
multiply(divide(toInteger(Total_amount),sum(toInteger(Total_amount))),100)
The data preview of the window transformation then shows the computed percentage for each row.
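Putting the pieces together, a minimal sketch (the names dummy and pct are assumptions):
Derived column: dummy = 1
Window Over clause: dummy
Window column: pct = multiply(divide(toInteger(Total_amount), sum(toInteger(Total_amount))), 100)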

Related

Dataflow expression builder greatest max integer ADF

I am using Data Factory's expression builder to build data flows (Aggregate function) to: 1. group movies by year, 2. find the max rating of the movies, and 3. return the movie title for that max.
I have already grouped by year, so I'm trying to return something like
max(toInteger(Rating)) or greatest(toInteger(Rating))
and also get the 'title' of the movie with that max rating. Can this be done in the expression builder?
The Aggregate transformation defines aggregations of columns in your data streams. Using the Expression Builder, you can define different types of aggregations such as SUM, MIN, MAX, and COUNT grouped by existing or computed columns.
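For example, a minimal aggregate matching the question (assuming the group-by column is named year):
Group by: year
Aggregates: maxRating = max(toInteger(Rating))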
I tried to repro the issue with sample data, and I can observe that getting the movie title isn't possible with the Aggregate function in a mapping data flow.
In the data preview we can see that we only get the group-by column and the aggregate column; there is no option to include the movie title column here.

Data Flow - Window Transformation - NTILE Expression

I'm attempting to assign quartiles to a numeric source data range as it transits a data flow.
I gather that this can be accomplished by using the ntile expression within a window transformation.
I've tried to follow the documentation but haven't had any success.
This is just a basic attempt to understand the implementation before using it in a real application. I have a numeric value in my source dataset, and I want the values within the range to be spread across 4 buckets and labeled as such.
Thanks in advance for any assistance with this.
In the Window transformation of the data flow, configure the settings with the source's numeric column in the "Sort" tab.
Next, in the Window columns tab, create a new column and write the expression nTile(4) in order to create 4 buckets.
In the data preview we can then see that the data is spread across 4 buckets.
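As a sketch, assuming the numeric source column is named value:
Sort tab: value, ascending
Window columns tab: quartile = nTile(4)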

Aggregation in Azure Data Flow is Returning Invalid Value

I have created a data flow in Data Factory.
Step 1. Read the parquet file.
Step 2. Aggregate the file to get the Max(DateField)
Step 3. Use a derived column to write in a Value.
Step 4. Alter row task with Value and the DateField.
Step 5. Sink select the Watermark table to update.
The flow updates the value, but it isn't writing the max value; the date it writes is incorrect. Any ideas?
The max() aggregate function doesn't work on date/string types; you must pass it a column that contains numerical values. A date is not a valid input to the max function; there is no "maximum date" in that sense.
Instead, you can sort or filter on the timestamp and get the latest or oldest date using ADF.
Refer to this answer by Leon to see how to implement this.
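A sketch of one such approach (not necessarily the linked answer's exact implementation): sort descending on the date column, then take the first row, assuming the column sorts correctly as a date:
Sort: DateField, descending
Aggregate: maxDate = first(DateField)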

Populate missing data points in Azure data flow

We are working on building an ETL pipeline using Azure data flows.
Our requirement is to fill in the missing data points (adding rows as required), with the data for each added row copied from the previous available data point (when sorted on the key columns).
Example: (input and expected output tables omitted) The rows highlighted in green in the expected output have their values copied from the previous available row for the key columns (Name, year and period).
Any idea how I can achieve this in an Azure data flow?
You can use the mapLoop function to generate the years + quarters in one column, then a flatten transformation to turn that into a table of years + quarters, and then left outer join that table to the original table.
The resulting table will have nulls for the missing quarters. Then use the fill-down technique to fill in the values (this only works for small data).
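A sketch of that approach (stream and column names are assumptions):
Derived column: quarters = mapLoop(4, concat(toString(year), '-Q', toString(#index)))
Flatten: unroll by quarters to get one row per year + quarter
Join: left outer join the original data on the key columns, then fill down the null values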

Azure Data Flow - Can we have Dynamic columns or change in projections for Unpivot functionality

The Excel file consists of 62 columns: 7 columns are fixed and the rest hold weeks of the year (week1 to week52).
I have used a data flow task to unpivot the week columns into rows, with 2 extra columns, year and value.
The problem is that the 52 week column names keep changing on every weekly data load. How do I handle this change in column names in the data flow? For a single run it gives the exact output.
What you'll want to do here is implement late binding of your schema, or what ADF refers to as "schema drift". Instead of setting a hardened "early binding" schema in your source projection, leave the dataset schema and projection empty.
Next, add a Derived Column after your source and call it "Projection". This is where you'll build your projection using rules to account for your evolving schema.
Build out your canonical model with the column names for your entire year using byName('columnname'). That tells ADF to look for the existence of the named column in your source data, while also providing a schema that you can use to build out your pivot table.
If you need to cast the values, wrap byName() inside a casting function, e.g. toString(), toDate(), etc.
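For example, a sketch of the "Projection" derived column (the week column names are illustrative):
week1 = toString(byName('week1'))
week2 = toString(byName('week2'))
and so on for each column in the canonical model.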
