Altering CSV Rows in Azure Data Factory

I've tried to use the 'Alter Rows' function within a Data Flow in Azure Data Factory to remove rows that match a condition from a CSV dataset.
The Data Preview shows that the matched rows will be deleted; however, in the next step, the sink, it seems to ignore that and writes the original rows to the CSV output.
Is it not possible to use Alter Rows on a CSV dataset, and if not, is there a workaround?

Firstly, use 'Union' to merge your CSV files as the source.
Then, use 'Filter' to filter your data by date-time stamps at the source.
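If it helps to see the equivalent logic outside the data flow canvas, here is a rough PySpark sketch of the same idea: keep only the rows that do not match the delete condition and write the result back out (the paths, the 'date' column and the cut-off value are placeholders, not from your dataset):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read the CSV source (placeholder path; header row assumed)
df = spark.read.option("header", "true").csv("/mnt/source/*.csv")

# Keep only the rows that do NOT match the delete condition,
# e.g. drop everything older than a cut-off date
filtered = df.filter(F.col("date") >= "2021-01-01")

# Write the remaining rows out as the new CSV output (placeholder path)
filtered.write.mode("overwrite").option("header", "true").csv("/mnt/output/")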

Related

Overwrite sql table with new data in Azure Dataflow

Here is my situation. I am using the Alteryx ETL tool, where we essentially append new records to Tableau using the 'Overwrite the file' option.
What it does is capture any incoming data to the target, delete the old data, and publish the results in the Tableau visualisation tool.
So whatever data comes in from the source must overwrite the existing data in the sink table.
How can we achieve this in Azure Data Flow?
If you are writing to a database table, you'll see a sink setting for "truncate table" which will remove all previous rows and replace them with your new rows. Or if you are trying to overwrite just specific rows based on a key, then use an Alter Row transformation and use the "Update" option.
If your requirement is just to copy data from your source to the target and truncate the table before the latest data is copied, then you can just use a Copy activity in Azure Data Factory. In the Copy activity you have an option called Pre-copy script, in which you can specify a query to truncate the table (for example, TRUNCATE TABLE dbo.YourTable) and then proceed with copying the latest data.
Here is an article by a community volunteer where a similar requirement has been discussed with various approaches - How to truncate table in Azure Data Factory
If your requirement is to do the data transformation first, then copy the data to your target SQL table, and truncate the table before you copy the latest transformed data, then you will have to use a mapping data flow activity.
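If you end up doing the transformation in Spark rather than in a mapping data flow, the same truncate-and-load pattern can be sketched with the PySpark JDBC writer (the connection details, table name and the transformed_df DataFrame below are placeholders, not from the question):

# Placeholder connection details for the target database
(transformed_df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=<db>")
    .option("dbtable", "dbo.TargetTable")
    .option("user", "<user>")
    .option("password", "<password>")
    # 'overwrite' combined with truncate=true empties the existing table
    # (instead of dropping it) before loading the incoming rows
    .option("truncate", "true")
    .mode("overwrite")
    .save())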

External Table in Databricks is showing only future date data

I have a Delta table in Databricks and the data is available in ADLS. The data is partitioned by a date column; from 01-06-2022 onwards the data is available in Parquet format in ADLS, but when I query the table in Databricks I only see data from the current date onwards each day. Older data is not displayed. Every day the data is overwritten to the table path, partitioned by the date column.
df.write.format('delta').mode('overwrite').save('{}/{}'.format(DELTALAKE_PATH, table))
Using Overwrite mode will delete past data and add new data. This is the reason for your issue.
df.write.format('delta').mode('append').save('{}/{}'.format(DELTALAKE_PATH, table))
Using append mode will add the new data after the existing data. This will keep your existing data, and when you execute a query it will return past records as well.
You need to use append mode in place of overwrite mode.
Append Mode - Only the new rows appended in the Result Table since the last trigger will be written to the external storage. This is applicable only to queries where existing rows in the Result Table are not expected to change.
Reference - https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#basic-concepts
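To confirm the behaviour after switching to append mode, you can read the table back and check that the older date partitions are still there (a small sketch reusing DELTALAKE_PATH and table from the question; 'date' is assumed to be the partition column):

# Load the Delta table back from the same path used in the write
delta_df = spark.read.format('delta').load('{}/{}'.format(DELTALAKE_PATH, table))

# Count rows per date partition; with append mode, earlier dates stay visible
delta_df.groupBy('date').count().orderBy('date').show()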

Why won't my rows get deleted in Data Factory?

I am trying to do some data transformations on a dataset in Data Factory. I wanted to delete a set of rows based on certain conditions. This is the data flow so far:
So in AlterRow1 I deleted the rows I wanted, and this is the result when I click on data preview:
As you can see, 6 rows get deleted, exactly what I wanted. However, in sink1 this is the data preview I'm getting:
The rows I wanted to delete are back and won't get deleted when I run this pipeline. I'll add that the source is an excel file from the blob storage and sink is a csv file in my blob storage.
What am I doing wrong?
EDIT:
There are no settings in the sink to allow deletion.
Although you are able to see the effect in the Data preview, the Alter Row transformation can only result in rows being inserted, updated, deleted, or upserted (DML actions) against a database sink.
See: Alter row transformation in mapping data flow
I did try to repro your exact scenario and I see the same behavior. In the Alter Row transformation's Data preview I can see the rows marked X to be deleted, but the sink preview doesn't honour that and all the rows from the source are shown.
I could not find any particular details about this behavior; you can reach out here and here for an official response.

Data Factory cannot copy `csv` with comma after last column to sql data warehouse

I have CSV files that I want to copy from a blob to DW; the CSV files have a comma after the last column (see example below). Using ADF, I tried to copy the CSV files to a SQL table in DW. However, I got this error, which I think is because of the last comma (as I have 15 columns):
A few rows of the CSV file:
Code,Last Trading Date,Bid Price,Bid Size,Ask Price,Ask Size,Last Price,Traded Volume,Open Price,High Price,Low Price,Settlement Price,Settlement Date,Implied Volatility,Last Trade Time,
BNH2021F,31/03/2021,37.750000,1,38.000000,1,,0,,,,37.750000,29/03/2021,,,
BNM2021F,30/06/2021,44.500000,6,44.700000,2,44.400000,4,44.300000,44.400000,44.300000,44.500000,29/03/2021,,15-55-47.000,
BNU2021F,30/09/2021,46.250000,2,47.000000,1,47.490000,2,47.490000,47.490000,47.490000,46.920000,29/03/2021,,15-59-10.000,
Note that CSVs are the original files and I can't change them. I also tried different Quote and Escape characters in the dataset and it didn't work.
Also, I want to do this using ADF, not Azure Functions.
I couldn't find any solution to this; please help.
Update:
It's interesting that the dataset preview works:
I think you can use data flow to achieve that.
Azure Data Factory will interpret the last comma as an extra column with a null value, so we can use a Select transformation to drop that last column.
Set the mapping manually at the sink.
Then we can sink the data to our DW or SQL table.
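For comparison, the same clean-up expressed in PySpark would be to read the file and drop the trailing empty column before loading it (a sketch only; the path is a placeholder):

# Read the CSV; the trailing comma produces an extra, empty 16th column
df = spark.read.option("header", "true").csv("/mnt/input/prices.csv")

# Drop the last column (the one created by the trailing comma)
clean_df = df.drop(df.columns[-1])

clean_df.printSchema()  # expect the 15 real columns to remain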
You are using 15 columns and your destination is expecting 16. Add another column to your CSV or modify your DW.
There is a simple solution to this.
Step 1:
Uncheck the "First Row as header" option in your source dataset
Step 2: Sink it first to another CSV file. In the sink CSV dataset, import the schema as shown below. The Copy activity will create a new CSV file with a clean 15 columns, i.e. the last extra comma will not be present in the new CSV file.
Step 3: Copy from the newly created CSV file with "First row as header" checked and sink it to DW.

Filter csv file according to null values using azure data factory

I have a CSV file in blob storage, and I want to push that CSV file into a SQL table using Azure Data Factory. What I want is a check condition on the CSV data: if any cell in a row has a null value, that row should be copied into an error table. For example, I have ID, Name and Contact columns in the CSV, so if for some record the Contact is null (1, 'Gaurav', NULL), that row should be inserted into the error table; if there is no null in the row, it should go into the master table.
Note: as the sink is SQL on a VM, we can't create anything over there; we have to handle this at the Data Factory level only.
This can be done using a mapping data flow in ADF. One way of doing it is to use a Derived Column with an expression that does the null check, for example with the isNull() function. That way you can populate a new column with some value for the different cases, which you can then use in a Conditional Split to redirect the different streams to different sinks.
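A PySpark sketch of the same split logic, for illustration (column names follow the example in the question; the file path and JDBC connection details are placeholders):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read the source CSV from blob storage (placeholder path)
df = spark.read.option("header", "true").csv("/mnt/blob/input.csv")

# Rows where any of the three columns is null go to the error table
null_condition = F.col("ID").isNull() | F.col("Name").isNull() | F.col("Contact").isNull()
error_rows = df.filter(null_condition)
clean_rows = df.filter(~null_condition)

# Placeholder JDBC details for the SQL Server instance on the VM
jdbc_url = "jdbc:sqlserver://<vm-host>:1433;databaseName=<db>"
props = {"user": "<user>", "password": "<password>"}

error_rows.write.jdbc(jdbc_url, "dbo.ErrorTable", mode="append", properties=props)
clean_rows.write.jdbc(jdbc_url, "dbo.MasterTable", mode="append", properties=props)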
