Deleting rows in Azure Data Flow

I am trying to clean a data frame in Azure Data Flow using an Alter Row operation. I have created a blob linked service with a CSV file (5 columns), then created a data flow as follows; please refer to the attached image.
As you can see in the third image, AlterRow1 still contains zero columns; it is not picking up columns from the source file. Can anyone tell me why this is happening?

As Mark Kromer suggests, you can delete your AlterRow1 and add a new Alter Row. If that doesn't work, try doing this in a different browser or clear your browser cache. It looks the same as this question.

Related

Add file name to Copy activity in Azure Data Factory

I want to copy data from a CSV file (source) in Blob storage to an Azure SQL Database table (sink) via a regular Copy activity, but I also want to copy the file name alongside every entry in the table. I am new to ADF, so the solution is probably easy, but I have not been able to find the answer in the documentation or on the internet so far.
My mapping currently looks like this (I have created an output table with a file name column, but this data is not defined at the column level in the CSV file, so I need to extract it from the metadata and pair it with the column):
At first I thought I would put dynamic content in there and solve the problem that way, but there is no option to use dynamic content in each individual box, so I do not know how to implement that. My next thought was to use a pre-copy script, but I have not seen how I could use it for this purpose. What is the best way to solve this issue?
In the mapping columns of a Copy activity you cannot add dynamic content from the Get Metadata activity.
First give the source CSV dataset to the Get Metadata activity, then connect it to the Copy activity as below.
You can add the file name column via Additional columns in the Copy activity source itself, by supplying dynamic content that references the Get Metadata activity (after giving it the same source CSV dataset):
@activity('Get Metadata1').output.itemName
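For reference, the relevant part of the Copy activity source in the pipeline JSON would look roughly like this (a sketch; the activity name Get Metadata1 and the column name FileName are just examples):

    "source": {
        "type": "DelimitedTextSource",
        "additionalColumns": [
            {
                "name": "FileName",
                "value": "@activity('Get Metadata1').output.itemName"
            }
        ]
    }

This adds a FileName column to every copied row without touching the column mapping.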
If you are sure about the data types of your data, there is no need to configure the mapping; you can execute your pipeline right away.
Here I am copying the contents of the samplecsv.csv file to a SQL table named output.
My output for your reference:

How to parse each row of an Excel file using Azure Data Factory

Here is my requirement:
I have an Excel file with a few columns and a few rows of data.
I have uploaded this Excel file to Azure Blob storage.
Using ADF, I need to read this Excel file, parse its records one by one, and create dynamic folders in Azure Blob storage.
This needs to be done for every record present in the Excel file.
Each record in the Excel file has information that will help me create the folders dynamically.
Could someone help me choose the right set of activities or data flows in ADF to do this?
Thanks in advance!
This is my Excel file as a Source.
I have created folders in Blob storage based on the Country column.
I have selected the Data Flow activity.
As shown in the screenshot below, go to the Optimize tab of the sink configuration.
Select the partition option Set Partitioning.
Set the partition type to Key.
Set Unique value per partition to the Country column.
Now run the pipeline.
Expected output:
Inside these folders you will get files with corresponding data.
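For example, if the Country column contains the values India and USA, the sink produces a layout roughly like this (container and file names are illustrative; the part files are named by the Spark runtime):

    output-container/
        India/
            part-00000-<guid>.csv
        USA/
            part-00000-<guid>.csv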

How to upload images in OneDrive (Excel) to Power Apps?

I am trying to create an app with a list of items and their images. I have saved the images in a folder in OneDrive and created an Excel file in OneDrive with four columns (product id, product name, dimensions, and image path); please check the image below. However, when I upload the data and set the data source, the image is not displayed in the layout. Is it because the path format is not correct? Please check the image below and let me know where I am going wrong. I need the image to be displayed; I have checked many forums but could not find a solution.
Please check the Power Apps screenshot below; as you can see, I have given the correct data source.
Per my test, when we connect the data source to an Excel table in OneDrive, make sure the Image value is mapped to the related Excel table column as shown below:
More information for your reference:
https://learn.microsoft.com/en-us/powerapps/maker/canvas-apps/add-images-pictures-audio-video#add-images-from-the-cloud-to-your-app
Update:
To avoid errors, you can copy the direct link to the image from the details panel:

Issue with CSV as a source in Data Factory

I have a CSV file:
"Heading","Heading","Heading",LF
"Data1","Data2","Data3",LF
"Data4","Data5","Data6",LF
For the above CSV, the row delimiter is LF.
The issue is the trailing comma: each row parses to four columns, the last of which is empty. When I try to preview data after setting the first row as header and skip rows to 0 in the source of a Copy activity in Data Factory, it throws an error stating that the last column is null.
If I remove the last comma, i.e.
"Heading","Heading","Heading"LF
"Data1","Data2","Data3"LF
"Data4","Data5","Data6"LF
It works fine.
It's not possible to edit the CSV files manually, as each one may contain 500k records.
How can I solve this?
Additional details:
The CSV I am uploading:
My Azure portal settings:
Error message on preview data:
If I remove the first row as header, I can see an empty column.
Please try setting the row delimiter to Line Feed (\n).
I tested your sample CSV file and it works fine.
Output:
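If you author the dataset JSON directly, the equivalent delimited-text settings would look roughly like this (a sketch using the classic TextFormat properties; a newer DelimitedText dataset exposes the same options under slightly different names):

    "format": {
        "type": "TextFormat",
        "columnDelimiter": ",",
        "rowDelimiter": "\n",
        "firstRowAsHeader": true
    }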
I tried to create the same file as you and reproduced your issue. It seems to be a check mechanism of ADF. You need to clear the "first row as header" selection to escape this check. If you do not want to do that, you have to preprocess your CSV files.
I suggest the two workarounds below.
1. Use an Azure Function with an HTTP trigger. You could pass the CSV file name as a parameter into the Azure Function, then use the Azure Blob Storage SDK to process your CSV file and cut the trailing comma.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook
2. Use Azure Stream Analytics. You could configure your Blob storage as input and another container as output, then use a SQL query to process your CSV data.
https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-quick-create-portal

Azure Data Pipeline Copy Activity loses column names when copying from SAP HANA to Data Lake Store

I am trying to copy data from SAP HANA to Azure Data Lake Store (DLS) using a Copy activity in a data pipeline via Azure Data Factory.
Our copy activity runs fine and we can see that rows made it from HANA to the DLS, but they don't appear to have column names (instead they are just given 0-indexed numbers).
This link says, “For structured data sources, specify the structure section only if you want to map source columns to sink columns, and their names are not the same.”
We are fine using the original column names from the SAP HANA table, so it seems we shouldn't need to specify the structure section in our dataset. However, even when we do, we still see only numbers for column names.
We have also seen the translator property at this link, but are not sure if that is the route we need to go.
Can anyone tell me why we aren't seeing the original column names copied into DLS and how we can change that? Thank you!
UPDATE
Setting the firstRowAsHeader property of the format section on our dataset to true basically solved the problem. The console still shows the numerical indices, but now includes the headers we are after as the first row. Upon downloading and opening the file, we can see the numbers are not there (the console just shows them for whatever reason), and it is a standard comma-delimited file with a header row and one row entry per line.
Example:
COLUMNA,COLUMNB
aVal1,bVal1
aVal2,bVal2
We can now tell our sources and sinks to write and expect this format when reading.
BONUS UPDATE:
To get rid of the numerical indices and see the proper column headers in the console, click Format in the top-left corner, and then check the "First row is a header" box toward the bottom of the resulting blade.
See the update above.
The format.firstRowAsHeader property needed to be set to true.
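For reference, the relevant part of the dataset JSON would look roughly like this (a sketch; the folder path is illustrative):

    "typeProperties": {
        "folderPath": "datalake/output",
        "format": {
            "type": "TextFormat",
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }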
