Azure Stream Analytics output headers case sensitivity?

I have this query in my job:
SELECT
[SomeChar]
,[SomeInt]
INTO [OutCsv]
FROM [InCsv]
The output goes to Azure Blob storage. My CSV output displays the column headers as
somechar, someint
But I need,
SomeChar, SomeInt
which is what is in the input CSV as well.
I even tried the JSON event serialization format for the output, but it doesn't preserve the case of the field names either. I want to preserve the casing throughout my application. Is there a way in Azure Stream Analytics to force this?

This is a known limitation currently: ASA lowercases column names (except in SELECT * queries). ASA will expose a configuration option to preserve case sensitivity in the future.
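Until such an option exists, one workaround is to rename the columns back after the fact when the blob output is read downstream. A minimal sketch in Python, assuming the output is read back with pandas and that the file names below are placeholders for your own:

import pandas as pd

# Map the lower-cased headers ASA writes back to the casing the application expects.
rename_map = {"somechar": "SomeChar", "someint": "SomeInt"}

df = pd.read_csv("outcsv.csv")            # hypothetical path to the blob output
df = df.rename(columns=rename_map)
df.to_csv("outcsv_fixed.csv", index=False)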

Related

Copy CSV File with Multiline Attribute with Azure Synapse Pipeline

I have a CSV file in the following format which I want to copy from an external share to my data lake:
Test; Text
"1"; "This is a text
which goes on on a second line
and on on a third line"
"2"; "Another Test"
I now want to load it with a Copy Data task in an Azure Synapse pipeline. The result is the following:
Test; Text
"1";" \"This is a text"
"which goes on on a second line";
"and on on a third line\"";
"2";" \"Another Test\""
So, you see, it is not handling the multi-line text correctly. I also do not see an option to handle multiline text within a Copy Data task. Unfortunately I'm not able to use a Data Flow task, because it does not allow running on the external Azure runtime that I'm forced to use for security reasons.
And of course I'm not talking about this single test file; I have thousands of files.
My settings for the CSV file are as follows:
CSV Connector Settings
Can someone tell me how to handle this kind of multiline data correctly?
Do I have any other options within Synapse (apart from Data Flows)?
Thanks a lot for your help
Well, it turns out this is not possible with a CSV file.
The pragmatic solution is to transfer the CSV files as "binary" datasets instead, and only load and transform them later with a Python notebook in Synapse.
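As a rough sketch of that second step (not the exact notebook), a standard CSV parser handles quoted values that span multiple physical lines, so something like the following pandas call should read the sample file correctly once it has been copied as-is; the path is a placeholder:

import pandas as pd

# Quoted fields may contain line breaks; the parser keeps them in one record.
df = pd.read_csv(
    "raw/test.csv",          # placeholder path to the file copied as binary
    sep=";",
    quotechar='"',
    skipinitialspace=True,   # sample data has a space between ';' and the opening quote
)
print(df)                    # 2 rows: the multiline text stays in one cell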
You can achieve this in Azure Data Factory by iterating through all lines and checking for the delimiter in each line, then using string manipulation functions with Set Variable activities to convert the multi-line data to single lines.
Look at the following example. I have a Set Variable activity that initializes the req variable to an empty value (taken from a parameter).
In a Lookup activity, create a dataset with the following configuration pointing to the multiline CSV:
In a ForEach activity, I iterate over each row by setting the items value to @range(0,sub(activity('Lookup1').output.count,1)). Inside the ForEach, I have an If activity with the following condition:
@contains(activity('Lookup1').output.value[item()]['Prop_0'],';')
If this is true, then I concatenate the current row onto the req variable using two Set Variable activities.
temp: @if(contains(activity('Lookup1').output.value[add(item(),1)]['Prop_0'],';'),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],decodeUriComponent('%0D%0A')),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' '))
actual (req variable): @variables('val')
For the false case, I have handled the concatenation in the following way:
temp1: @concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' ')
actual1 (req variable): @variables('val2')
Now, I have used a final variable to handle the last line of the file, with the following dynamic content:
@if(contains(last(activity('Lookup1').output.value)['Prop_0'],';'),concat(variables('req'),decodeUriComponent('%0D%0A'),last(activity('Lookup1').output.value)['Prop_0']),concat(variables('req'),last(activity('Lookup1').output.value)['Prop_0']))
Finally, I have taken a Copy Data activity with a sample source file with 1 column and 1 row (using this to copy our actual data).
Now, set the source file configuration as shown below:
Create an additional column whose value is the final variable's value:
Create a sink with the following configuration and select mapping only for the above created column:
When I run the pipeline, I get the data as required. The following is an output image for reference.
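To make the expressions above easier to follow, here is a hedged Python sketch of the same merging idea (not part of the pipeline itself): lines that contain the delimiter start a record, and continuation lines are glued on with a space.

lines = [
    'Test; Text',
    '"1"; "This is a text',
    'which goes on on a second line',
    'and on on a third line"',
    '"2"; "Another Test"',
]

req = ""
for i, line in enumerate(lines[:-1]):
    if ";" in line and ";" in lines[i + 1]:
        req += line + "\r\n"     # complete record: close it with a newline
    else:
        req += line + " "        # record continues on the next physical line
req += ("\r\n" + lines[-1]) if ";" in lines[-1] else lines[-1]

print(req)   # each logical record is now on a single line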

Azure Data Factory to create an empty csv file

I have a requirement where I want to check if a file exists in a folder. If the file exists then I want to skip it, else I want to create an empty CSV file.
One way I have tried: using a Get Metadata activity and then a Copy activity, I am able to move an empty file to the destination.
Is there a better way of creating an empty CSV file on the fly, without using a Copy activity?
Yes, there is another approach, but it involves coding.
You can create an Azure Function in your preferred language and trigger it using the Azure Data Factory Azure Function activity.
The Azure Function activity allows you to run Azure Functions in an Azure Data Factory or Synapse pipeline. To run an Azure Function, you must create a linked service connection. Then you can use the linked service with an activity that specifies the Azure Function that you plan to execute.
In the Azure Function, you can access the directory where you want to check file availability, and you can also create/delete/update the CSV files based on a schema.
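For illustration only, a minimal Python Azure Function along these lines might look as follows. The container/file parameters, the STORAGE_CONNECTION app setting, and the header columns are assumptions rather than fixed names; the blob calls are from the azure-storage-blob SDK.

import os
import azure.functions as func
from azure.storage.blob import BlobServiceClient

def main(req: func.HttpRequest) -> func.HttpResponse:
    container = req.params.get("container", "landing")        # assumed parameter names
    blob_name = req.params.get("file", "placeholder.csv")
    header = "CreatedDate,EmployeeId,FirstName,LastName,PostalCode\n"  # header row only

    service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION"])
    blob = service.get_blob_client(container=container, blob=blob_name)

    if not blob.exists():                  # only create the file when it is missing
        blob.upload_blob(header)           # "empty" CSV: headers, no data rows
        return func.HttpResponse(f"Created {blob_name}", status_code=201)
    return func.HttpResponse(f"{blob_name} already exists", status_code=200)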
You can use a Copy activity. Source: write an in-line SQL query that includes all the columns as NULL and a WHERE clause that will produce no data. Note that no tables are involved, and you can issue the query against any SQL source, including serverless. Target: the CSV file, with headers included. Use an If activity to create the file only if the Get Metadata activity shows the file is absent.
Example query:
SELECT
CAST( NULL AS DATE) AS CreatedDate
, CAST( NULL AS INT) AS EmployeeId
, CAST( NULL AS VARCHAR(20)) AS FirstName
, CAST( NULL AS VARCHAR(50)) AS LastName
, CAST( NULL AS CHAR(20)) AS PostalCode
WHERE
1=2;

Azure Data Factory removing spaces from column names of csv file

I'm a bit new to Azure Data Factory, so apologies if I'm missing anything obvious. I've done several searches and I can't find anything that quite fits.
The situation is that we have an existing pipeline that takes the path to a CSV file and passes it in as a delimited dataset. As a sink it uses a Parquet dataset. This is a generic process that we can pass any delimited file into and it will output it as Parquet.
This has been working well, but now we have started receiving files with spaces and special characters in the header, which causes the output to Parquet to fail. Unfortunately we don't have control over the format of the files we receive, so I can't handle this at the source.
What I would like to do is, on ingestion of the file, replace any spaces and other special characters in the header with an underscore. If I were doing this on premises I could quickly create a PowerShell script to do it. I had thought about creating a custom task in ADF to call a PowerShell script to do this in blob storage, but that seems more complicated than it should be. Is there something else I can do to get this process working while keeping it generic?
As @Joel Cochran mentioned, you can use the below expression in a Select transformation to replace spaces and special characters in the header.
regexReplace($$,'[^a-zA-Z]','_')
Source:
In the Select transformation, remove the auto mappings and add a new rule-based mapping that uses this expression.
preview:
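Note that the pattern [^a-zA-Z] also replaces digits; use [^a-zA-Z0-9] if you want to keep them. As a quick illustration of what the substitution does to a header (outside of Data Factory), here is a small Python sketch with made-up header names:

import re

headers = ["First Name", "Postal Code #", "Order-Date"]        # made-up incoming headers
cleaned = [re.sub(r"[^a-zA-Z]", "_", h) for h in headers]
print(cleaned)   # ['First_Name', 'Postal_Code__', 'Order_Date']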
You cannot change the output filename directly in the Copy activity, assuming you are using this activity.
The workaround is to use a parameter for the output filename that you can clean up.
You can use the Get Metadata activity to get all filenames from the source CSV files.
Then loop over these files with a ForEach activity.
Within the ForEach activity you can set the output filename to the cleaned-up value.
The function could look like this:
@replace(item().name, ' ', '_')
More information on the replace function

Azure Data Factory V2 Copy Activity file filters

I am using Data Factory v2 and I currently have a simple Copy activity which copies files from an FTP server to blob storage. The file names on this server are of the following form:
File_{Year}{Month}{Day}.zip
In order to download the most recent file I add this filter to my input dataset JSON file:
"fileName": {
"value": "#concat('File_',formatDateTime(utcnow(), 'yyyyMMdd'), '.zip')",
"type": "Expression"
}
I now want to be able to download yesterday's file which is possible using adddays().
However I would like to be able to do this in the same Copy activity, and it seems that Data Factory v2 does not allow me to use the following kind of "or" logic:
@concat('File_', formatDateTime(utcnow(), 'yyyyMMdd'), '.zip') || @concat('File_', formatDateTime(adddays(utcnow(), -1), 'yyyyMMdd'), '.zip')
Is this possible, or do I need a separate activity?
It would seem strange to need a second activity, since a Copy activity can only take a single input; but if the filename filter is simple enough, multiple files are treated as a single input, and if not, multiple files are treated as multiple inputs.
The '||' won't work since it will be evaluated as a single string.
But I can provide two solutions for this.
1. Use a daily tumbling window trigger and set the start time to yesterday, so it triggers two pipeline runs.
2. Use a ForEach activity + Copy activity. The ForEach activity iterates an array that passes yesterday's and today's filenames to the Copy activity (sketched below).
By the way, you could just use a string interpolation expression instead of concat. They are equivalent:
File_@{formatDateTime(utcnow(), 'yyyyMMdd')}.zip
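If you go with the ForEach option, the items array only needs today's and yesterday's names. A hedged Python sketch of the two values being produced (the ADF equivalent would build the same strings with utcnow(), adddays() and formatDateTime()):

from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
file_names = [f"File_{(now - timedelta(days=d)):%Y%m%d}.zip" for d in (0, 1)]
print(file_names)   # e.g. ['File_20240102.zip', 'File_20240101.zip']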
I would suggest you read about the Get Metadata activity. This can be helpful in your scenario, I think.
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-get-metadata-activity
It has an itemName property and a lastModified property; check it out.

Stream Analytics - not seeing the output

I use Stream Analytics to save the Event Hub data into a SQL database.
Even though I can see that I have both input and output requests, when I write a query to see the data from the output table I see just 200 empty rows. So data is being sent to this table, but it is just NULL values.
I think the problem may be the query between input and output, because my output table is empty. This is how I wrote it:
SELECT id,sensor,val FROM EventHubInput
Could there be another problem?
I have to mention that my Event Hub is the link between a Meshlium device and Azure. This is why I think my problem could also be with the frame I send from Meshlium.
I really don't know what to do. Help?!
You haven't specified any output. Add an INTO clause:
SELECT id,sensor,val
INTO YourSQLOutput
FROM EventHubInput
A Stream Analytics query's default output alias is "output".
So if your SQL DB output alias is SQLDbOutput, it won't work. You should specify it yourself:
SELECT id,sensor,val
INTO SQLDbOutput
FROM EventHubInput
The editor in Azure should tell you the names of your inputs and outputs on the left.
Also make sure your events in Event Hub contain those properties (id, sensor, val), and that the SQL DB contains columns with the same names.
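For reference, a minimal Python sketch of publishing a test event with those property names to the Event Hub, using the azure-eventhub SDK; the connection string and hub name are placeholders:

import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    "<event-hub-connection-string>",          # placeholder
    eventhub_name="<your-event-hub>",         # placeholder
)

event = {"id": 1, "sensor": "temperature", "val": 21.5}   # names must match the query
batch = producer.create_batch()
batch.add(EventData(json.dumps(event)))       # send JSON so ASA can read the fields
producer.send_batch(batch)
producer.close()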
