How to make aliases uppercase in Stream Analytics? - Azure

I have a simple JSON message that I receive from a device; this is the message:
{"A":3,"B":4}
I also set a query in the Stream Analytics job to send the data to Power BI; this is the query:
SELECT * INTO [OutputBI] FROM [Input] WHERE deviceId='device1'
When I check the dataset in Power BI, the column names are uppercase |A|B|, but when I use aliases in the query the columns change to lowercase |a|b|. This is the new query:
SELECT v1 as A, v2 as B INTO [OutputBI] FROM [Input] WHERE deviceId='device1'
The reason I changed the query is that the variable names in the message were changed: A->v1, B->v2.
My question is: is there any way to keep the aliases uppercase in the output of the job (Power BI in this case)?
The problem is in the Power BI dataset: the first dataset recognized the column names in uppercase, and when the query was changed the column names became lowercase. This is a problem because, with the dataset changed, the reports in Power BI will not work and I would have to build them again.

In the Configure section of the Stream Analytics job pane, selecting Compatibility level and changing it to 1.1 should solve the problem.
Under this compatibility level, case sensitivity is persisted for field names when they are processed by the Azure Stream Analytics engine. However, persisting case sensitivity isn't yet available for ASA jobs hosted in an Edge environment.
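For example (a minimal sketch reusing the query from the question), the same statement behaves differently depending on the compatibility level:
SELECT v1 AS A, v2 AS B INTO [OutputBI] FROM [Input] WHERE deviceId='device1'
-- compatibility level 1.0: Power BI receives columns a, b
-- compatibility level 1.1: Power BI receives columns A, B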

You could create a calculated column in Power BI using the UPPER function. For example, Col2 = UPPER(Column1)
You can also do this in the Query Editor / M query using Text.Upper. Alternatively, I'm pretty sure there is a way to do it in the GUI.
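For instance, a minimal Power Query step (assuming the previous step in your query is named Source; adjust to your own step name) that upper-cases every column name:
= Table.TransformColumnNames(Source, Text.Upper)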

Related

A chart with a single number using Kusto Query Language

I have a simple Kusto request, something like the following:
customMetrics
| where timestamp > ago(10m)
| where name == "Custom metric number one"
| summarize sum(value)
Obviously, the result of this query is a single number.
I would like to pin this request to a dashboard, so the tile looks like a card with a title/subtitle and the number returned by the Kusto request. First I tried the "render" operator, but it can draw either a chart or a simple unformatted table. I tried "render card", but Application Insights answered that "We currently don't support 'card' visualization type."
Is there any other way to create the desired tile with a single number on it?
Why not just pin the table query result:
customMetrics
| where timestamp > ago(10m)
| where name == "Custom metric number one"
| summarize sum(value)
This results in a single-row table containing just the summed value, which is probably the closest thing to a card you can get at the moment.
There is another option as well: you can add a Markdown tile. It can point to a URL containing Markdown content, so you might be able to create something that periodically updates a Markdown file and shows it on the dashboard. You could leverage the Application Insights API to get the value you want and have an Azure Function generate the Markdown.
Another option, if you have access to Power BI, is to create a Power BI report that you share with external stakeholders/non-developers. Going that direction, you can use all the rich visuals Power BI provides in combination with data from Application Insights, including cards. See the docs.
Or use Grafana? There's a managed instance in Azure, albeit still in preview.

LogicApp that returns newly generated ID back to original source

Hello, I am trying to create a Logic App that first:
Extracts data from CosmosDB, using a query
Loops over the results
Pushes the results data into CRM
Side note: once this data is pushed into CRM, CRM automatically generates an ID for each record.
My biggest dilemma is figuring out how I can return the newly generated ID back to the original Cosmos DB container it was pulled from.
I have started on this and these are my questions:
Does this first part look correct in what I am doing? (I must use a specific SQL query for this.)
a. I am simply telling the Logic App to extract data from a particular container inside CosmosDB
b. Then I would like to loop over the results I just obtained, and push this to CRM
c. My dilemma is:
Once data is pushed to CRM, CRM automatically generates an ID for each record that is uploaded. How would I then return the updated ID to Cosmos DB?
Should I create a variable that stores the IDs, then replace the old IDs with the new ones?
I am not sure how to construct/write this logic within LogicApps and have been researching examples of this.
Any help is greatly appreciated.
Thanks
If the call to your CRM system returns the ID you are talking about, then I would just add one additional action in your loop in the Azure Logic App to update the record you read from Azure Cosmos DB. Given that you are doing a SELECT * from the container, you should have the whole original document.
Add the 'Create or update document' action as a step with a reference to the THFeature container along with your Database ID and then provide the new values for the document. I pasted an example below.
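As a rough sketch of such a document body (all field names here are hypothetical): keep the original document's id and partition key, copy the other fields over unchanged, and add one new field holding the ID that CRM returned:
{
  "id": "feature-001",
  "featureType": "boundary",
  "name": "unchanged original value",
  "crmId": "the ID returned by CRM for this record"
}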
BTW, your SELECT query looks strange; you should avoid slow cross-partition queries if you can.
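For example, if the THFeature container happened to be partitioned on a field like featureType (hypothetical here), scoping the query to one partition key value avoids a cross-partition fan-out:
SELECT * FROM c WHERE c.featureType = 'boundary'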

Mapping columns from JSON in an Azure SQL Data Flow task

I am attempting a simple SELECT action on a Source JSON dataset in an Azure Data Factory data flow, but I am getting an error message that none of the columns from my source are valid. I use the exact configuration shown in the video, except that instead of a CSV file I use a JSON file.
In the video, at 1:12, you can see that after configuring the source dataset, the source projection shows all of the columns from the source schema. Below is a screen shot from the tutorial video:
And below is a screen shot from my attempt:
(I blurred the column names because they match column names from a vendor app)
Note that in my projection I am unable to modify the data types or the format. I'm not sure why, but I don't need to modify either, so I moved on. I did try with a CSV and I was able to modify the data types. I'm assuming this is a JSON thing, but I'm noting it here just in case there is some configuration I should take a look at.
At 6:48 in the video, you'll see the user add a select task, exactly as I have done. Below is a screen shot of the select task in the tutorial immediately following adding the task:
Notice the source columns all appear. Below is a screen shot of my select task:
I'm curious why the column names are missing. If I type them in manually, I get a "Column not found" error.
For reference, below are screen shots of my data source setup. I'm using a Data Lake Storage Gen2 linked service connected via managed identity and the AutoResolveIntegrationRuntime.
Note that I tried to do this with a CSV as well. I was able to edit the data type and format on the CSV, but I get the same "column not found" error in the next step.
Try doing this in a different browser or clear your browser cache. It may just be a formatting thing in the auto-generated JSON. This has happened to me before.

No data is appearing in SSMS even though my job is running without errors

Problem: No data is appearing in SSMS (SQL Server Management Studio)
I don't see any errors appearing and my job diagram successfully shows a process from input to output.
I'm trying to use the continuous export feature of Azure Application Insights, Stream Analytics, and SQL Database.
Here is my query:
SELECT
A.context.data.eventTime as eventTime,
A.context.device.type as deviceType,
A.context.[user].anonId as userId,
A.context.device.roleInstance as machineName
INTO DevUserlgnsOutput -- Output Name
FROM devUserlgnsStreamInput A -- Input Name
I tested the query with sample data using the output box below the query, and it returned what I expected, so I don't think the query itself is the issue.
I also know that the custom events I'm trying to display the attributes of have occurred since I began the job. My job is also still running and has not stopped since its creation.
In addition, I would like to point out that the monitoring graph on the stream analytics page detects 0 inputs, 0 outputs, and 0 runtime errors.
Thank you in advance for the help!
Below are some pictures that might help:
Stream Analytics Output Details
The Empty SSMS after I clicked "display top 1000 rows," which should be filled with data
No input events, output events, or runtime errors for the stream analytics job
I've had this issue twice, with two separate Application Insights resources, containers, jobs, etc. Both times I solved it by editing the path pattern of my job's input(s).
To navigate to the necessary blade to make the following changes:
1) Click on your stream analytics job
2) Click "inputs" under the "job topology" section of the blade
3) Click your input (if multiple inputs, do this to 1 at a time)
4) Use the blade that pops up on the right side of the screen
The 4 potential solutions I've come across are (A-D below):
A. Making sure the path pattern you enter is plain text with no hidden characters (sometimes copying it from the container on Azure made it not plain text).
Steps:
1) Cut the path pattern you have already in the input blade
2) Paste it into Notepad and re-copy it
3) Re-paste it into the path pattern slot of your input
B. Append your path pattern with /{date}/{time}
Simply type this at the end of your path pattern in the blade's textbox (there is a combined example of B and C after item D)
C. Remove the container name and the "/" that immediately follows it from the beginning of your path pattern (see picture below)
Edit path pattern
Should be self-explanatory after seeing the pic.
D. Changing the date format of your input to YYYY-MM-DD in the drop-down box.
Should also be self-explanatory (look at the above picture if not).
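For example (the container and export folder names here are hypothetical), applying B and C together would turn a path pattern like
mycontainer/myappinsights_12345/Event/
into
myappinsights_12345/Event/{date}/{time}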
Hope this helps!!

Azure Stream Analytics output not showing in Power BI

I am using Power BI to visualise Stream Analytics output; however, after adding a new output in Azure and starting the job, it still does not appear as a dataset in Power BI.
What do I need to do to ensure it shows up?
This was caused by the query producing no output.
When running the query using the test button, 0 rows were returned.
The solution was to modify the query so that it returns data:
SELECT persn_id, persn_name, Date, COUNT(persn_id) AS Countperson
INTO personBI                -- the output name for Power BI
FROM personEventHubInput     -- the input name from Inputs
GROUP BY persn_id, persn_name, Date, TumblingWindow(ss, 2)
