How to show data properly in Office Excel using the Power Query Editor?

I have the JSON output below from an API; in Office Excel I am importing the data via Web from that API.
[{
    "level": 1,
    "children": [{
        "level": 2,
        "children": [{
            "level": 3,
            "name": "Chandni Chowk",
            "data": ["Data 1", "Data 2"]
        }],
        "name": "Delhi",
        "data": ["Delhi Area"]
    }],
    "name": "Country",
    "data": ["India", "Bangladesh"]
}]
https://learn.microsoft.com/en-us/powerquery-m/quick-tour-of-the-power-query-m-formula-language
I have gone through the documentation above.
let
    Source = Json.Document(Web.Contents("MY API URL GOES HERE")),
    AsTable = Table.FromRecords(Source)
    ----
    ----
in
    #"Renamed Column2"
This is what I have in the Power Query editor for now.
As output in the Excel file, I need something like this:
Country      Delhi        Chandni Chowk
India        Delhi Area   Data 1
Bangladesh                Data 2
Can I get this output from this JSON, or do I need to change my JSON output format to something that matches Power Query?

Power Query interprets JSON as a hierarchy of records and lists. My goal is to flatten the JSON into a record like this and then convert it into a table:
Country : {"India", "Bangladesh"}
Delhi : {"Delhi Area"}
Chandni Chowk : {"Data 1", "Data 2"}
At any particular level, we can pull the name and data value using Record.FromList:
Record.FromList({CurrentLevel[data]}, {CurrentLevel[name]})
For the first level, this is
Record.FromList({{"India","Bangladesh"}}, {"Country"})
which corresponds to the first field in the goal record.
At any level, we can navigate to the next level like this:
NextLevel = CurrentLevel[children]{0}
Using these two building blocks, we can now write a custom function Expand to flatten the record:
1 | (R as record) as record =>
2 | let
3 |     ThisLevel = Record.FromList({R[data]}, {R[name]}),
4 |     CombLevel = if Record.HasFields(R, {"children"})
5 |                 then Record.Combine({ThisLevel, @Expand(R[children]{0})})
6 |                 else ThisLevel
7 | in
8 |     CombLevel
Line 1: The syntax for defining a function. It takes a record R and returns a record after doing some transformations.
Line 3: How to deal with the current level, as mentioned earlier.
Line 4: Check if the record has another level to expand down to.
Line 5: If it does, then Record.Combine the result of the current level with the result of the next level, where the result of the next level is calculated by navigating to the next level and recursively applying the function we're defining (the @ prefix is how a query refers to itself from inside its own definition). With three levels this looks like:
Record.Combine({Level1, Record.Combine({Level2, Level3})})
Line 6: Recursion stops when there are no more levels to expand. No more combinations, just the last level is returned.
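For the sample JSON above, applying Expand to the first (and only) top-level record returns exactly the goal record from earlier. Note the #"Chandni Chowk" quoted-identifier syntax, which M requires for field names containing spaces:
Expand(Source{0})
// returns:
[
    Country = {"India", "Bangladesh"},
    Delhi = {"Delhi Area"},
    #"Chandni Chowk" = {"Data 1", "Data 2"}
]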
All that's left is to transform it into the shape we want. Here's what my query looks like using the Expand function we just defined:
let
    Source = Json.Document( < JSON Source > ),
    ExpandRecord = Expand(Source{0}),
    ToTable = Table.FromColumns(
        Record.FieldValues(ExpandRecord),
        Record.FieldNames(ExpandRecord)
    )
in
    ToTable
This uses Record.FieldValues and Record.FieldNames as the arguments to Table.FromColumns. Since the field lists have different lengths, Table.FromColumns pads the shorter Delhi column with null, which shows up as the blank cell in the output.
If you select a list cell after the Expand step, the query editor previews that list's contents. The final ToTable step returns exactly the table you asked for.

Related

How to Create Azure Resource Graph Explorer Scheduled Reports and Email Alerts

I have a Kusto query taken from this example that looks like this:
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend vmPowerState = tostring(properties.extended.instanceView.powerState.code)
| summarize count() by vmPowerState
I would like to create a weekly alert that sends the result by e-mail as a CSV file.
The Logic App is organized in 5 steps:
One: a Recurrence trigger that fires weekly.
Two: an HTTP action that calls the Resource Graph REST API, with
URL: https://management.azure.com/providers/Microsoft.ResourceGraph/resources
Body:
{
"query": "Resources | where type =~ 'microsoft.compute/virtualmachines' | extend vmPowerState = tostring(properties.extended.instanceView.powerState.code) | summarize count() by vmPowerState"
}
Three: a Parse JSON action, where I parse the Body and give an extract of the JSON schema:
{
    "count": 3,
    "data": [
        {
            "count_": 3,
            "vmPowerState": "PowerState/stopped"
        },
        {
            "count_": 29,
            "vmPowerState": "PowerState/deallocated"
        },
        {
            "count_": 118,
            "vmPowerState": "PowerState/running"
        }
    ],
    "skip_token": null,
    "total_records": 3
}
Here I have a few doubts, because I found a guide that says I should use an array variable instead. I'm not sure about that, because I cannot see the details in the example. Anyway, this is what I do:
Four: a Create CSV table action.
Five: a Send an email action, where I create the attachment from the CSV.
The e-mail arrives in the end, but the attachment is not a CSV; it's a JSON file.
What the heck am I doing wrong?
If you want to use "Create CSV table" with Columns set to "Automatic", pass it the "body" of "Parse JSON".
You don't need to use the array variable, but whatever you pass in needs to return an array.
The body of the JSON parser in your example has many other JSON nodes enveloping the array, so you should see the option "data", as there is an array in there called "data". If you want to cut it short, try "data".
You can also change Columns to "Custom". That allows you to remove redundant data or format data (like the "PowerState" prefix in "PowerState/stopped").
Finally, you can add the .csv extension to the file name, as sketched below.
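For illustration, the custom columns might be configured like this (a sketch using standard Logic Apps workflow expression functions; the header names are arbitrary):
Header: PowerState    Value: @last(split(item()?['vmPowerState'], '/'))
Header: Count         Value: @item()?['count_']
and the attachment name can be an expression such as @concat('vm-power-state-', utcNow('yyyy-MM-dd'), '.csv').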
The above worked for me, but it can be enhanced.
The suggestion posted by @BrunoLucasAzure really helped me understand how Logic Apps works.
However, I would like to reply to my own question with the right solution: I had to paste a sample of the JSON output after pressing the button Use sample payload to generate schema in the Parse JSON action.
Then follow the workflow and everything will be fine.
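For the sample payload above, the generated schema should come out roughly like this (abridged sketch):
{
    "type": "object",
    "properties": {
        "count": { "type": "integer" },
        "data": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "count_": { "type": "integer" },
                    "vmPowerState": { "type": "string" }
                }
            }
        },
        "skip_token": {},
        "total_records": { "type": "integer" }
    }
}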
The next problem I need to fix is pagination, but apparently there is a solution for that too: https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/logic-app-http-pagination-deeper-look-build-custom-paging/ba-p/2907605

Tabulator - Sorting groups by group calcResults

Is it possible to order the results of grouped items by the calculation results for each group? It seems that when I set initialSort (or when I don't), it sorts by the order of the items within each group, rather than by the total calculation of each group.
For example, if I have data that looks something like this:
[
    {id:1, company:"company 1", quantity:"10"},
    {id:2, company:"company 1", quantity:"10"},
    {id:3, company:"company 1", quantity:"10"},
    {id:4, company:"company 2", quantity:"20"},
    {id:5, company:"company 2", quantity:"1"},
    {id:6, company:"company 2", quantity:"1"},
    {id:7, company:"company 3", quantity:"9"},
    {id:8, company:"company 3", quantity:"9"},
    {id:9, company:"company 3", quantity:"9"},
]
I would end up with groups ordered:
company 2: 22 // highest item qty 20
company 1: 30 // highest item qty 10
company 3: 27 // highest item qty 9
What I am trying to get is:
company 1: 30
company 3: 27
Company 2: 22
I can see the calculation results, but I'm not sure how to resort the groups, assuming it's possible. If anyone can point me in the right direction I will be quite grateful.
Rows can only be sorted according to a field stored in the row data.
Rows are sorted individually and then grouped, with groups appearing in the order of the sorted data.
In order to sort your table this way, you would need to analyse your row data before ingesting it into the table, and then set a field on each row with the desired order for the group that contains it (see the sketch below).
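As a minimal sketch of that approach, assuming the company/quantity data from the question (groupTotal is a hypothetical field name):
// Sum each company's quantity and stamp the total onto every row,
// so the table can sort rows (and therefore groups) by it.
function addGroupTotals(data) {
    const totals = {};
    data.forEach(row => {
        totals[row.company] = (totals[row.company] || 0) + Number(row.quantity);
    });
    return data.map(row => ({ ...row, groupTotal: totals[row.company] }));
}
Loading addGroupTotals(data) into the table and setting initialSort to [{column:"groupTotal", dir:"desc"}] would then order the groups by their totals.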
The other option is to manually specify the groups you want using the groupValues option. With this approach you specify the exact groups you want and the order in which they should appear:
groupValues:[["male", "female", "smizmar"]]
Thanks Oli for pointing me in the direction of groupValues. I ended up writing a function that uses the calcResults to get the sort order I want and then pushes the group keys into groupValues to get the ordering I'm looking for.
const sortByGroupCalc = (thisTable, thisSortField, thisSortDirection) => {
    // Temp arrays
    let tempCalcArray = [];
    let tempSortArray = [];
    // Get calculation results
    const calcResults = thisTable.getCalcResults();
    // Populate the array with [key, value] pairs from the calculation results
    for (const [key, value] of Object.entries(calcResults)) {
        tempCalcArray.push([key, value.top[thisSortField]]);
    }
    // Sort the pairs by value and direction, then collect the group keys
    tempCalcArray
        .sort(function (a, b) {
            if (thisSortDirection === 'asc') {
                return a[1] - b[1];
            } else {
                return b[1] - a[1];
            }
        })
        .forEach(x => tempSortArray.push(x[0]));
    // Set group order according to the sort order
    thisTable.setGroupValues([tempSortArray]);
};
I'm calling it from the dataSorted callback. It's not perfect for all occasions, I imagine, but it seems to do what I need.
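For reference, this is roughly how I wire it up (a sketch assuming Tabulator 5.x event syntax; the sorters argument carries the active sort field and direction):
table.on("dataSorted", function (sorters, rows) {
    // resort the groups whenever the user sorts a column
    if (sorters.length) {
        sortByGroupCalc(table, sorters[0].field, sorters[0].dir);
    }
});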
Thanks again.

Azure Application Insights | KQL | customDimensions column containing array of objects

We are using Azure Application Insights for error logging. I am new to KQL and trying to fetch custom properties from the built-in "customDimensions" column, in the following format.
The value as-is from the "customDimensions" column:
exceptions
| project customDimensions
{
    "File Name": "Sample File 1",
    "Correlation ID": "e33a8d45-1234-1234-1223-54a6fec30356",
    "Error List": "[
        {\"Function Name\":\"Sample Function 1\",\"Code\":\"12345\"},
        {\"Function Name\":\"Sample-Function-2\",\"Code\":\"12343\"}]"
}
Expected Output
File Name     | Correlation ID                       | Function Name     | Code
Sample File 1 | e33a8d45-1234-1234-1223-54a6fec30356 | Sample Function 1 | 12345
Sample File 1 | e33a8d45-1234-1234-1223-54a6fec30356 | Sample-Function-2 | 12343
How can I achieve the above output using KQL?
Thank You.
This might seem a little bit tricky, but bear with me :-)
Every sub-element extracted from a dynamic element is dynamic.
parse_json() / todynamic(), when given a dynamic argument, returns it as is.
So first we use tostring(), and only then todynamic(), so that the string gets parsed as JSON into a dynamic value.
datatable(ErrorDetails:dynamic)
[
dynamic({
"File Name":"Sample File 1",
"Correlation ID":"e33a8d45-0566-4bf2-94f8-54a6fec29bff",
"Error List":"[{\"Function Name\":\"Sample Function 1\",\"Code\":\"12345\"},{\"Function Name\":\"Sample-Function-2\",\"Code\":\"12343\"}]"
})
]
| mv-expand EL = todynamic(tostring(ErrorDetails["Error List"]))
| project ["File Name"] = ErrorDetails["File Name"], ["Correlation ID"] = ErrorDetails["Correlation ID"], ["Function Name"] = EL["Function Name"], ["Code"] = EL["Code"]
File Name     | Correlation ID                       | Function Name     | Code
Sample File 1 | e33a8d45-0566-4bf2-94f8-54a6fec29bff | Sample Function 1 | 12345
Sample File 1 | e33a8d45-0566-4bf2-94f8-54a6fec29bff | Sample-Function-2 | 12343
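Applied to the actual exceptions table from the question, the same pattern would look roughly like this (a sketch; customDimensions is already dynamic, but its "Error List" field is a string, so it needs the same tostring()/todynamic() treatment):
exceptions
| mv-expand EL = todynamic(tostring(customDimensions["Error List"]))
| project ["File Name"] = customDimensions["File Name"], ["Correlation ID"] = customDimensions["Correlation ID"], ["Function Name"] = EL["Function Name"], ["Code"] = EL["Code"]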

Azure Stream Processing upsert to DocumentDB with array

I'm using Azure Stream Analytics to copy my JSON over to DocumentDB, using upsert to overwrite the document with the latest data. This is great for my base data, but I would love to be able to append the list data, as unfortunately I can only send one list item at a time.
In the example below, the document is matched on id, and all items are updated, but I would like the "myList" array to keep growing with the "myList" data from each document (with the same id). Is this possible? Is there any other way to use Stream Analytics to update this list in the document?
I'd rather steer clear of using a tumbling window if possible, but is that an option that would work?
Sample documents:
{
    "id": "1234",
    "otherData": "example",
    "myList": [{"listitem": 1}]
}
{
    "id": "1234",
    "otherData": "example 2",
    "myList": [{"listitem": 2}]
}
Desired output:
{
    "id": "1234",
    "otherData": "example 2",
    "myList": [{"listitem": 1}, {"listitem": 2}]
}
My current query:
SELECT id, otherData, myList INTO [myoutput] FROM [myinput]
Currently arrays are not merged; this is the existing behavior of the DocumentDB output from ASA, also mentioned in this article. I doubt using a tumbling window would help here.
Note that changes in the values of array properties in your JSON document result in the entire array getting overwritten, i.e. the array is not merged.
You could transform the input array (myList) into individual elements using the GetArrayElements function.
Your query might look something like this:
SELECT i.id , i.otherData, listItemFromArray
INTO myoutput
FROM myinput i
CROSS APPLY GetArrayElements(i.myList) AS listItemFromArray
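Note that GetArrayElements exposes each element as a record with ArrayIndex and ArrayValue fields, so to project the item itself you would reference the ArrayValue (a sketch based on the documented behavior of the function):
SELECT i.id, i.otherData, listItemFromArray.ArrayValue.listitem AS listitem
INTO myoutput
FROM myinput i
CROSS APPLY GetArrayElements(i.myList) AS listItemFromArray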
cheers!

How do I keep existing data in couchbase and only update the new data without overwriting

So, say I have created some records/documents under a bucket, and the user updates only one column out of 10 in the RDBMS. I am trying to send only that one column's data and update it in Couchbase. But the problem is that Couchbase is overwriting the entire record and putting NULLs in the rest of the columns.
One approach is to copy all the data from the existing record after fetching it from Couchbase, and then overwrite the new column while copying the data from the old record. But that doesn't look like an optimal approach.
Any suggestions?
You can use N1QL UPDATE statements; Google "Couchbase N1QL" for the documentation.
UPDATE replaces a document that already exists with updated values.
update:
UPDATE keyspace-ref [use-keys-clause] [set-clause] [unset-clause] [where-clause] [limit-clause] [returning-clause]
set-clause:
SET path = expression [update-for] [ , path = expression [update-for] ]*
update-for:
FOR variable (IN | WITHIN) path (, variable (IN | WITHIN) path)* [WHEN condition ] END
unset-clause:
UNSET path [update-for] (, path [ update-for ])*
keyspace-ref: Specifies the keyspace for which to update the document.
You can add an optional namespace-name to the keyspace-name in this way:
namespace-name:keyspace-name
use-keys-clause: Specifies the keys of the data items to be updated. Optional. Keys can be any expression.
set-clause: Specifies the value for an attribute to be changed.
unset-clause: Removes the specified attribute from the document.
update-for: The UPDATE FOR clause uses the FOR statement to iterate over a nested array and SET or UNSET the given attribute for every matching element in the array.
where-clause: Specifies the condition that needs to be met for data to be updated. Optional.
limit-clause: Specifies the greatest number of objects that can be updated. This clause must have a non-negative integer as its upper bound. Optional.
returning-clause: Returns the data you updated as specified in the result_expression.
RBAC Privileges
The user executing the UPDATE statement must have the Query Update privilege on the target keyspace. If the statement has any clauses that need data to be read, such as a SELECT clause or a RETURNING clause, then the Query Select privilege is also required on the keyspaces referred to in the respective clauses. For more details about user roles, see Authorization.
For example,
To execute the following statement, the user must have the Query Update privilege on travel-sample.
UPDATE `travel-sample` SET foo = 5
To execute the following statement, the user must have the Query Update privilege on travel-sample and the Query Select privilege on beer-sample.
UPDATE `travel-sample`
SET foo = 9
WHERE city = (SELECT raw city FROM `beer-sample` WHERE type = "brewery")
To execute the following statement, user must have the Query Update privilege on `travel-sample` and Query Select privilege on `travel-sample`.
UPDATE `travel-sample`
SET city = "San Francisco"
WHERE lower(city) = "sanfrancisco"
RETURNING *
Example
The following statement changes the "type" of the product "odwalla-juice1" to "product-juice".
UPDATE product USE KEYS "odwalla-juice1" SET type = "product-juice" RETURNING product.type
"results": [
{
"type": "product-juice"
}
]
This statement removes the "type" attribute from the "product" keyspace for the document with the "odwalla-juice1" key.
UPDATE product USE KEYS "odwalla-juice1" UNSET type RETURNING product.*
"results": [
{
"productId": "odwalla-juice1",
"unitPrice": 5.4
}
]
This statement unsets the "gender" attribute in the "children" array for the document with the key "dave" in the tutorial keyspace.
UPDATE tutorial t USE KEYS "dave" UNSET c.gender FOR c IN children END RETURNING t
"results": [
{
"t": {
"age": 46,
"children": [
{
"age": 17,
"fname": "Aiden"
},
{
"age": 2,
"fname": "Bill"
}
],
"email": "dave#gmail.com",
"fname": "Dave",
"hobbies": [
"golf",
"surfing"
],
"lname": "Smith",
"relation": "friend",
"title": "Mr.",
"type": "contact"
}
}
]
Starting with version 4.5.1, the UPDATE statement has been improved to SET nested array elements. The FOR clause is enhanced to evaluate functions and expressions, and the new syntax supports multiple nested FOR expressions to access and update fields in nested arrays. Additional array levels are supported by chaining the FOR clauses.
Example
UPDATE default
SET i.subitems = ( ARRAY OBJECT_ADD(s, 'new', 'new_value' )
FOR s IN i.subitems END )
FOR s IN ARRAY_FLATTEN(ARRAY i.subitems
FOR i IN items END, 1) END;
If you're using structured (JSON) data, you need to read the existing record, update the field you want in your program's data structure, and then send the whole record up again. You can't update individual fields in the JSON structure without sending it all up again. There isn't a way around this that I'm aware of.
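As a rough sketch of that read-modify-write pattern using the Python SDK (2.x-era API; the bucket name, document id, and field are placeholders):
from couchbase.bucket import Bucket

# connect to the bucket (placeholder connection string)
cb = Bucket('couchbase://localhost/mybucket')

# fetch the whole document; the result carries a CAS value
rv = cb.get('docid')
doc = rv.value

# change just the one field locally
doc['column7'] = 'new value'

# write the whole document back; passing the CAS guards against
# a concurrent writer having changed it in the meantime
cb.replace('docid', doc, cas=rv.cas)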
It is indeed true that to update individual items in a JSON doc, you need to fetch the entire document and overwrite it.
We are working on adding individual item updates in the near future.
