I'm starting to learn Azure Logic Apps, and my first task is to store the result of a specific Kusto query run against the Azure Log Analytics API at https://api.loganalytics.io/v1/workspaces/{guid}/query.
Currently, I can successfully call the Log Analytics API using the HTTP action in a Logic App, and this is a sample of what it returns:
{
"tables": [
{
"name": "PrimaryResult",
"columns": [
{
"name": "UserPrincipalName",
"type": "string"
},
{
"name": "OperationName",
"type": "string"
},
{
"name": "ResultDescription",
"type": "string"
},
{
"name": "AuthMethod",
"type": "string"
},
{
"name": "TimeGenerated",
"type": "string"
}
],
"rows": [
[
"first.name#email.com",
"Fraud reported - no action taken",
"Successfully reported fraud",
"Phone call approval (Authentication phone)",
"22-01-03 [09:01:03 AM]"
],
[
"last.name#email.com",
"Fraud reported - no action taken",
"Successfully reported fraud",
"Phone call approval (Authentication phone)",
"22-02-19 [01:28:29 AM]"
]
]
}
]
}
From this result, I'm stuck on how I should iterate over the rows property of the JSON result and save that data to Azure Table Storage, matching each value to the corresponding entry in the columns property.
E.g.,
| UserPrincipalName | OperationName | ResultDescription | AuthMethod | TimeGenerated |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
| first.name@email.com | Fraud reported - no action taken | Successfully reported fraud | Phone call approval (Authentication phone) | 22-01-03 [09:01:03 AM] |
| last.name@email.com | Fraud reported - no action taken | Successfully reported fraud | Phone call approval (Authentication phone) | 22-02-19 [01:28:29 AM] |
Hope someone can guide me on how to achieve this.
TIA!
You can use a Parse_JSON action to extract the inner data of the output you provided and then use an Insert or Replace Entity action inside a for_each loop. Here is a screenshot of my logic app.
In my storage account:
UPDATED ANSWER
Instead of directly using Insert or Replace Entity, I initialised two variables and then used Insert or Merge Entity. One variable iterates over the rows and the other over the columns, using Until loops to fetch the required values from the tables. Here is a screenshot of my logic app.
In the first Until loop, the iteration continues until the rows variable equals the number of rows. Below is the expression:
length(body('Parse_JSON')?['tables']?[0]?['rows'])
In the second Until loop, the iteration continues until the columns variable equals the number of columns. Below is the expression:
length(body('Parse_JSON')?['tables']?[0]?['columns'])
Below is the expression I'm using in the entity field of the Insert or Merge Entity action:
{
"#{body('Parse_JSON')?['tables']?[0]?['columns']?[variables('columns')]?['name']}": "#{body('Parse_JSON')?['tables']?[0]?['rows']?[variables('rows')]?[variables('columns')]}"
}
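Note that the Insert or Merge Entity action also needs a PartitionKey and RowKey; since the screenshots aren't reproduced here, the key handling below is my assumption rather than part of the original run. Reusing the row index keeps every column of a row merging into the same entity, for example:
PartitionKey - KustoResults
RowKey - @{variables('rows')}
Inside the inner Until loop, increment the columns variable after each Insert or Merge Entity; once the inner loop completes, increment rows and reset columns back to 0.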
RESULTS:
Firstly, use a Parse JSON action and load your JSON in as a sample payload to generate the schema.
Then use a For each (rename them accordingly, as in the sketch below) to traverse the rows; this will automatically generate an outer For each for the tables.
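In code view, the two loops would be driven by expressions along these lines (the For_Each_Table name is my assumption; For_Each_Row matches the entity expression further down):
For_Each_Table - @body('Parse_JSON')?['tables']
For_Each_Row - @items('For_Each_Table')?['rows']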
This is the trickier part: you need to generate a payload that contains your data along with some specific keys that you can then identify in your storage table.
This is my test flow with your data ...
... and this is the end result in storage explorer ...
The JSON within the entity field in the Insert Entity action looks like this ...
{
"Data": "#{items('For_Each_Row')}",
"PartitionKey": "#{guid()}",
"RowKey": "#{guid()}"
}
I simply used GUIDs to make it work, but you'd want to come up with some kind of key from your data to make it more rational; maybe the date field or something similar.
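For example (my own sketch, not taken from the original flow), the user and timestamp columns of each row could serve as the keys instead of GUIDs, since index 0 is UserPrincipalName and index 4 is TimeGenerated in the columns above:
{
"Data": "@{items('For_Each_Row')}",
"PartitionKey": "@{items('For_Each_Row')?[0]}",
"RowKey": "@{items('For_Each_Row')?[4]}"
}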
So, I have a logic app that looks like below
The main idea of the app is to get the items of a SharePoint list and copy the contents into a CSV file in blob storage.
The site name and list name are passed through the HTTP request body.
However, I would like to also define the Select operation column mapping dynamically.
The body looks like this
{
"listName" : "The list name",
"siteAddress" : "SharepointSiteAddress",
"columns" : {
"Email": " #item()?['Employee']?['Email']",
"Region": " #item()?['Region']?['Value']"
}
}
In the 'Map' section of the 'Select' Operation I use the 'columns' property as shown below
However, in the output of the 'Select' operation, the email and region column values resolve to the literal strings that were passed in, instead of the actual item values I am trying to refer to.
Can I somehow create the CSV table dynamically through the HTTP request while also being able to access the items' values?
Using expressions, you can create a CSV file with dynamic data. I have reproduced the issue on my side, and below are the steps I followed.
I created a Logic App as shown below.
In the HTTP trigger, I defined a sample payload as shown below:
{
"listName" : "The list name",
"siteAddress" : "SharepointSiteAddress",
"columns" : {
"Email": " Email",
"DisplayName": "DisplayName"
}
}
In the Select action, From is taken from the Get items value. In the Map row, the key is taken from the HTTP trigger and the value from the SharePoint item, as shown below.
Map:
Key - triggerBody()?['columns']?['Email']
Value - item()?['Editor']?['Email']
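For reference, in code view that Select action would look roughly like the below; this is only a sketch, and the Get_items action name plus the second map entry for DisplayName are my assumptions:
"Select": {
"type": "Select",
"inputs": {
"from": "@body('Get_items')?['value']",
"select": {
"@{triggerBody()?['columns']?['Email']}": "@item()?['Editor']?['Email']",
"@{triggerBody()?['columns']?['DisplayName']}": "@item()?['Editor']?['DisplayName']"
}
}
}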
The output of the Get Items action in my case is like below, hence the expressions are written accordingly.
"value": [
{
"#odata.etag": "\"1\"",
"ItemInternalId": "3",
"ID": 3,
"Modified": "2022-11-15T10:49:47Z",
"Editor": {
"#odata.type": "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
"Claims": "i:0#.f|membership|xyzt.com",
"DisplayName": "Test",
"Email": "v#mail.com",
"JobTitle": ""
}
I tested the logic app; it ran successfully and the CSV file was generated as below.
CSV file:
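The generated file itself isn't reproduced here, but with the sample item above (and assuming a second map entry for DisplayName), the CSV content would look roughly like:
Email,DisplayName
v@mail.com,Test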
I've introduced a new DAC and a new field on the Sales Order associated with the new DAC key. When trying to retrieve the information through OpenAPI, it comes back empty. I'd like to know why, and how I can adjust my code to return the information. I've tried both PXSelect and PXSelectReadOnly.
Declaring statement on the SOOrderEntry extension:
public PXSelect<IOCSCompanyBrand, Where<IOCSCompanyBrand.companyBrandNbr,
Equal<Current<SOOrderExt.usrCompanyBrand>>>> CompanyBranding;
When I hit the URL: http://localhost/Acumatica21/entity/AcumaticaExtended21R1/20.200.001/SalesOrder?$select=OrderNbr,CompanyBranding,OrderType,CompanyBrand&$expand=CompanyBranding&$filter=OrderNbr%20eq%20'SO-030003'
This is the data that is returned:
[
{
"id": "f827cb43-9b8a-ec11-a481-747827c044c8",
"rowNumber": 1,
"note": {
"value": ""
},
"CompanyBrand": {
"value": "IO"
},
"CompanyBranding": null,
"OrderNbr": {
"value": "SO-030003"
},
"OrderType": {
"value": "SO"
},
"custom": {}
}
]
Acumatica Version: 21.205.0063
Here's the definition for SalesOrder in the endpoint (which was populated via the GUI)
I ended up opening an Acumatica developer support case, and here is their response.
Hi Kyle,
Thanks for your time yesterday.
The endpoint mapping is valid, but the fields in the view aren't fetched due to a limitation of the filter parameter in the request. The filter parameter optimizes the data that is being fetched; in this case, the custom view isn't fetched. To work around the limitation, instead of using a filter, you can try something like the below:
http://localhost/Acumatica/entity/AcumaticaExtended21R1/20.200.001/SalesOrder/SO/SO-030007?$expand=CompanyBranding
You can fetch the record by its keys and use $expand to get the detail-level fields. This way you can avoid the limitation of the filter.
Please check and let me know if you have any questions.
Regards,
Vignesh
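For my original order, that keyed-URL pattern works out to the following (same instance and keys as in my earlier $filter request):
http://localhost/Acumatica21/entity/AcumaticaExtended21R1/20.200.001/SalesOrder/SO/SO-030003?$expand=CompanyBranding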
Two requirements are needed:
Get the item path of the document in BIM360 Document Management.
Get all custom attributes for that item.
For requirement 1, an API exists to fetch the path, and for getting the custom attributes another API exists from which the data can be retrieved.
Is there a way to satisfy both requirements in a single API call instead of using two?
With a large number of records, the API to retrieve the item path takes more than an hour to fetch 19,000+ records and the token expires even though a refresh token is used, while the custom attribute API processes data in batches of 50 and completes in only 5 minutes.
Please suggest.
Batch-Get Custom Attributes is specific to the additional attributes of Document Management, while the path in project is general information from Data Management.
The Data Management API provides some endpoints in the form of commands, which can ask the backend to process data for a batch of items.
https://forge.autodesk.com/en/docs/data/v2/reference/http/ListItems/
The ListItems command retrieves metadata for up to 50 specified items at a time. It also supports the flag includePathInProject, although the usage is tricky and the API documentation does not mention it. The response will include the pathInProject of these items, which may save more time than iterating item by item.
{
"jsonapi": {
"version": "1.0"
},
"data": {
"type": "commands",
"attributes": {
"extension": {
"type": "commands:autodesk.core:ListItems",
"version": "1.0.0" ,
"data":{
"includePathInProject":true
}
}
},
"relationships": {
"resources": {
"data": [
{
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:vkLfPabPTealtEYoXU6m7w"
},
{
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:bcg7gqZ6RfG4BoipBe3VEQ"
}
]
}
}
}
}
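For reference, the command payload above is POSTed to the project's commands endpoint of the Data Management API (the project ID below is a placeholder):
POST https://developer.api.autodesk.com/data/v1/projects/:project_id/commands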
Get the item path of the document in BIM360 Document Management.
Is this question about getting the hierarchy of the item, e.g. rootfolder >> subfolder >> item? With the endpoint below, by specifying the query parameter includePathInProject=true, it will return the relative path of the item (pathInProject) within the folder structure.
https://forge.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-items-item_id-GET/
"data": {
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:xxx",
"attributes": {
"displayName": "my-issue-att.png",
"createTime": "2021-03-12T04:51:01.0000000Z",
"createUserId": "xxx",
"createUserName": "Xiaodong Liang",
"lastModifiedTime": "2021-03-12T04:51:02.0000000Z",
"lastModifiedUserId": "200902260532621",
"lastModifiedUserName": "Xiaodong Liang",
"hidden": false,
"reserved": false,
"extension": {
"type": "items:autodesk.bim360:File",
"version": "1.0",
"schema": {
"href": "https://developer.api.autodesk.com/schema/v1/versions/items:autodesk.bim360:File-1.0"
},
"data": {
"sourceFileName": "my-issue-att.png"
}
},
"pathInProject": "/Project Files"
}
Alternatively, you could iterate using the parent data:
"parent": {
"data": {
"type": "folders",
"id": "urn:adsk.wipprod:fs.folder:co.sdfedf8wef"
},
"links": {
"related": {
"href": "https://developer.api.autodesk.com/data/v1/projects/b.project.id.xyz/items/urn:adsk.wipprod:dm.lineage:hC6k4hndRWaeIVhIjvHu8w/parent"
}
}
},
Get all custom attributes for that item. For requirement 1, an API exists to fetch the path, and for getting the custom attributes another API exists from which the data can be retrieved. Is there a way to satisfy both requirements in a single API call instead of using two? With a large number of records, the API to retrieve the item path takes more than an hour to fetch 19,000+ records and the token expires even though a refresh token is used, while the custom attribute API processes data in batches of 50 and completes in only 5 minutes. Please suggest.
Let me try to understand the question better. Firstly, there are two things: custom attribute definitions, and custom attribute values (attached to the documents). Could you clarify which of these the 19,000+ records are?
If custom attribute definitions, the API to fetch them is:
https://forge.autodesk.com/en/docs/bim360/v1/reference/http/document-management-custom-attribute-definitions-GET/
It supports setting a limit per call; the maximum limit of one call is 200, which means you can fetch 19,000+ records in about 95 calls, and each call should be quick (in my experience under 10 seconds). That is roughly 15 minutes in total, instead of more than 1 hour.
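For example, paging through them is just a series of calls along these lines (URL abbreviated to the endpoint linked above; the offset parameter is my assumption about the paging mechanism, so check the reference for the exact parameters):
GET .../custom-attribute-definitions?limit=200&offset=0
GET .../custom-attribute-definitions?limit=200&offset=200
GET .../custom-attribute-definitions?limit=200&offset=400
... and so on, roughly 95 pages for 19,000+ records.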
Or does each call with 200 records take much longer on your side?
If custom attribute values, the API to fetch them is:
https://forge.autodesk.com/en/docs/bim360/v1/reference/http/document-management-versionsbatch-get-POST/
As you know, that is 50 records at a time, and it seems it is already pretty quick on your side, taking only 5 minutes to fetch the values for 19,000+ records.
I'm trying to archive old data from CosmosDB into Azure Tables, but I'm very new to Azure Data Factory and I'm not sure what a good approach would be. At first I thought this could be done with a Copy Activity, but because the properties of the documents stored in the CosmosDB source vary, I'm getting mapping issues. Any idea what would be a good approach to tackle this archiving process?
Basically, the way I want to store the data is to copy the document root properties as they are, and store the nested JSON as a serialized string.
For example, if I wanted to archive these 2 documents :
[
{
"identifier": "1st Guid here",
"Contact": {
"Name": "John Doe",
"Age": 99
}
},
{
"identifier": "2nd Guid here",
"Distributor": {
"Name": "Jane Doe",
"Phone": {
"Number": "12345",
"IsVerified": true
}
}
}
]
I'd like these documents to be stored in Azure Table like this:
identifier | Contact | Distributor
"Ist Guid here" | "{ \"Name\": \"John Doe\", \"Age\": 99 }" | null
"2nd Guid here" | null | "{\"Name\":\"Jane Doe\",\"Phone\":{\"Number\":\"12345\",\"IsVerified\":true}}"
Is this possible with the Copy Activity?
I tried using the mapping tab inside the Copy Activity, but when I run it I get an error saying that the data type of one of the nested JSON columns that is not present in the first row cannot be inferred.
Please follow my configuration in the Mapping tab.
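The screenshots don't carry over here, so here is a rough sketch of what that Mapping tab serializes to in the Copy Activity's pipeline JSON; the column names come from your sample, and the rest is an assumption to adapt:
"translator": {
"type": "TabularTranslator",
"mappings": [
{ "source": { "path": "$.identifier" }, "sink": { "name": "identifier", "type": "String" } },
{ "source": { "path": "$.Contact" }, "sink": { "name": "Contact", "type": "String" } },
{ "source": { "path": "$.Distributor" }, "sink": { "name": "Distributor", "type": "String" } }
]
}
The intent is that mapping the nested Contact and Distributor nodes to String sink columns stores them as serialized JSON, matching the table layout you described.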
Test output with your sample data:
I'm using the 'Get Metadata' activity in my pipelines to get all the folders, child items, and item types. But this activity gives its output in JSON format, and I'm unable to store the values in a variable so that I can iterate through them. I need to store the folder metadata in a SQL table.
Get Metadata activity sample output is like below.
{
"itemName": "ParentFolder",
"itemType": "Folder",
"childItems": [
{
"name": "ChildFolder1",
"type": "Folder"
},
{
"name": "ChildFolder2",
"type": "Folder"
},
{
"name": "ChildFolder3",
"type": "Folder"
}
],
"effectiveIntegrationRuntime": "DefaultIntegrationRuntime (North Europe)",
"executionDuration": 187
}
Can someone help me store the above JSON output of the 'Get Metadata' activity into a SQL table like below?
The easiest way to do this is to pass the Get Metadata output as a string to a stored procedure and parse it in your SQL database using OPENJSON.
This is how to convert the output to a string:
@string(activity('Get Metadata').output)
Now you just pass that to a stored proc and then use OPENJSON to parse it.
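As a minimal sketch of such a stored procedure (the procedure, table, and column names here are my own placeholders):
-- Receives the Get Metadata output as a string and shreds childItems into rows.
CREATE PROCEDURE dbo.usp_SaveFolderMetadata
    @json NVARCHAR(MAX)
AS
BEGIN
    INSERT INTO dbo.FolderMetadata (ParentFolder, ChildName, ChildType)
    SELECT
        JSON_VALUE(@json, '$.itemName'),  -- e.g. 'ParentFolder'
        c.[name],                         -- e.g. 'ChildFolder1'
        c.[type]                          -- e.g. 'Folder'
    FROM OPENJSON(@json, '$.childItems')
         WITH ([name] NVARCHAR(400) '$.name',
               [type] NVARCHAR(50) '$.type') AS c;
END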
I have seen many others doing this using an ADF ForEach; however, if you have thousands of files/folders you will end up paying a lot for that method over time (each loop iteration counts as an activity).