Dynamically define CSV table structure based on HTTP request body in Logic Apps (SharePoint Online)

So, I have a Logic App (screenshot omitted) whose main purpose is to get the items of a SharePoint list and copy the contents into a CSV file in Blob Storage.
The site name and list name are passed through the HTTP request body.
However, I would like to also define the Select operation column mapping dynamically.
The body looks like this:
{
    "listName": "The list name",
    "siteAddress": "SharepointSiteAddress",
    "columns": {
        "Email": "@item()?['Employee']?['Email']",
        "Region": "@item()?['Region']?['Value']"
    }
}
In the 'Map' section of the 'Select' operation I use the 'columns' property from the trigger body (screenshot omitted).
However, in the output of the 'Select' operation, the Email and Region column values resolve to the literal strings that were passed in, instead of the actual item values they refer to.
Can I somehow create the CSV table dynamically through the HTTP request while also being able to access the items' values?

Using expressions, you can create a CSV file with dynamic data. I reproduced the issue on my side, and below are the steps I followed.
I created a Logic App with the same shape as in the question (HTTP trigger, Get items, Select, Create CSV table, Create blob).
In the HTTP trigger, I defined a sample payload as shown below:
{
    "listName": "The list name",
    "siteAddress": "SharepointSiteAddress",
    "columns": {
        "Email": "Email",
        "DisplayName": "DisplayName"
    }
}
In the Select action, 'From' is taken from the 'Get items' value. In the 'Map' row, the key is taken from the HTTP trigger and the value from the SharePoint item, as shown below:
Map:
Key: triggerBody()?['columns']?['Email']
Value: item()?['Editor']?['Email']
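For reference, this is roughly how such a Select action would appear in the workflow definition's code view; a minimal sketch, assuming the designer writes the trigger-sourced keys as @{...} string interpolations (action names are illustrative):
"Select": {
    "type": "Select",
    "inputs": {
        "from": "@body('Get_items')?['value']",
        "select": {
            "@{triggerBody()?['columns']?['Email']}": "@item()?['Editor']?['Email']",
            "@{triggerBody()?['columns']?['DisplayName']}": "@item()?['Editor']?['DisplayName']"
        }
    },
    "runAfter": { "Get_items": [ "Succeeded" ] }
}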
The output of the 'Get items' action in my case looks like the one below, so the expressions were written accordingly.
"value": [
{
"#odata.etag": "\"1\"",
"ItemInternalId": "3",
"ID": 3,
"Modified": "2022-11-15T10:49:47Z",
"Editor": {
"#odata.type": "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
"Claims": "i:0#.f|membership|xyzt.com",
"DisplayName": "Test",
"Email": "v#mail.com",
"JobTitle": ""
}
I tested the Logic App; it ran successfully and the CSV file was generated as below.
CSV file (screenshot omitted):
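Based on the sample item above, the generated CSV would look roughly like this (illustrative, reconstructed from the 'Get items' output since the original screenshot is not available):
Email,DisplayName
v@mail.com,Test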

Related

Map nested JSON from a REST API in Azure Data Factory, via a ForEach iterator and a stored procedure, into a SQL table

I am pulling a list of pipelines using the REST API in ADF. The API returns nested JSON, and I am using a ForEach iterator to feed it via a stored procedure into a SQL table.
The sample JSON looks like this:
{
    "value": [
        {
            "id": "ADF/pipelines/v_my_data_ww",
            "name": "v_my_data_ww",
            "type": "MS.DF/factories/pipelines",
            "properties": {
                "activities": [
                    {
                        "name": "Loaddata_v_my_data_ww",
                        "type": "Copy",
                        "dependsOn": [],
                        "policy": {
                            "timeout": "7.00:00:00",
                            "retry": 0,
                            "retryIntervalInSeconds": 30,
                            "secureOutput": false,
                            "secureInput": false
                        }
                    }
                ],
                "folder": {
                    "name": "myData"
                },
                "annotations": [],
                "lastPublishTime": "2021-04-01T22:09:20Z"
            },
            "etag": "iaui7881919"
        }
    ]
}
So, using the ForEach iterator over @activity('Web1').output.value, I was able to pull in the main keys, like id, name, and type. However, I also need to pull in the name/type from within the properties/activities element, and I am not sure how to do it. I was trying the following.
When I run the pipeline, I get the following error:
The expression 'item().properties.activities.name' cannot be evaluated because property 'name' cannot be selected. Array elements can only be selected using an integer index.
Any help in this regard will be immensely appreciated.
The error occurs because you are trying to access an array, i.e. activities, directly with a key. Array elements can only be accessed with an integer index, such as activities[0].
The following is a demonstration of how you can solve this. I got the same error when I used @item().properties.activities.name.
To demonstrate the solution, I took the given sample JSON as a pipeline parameter and passed @pipeline().parameters.js.value to the ForEach activity.
Now, I used a Set Variable activity to show how I retrieved the name and type present in the activities property. The following dynamic content achieves this:
@item().properties.activities[0].name
So, modify your prop_name and prop_type to the following dynamic content:
@item().properties.activities[0].name
@item().properties.activities[0].type
UPDATE:
If there are multiple objects inside the activities property array, then you can follow the procedure below:
Create an Execute Pipeline activity that runs a pipeline called loop_activities. Create 4 parameters in the loop_activities pipeline: name, id, tp, and activities. Pass the dynamic content as shown in the sketch below:
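The original screenshot is not available; based on the steps described, the values passed from the outer ForEach to the loop_activities parameters would plausibly be (illustrative):
name: @item().name
id: @item().id
tp: @item().type
activities: @item().properties.activities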
In the loop_activities pipeline, use a ForEach to iterate through the activities array. The dynamic content value for its items field is @pipeline().parameters.activities.
Inside the ForEach, you can access each required element with the following dynamic content:
for name:
@pipeline().parameters.name
for id:
@pipeline().parameters.id
for type:
@pipeline().parameters.tp
for prop_name:
@item().name
for prop_type:
@item().type
The following is the debug output of loop_activities (only for prop_name and prop_type) when I run pipeline1 (screenshot omitted).

Convert to JSON and map to new JSON object in Alteryx

I am using Alteryx to take an Excel file and convert it to JSON. The JSON output I'm getting looks different from what I expected: each object is wrapped in a "JSON" key, which I don't want. I would also like to know which components I would use later in the flow to map fields to specific JSON fields instead of key-value pairs.
I have attached my sample workflow and Excel file (screenshots of the Excel sheet and the Alteryx test flow omitted).
The JSON output I am seeing:
[
    {
        "JSON": "{\"email\":\"test123@test.com\",\"startdate\":\"2020-12-01\",\"isEnabled\":\"0\",\"status\":\"active\"}"
    },
    {
        "JSON": "{\"email\":\"myemail@emails.com\",\"startdate\":\"2020-12-02\",\"isEnabled\":\"1\",\"status\":\"active\"}"
    }
]
What I expected:
[
    {
        "email": "test123@test.com",
        "startdate": "2020-12-01",
        "isEnabled": "0",
        "status": "active"
    },
    {
        "email": "myemail@emails.com",
        "startdate": "2020-12-02",
        "isEnabled": "1",
        "status": "active"
    }
]
Also, what component would I use if I wanted to map the structure above to another JSON structure similar to this one:
[
    {
        "name": "MyName",
        "accountType": "array",
        "contactDetails": {
            "email": "test123@test.com",
            "startDate": "2020-12-01"
        }
    }
]
Thanks
In the workflow you have built, you are essentially creating the JSON twice. The JSON Build tool creates the JSON structure, so if you then want to output it, select your output file and change the dropdown to CSV with delimiter \0 and no headers.
Alternatively, try putting an Output Tool straight after your Excel input and writing to JSON; the Output Tool will build the JSON for you.
In answer to your second question, build the JSON for the contact details first as a field (remember to rename the JSON field to contactDetails), then build the rest from there with one of the options above.
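For example (illustrative, using the field names from the question), after the first JSON Build each row would carry the inner structure as a serialized string in a contactDetails field, so the final build would produce records like:
{
    "name": "MyName",
    "accountType": "array",
    "contactDetails": "{\"email\":\"test123@test.com\",\"startDate\":\"2020-12-01\"}"
}
A second JSON Build (or the Output Tool) then serializes the outer structure around it.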

How to archive old CosmosDB data to Azure Table using Azure Data Factory when CosmosDB collection documents have different properties?

I'm trying to archive old data from CosmosDB into Azure Tables but I'm very new to Azure Data Factory and I'm not sure what would be a good approach to do this. At first, I thought that this could be done with a Copy Activity but because the properties from my documents stored in the CosmosDB source vary, I'm getting mapping issues. Any idea on what would be a good approach to tackle this archiving process?
Basically, the way I want to store the data is to copy the document root properties as they are, and store the nested JSON as a serialized string.
For example, if I wanted to archive these 2 documents:
[
    {
        "identifier": "1st Guid here",
        "Contact": {
            "Name": "John Doe",
            "Age": 99
        }
    },
    {
        "identifier": "2nd Guid here",
        "Distributor": {
            "Name": "Jane Doe",
            "Phone": {
                "Number": "12345",
                "IsVerified": true
            }
        }
    }
]
I'd like these documents to be stored in Azure Table like this:
identifier      | Contact                                   | Distributor
"1st Guid here" | "{ \"Name\": \"John Doe\", \"Age\": 99 }" | null
"2nd Guid here" | null                                      | "{\"Name\":\"Jane Doe\",\"Phone\":{\"Number\":\"12345\",\"IsVerified\":true}}"
Is this possible with the Copy Activity?
I tried using the Mapping tab inside the Copy Activity, but when I run it I get an error saying that the data type for one of the nested JSON columns that is not present in the first row cannot be inferred.
Please follow my configuration in the Mapping tab, sketched below since the original screenshot is not available. Testing with your sample data produced the expected output (screenshot omitted).
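A minimal sketch of the Copy Activity's translator (the JSON behind the Mapping tab) under this approach, assuming the nested objects can be mapped to string sink columns so they land as serialized JSON; the column names are taken from the question:
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        { "source": { "path": "$['identifier']" },  "sink": { "name": "identifier",  "type": "String" } },
        { "source": { "path": "$['Contact']" },     "sink": { "name": "Contact",     "type": "String" } },
        { "source": { "path": "$['Distributor']" }, "sink": { "name": "Distributor", "type": "String" } }
    ]
}
Columns that are absent from a given document (e.g. Distributor in the first sample) would simply be null in the sink.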

Parameterise a whole range or table in Excel Power Query

I have an Excel sheet with a column containing the IDs of items I want to retrieve:
ID
---
123
124
125
The API I am calling can take an array of IDs as input (e.g. https://API.com/rest/items?ID=123&ID=124&ID=125....(up to 50))
and returns one JSON.
"data": [
{
"id": 123,
"fields": {
"name": "blah blah",
"description": "some description",
}
},
{
"id": 124,
"fields": {
"name": "blah bli",
"description": "some description",
}
},
{
"id": 125,
"fields": {
"name": "blah blo",
"description": "some description",
}
},...
]
}
I would like to load data from this JSON into another table or sheet:
ID  | Name
123 | blah blah
124 | blah bli
125 | blah blo
I know how to parameterise the query by referencing single cells, but if I am retrieving 100+ items, that's a lot of work.
Can't I somehow build a query that gets each value (ID) from the table or range in one simple move?
--edit1--
Found another API endpoint where I can provide an array of IDs (before, I thought it was only possible to send one ID per request and retrieve one JSON at a time).
--edit2--
Maybe I can concatenate all IDs into the request URL in an Excel cell and parameterise based on just that one cell (still experimenting).
If you've got the JSON for each ID, then you just need to pull out the name part.
For example, if you have a column [JSON] holding the JSON returned for each ID, you can create a custom column with a formula along the lines of Json.Document([JSON])[data]{0}[fields][name] (given the response shape above, data is a list, so you index its first record and then drill into fields).
You can likely combine pulling the JSON and parsing it into a single step if you'd like.
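Building on edit2 (one request carrying all the IDs), here is a minimal Power Query M sketch; the table name IdTable is hypothetical and the URL is the one from the question:
let
    // Excel table holding the ID column (hypothetical name)
    Source = Excel.CurrentWorkbook(){[Name = "IdTable"]}[Content],
    // Build "ID=123&ID=124&..." from the ID column (the API accepts up to 50)
    QueryString = Text.Combine(List.Transform(Source[ID], each "ID=" & Text.From(_)), "&"),
    // Single API call with all IDs, parsed as JSON
    Response = Json.Document(Web.Contents("https://API.com/rest/items?" & QueryString)),
    // Flatten data[] into an ID/Name table
    Result = Table.FromRecords(List.Transform(Response[data], each [ID = _[id], Name = _[fields][name]]))
in
    Result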

How to do field mapping in Azure Search for complex JSON objects, for example a nested array

I have the following problem: I am updating the field mappings of an index, and the payload is complex. I have:
{
    "type": "abc",
    "Party": [
        {
            "Type": "abc",
            "Id": "123",
            "Name": "manasa",
            "Phone": [
                {
                    "Type": "Office",
                    "Number": "12345"
                }
            ]
        }
    ]
}
Now I want to create a field for the index. The field name is phonenumber, of type Collection(Edm.String), where the mapping is
{
    "sourceFieldName": "/Party/Phone/Number",
    "targetFieldName": "phonenumber",
    "mappingFunction": { "name": "jsonArrayToStringCollection" }
}
in the HTTP POST body.
But still, after indexing, the phonenumber result is null, which means the mapping went wrong. As you can see, the phone number in the source JSON sits inside a JSON array which is itself inside an array, and the result needs to be stored in a collection of strings. Is this possible, and how can I achieve it?
If this is not possible, I at least want a field mapping down to the phone array, i.e. /Party/Phone/.
If I index the complete Party array as text, I get an error while running the indexer saying:
"Field 'partydetails' contains a term that is too large to process. The max length for UTF-8 encoded terms is 32766 bytes. The most likely cause of this error is that filtering, sorting, and/or faceting are enabled on this field, which causes the entire field value to be indexed as a single term. Please avoid the use of these options for large fields."
Can someone please help!
If Party had been a JSON object rather than an array, and Phone had been just a string array, for example:
{
    "type": "abc",
    "Party": {
        "Type": "abc",
        "Id": "123",
        "Name": "manasa",
        "Phone": [
            "12345",
            "23463"
        ]
    }
}
then I could have mapped
{
    "sourceFieldName": "/Party/Phone",
    "targetFieldName": "phonenumbers",
    "mappingFunction": { "name": "jsonArrayToStringCollection" }
}
and it would map as a collection of type Edm.String.
So, to put this in a better and more straightforward way, either:
transform your JSON into something flatter (the example I gave above), or
use the proper index if you know it beforehand, as @Luis Cabrera said:
"sourceFieldName": "/Party/0/Phone/0/Type"
It is a limitation on the Azure Search side.
Note that Party and Phone are arrays, so the field mapping you mention won't work.
You will need to index into the specific element. For example:
{
    "sourceFieldName": "/Party/0/Phone/0/Type",
    "targetFieldName": "firstPhoneNumberTypeOfFirstParty"
}
You may want to give that a shot.
Thanks!
Luis Cabrera | Program Manager | Azure Search
