Parsed Message
{
"date": "2022-02-04",
"customerID": 123,
"customerInfo": {
"id": 123,
"lastname": "Smith",
"firstname": "David",
"email": "testing#email.com",
},
"currency": "EUR"
}
I would like to remove the customerInfo section so the JSON looks like this:
{
"date": "2022-02-04",
"customerID": 123,
"currency": "EUR"
}
How would one do this in a Logic App? I tried removeProperty but could not get it working. Any suggestions would be appreciated.
I have reproduced this in my environment and removed customerInfo using removeProperty as below.
Firstly, I initialized a variable as below:
Then I used a Compose operation as below.
In the Compose input: removeProperty(variables('emo'),'customerInfo')
Then I set the variable to the output of the Compose as below:
Output:
Follow the above process and you will be able to remove customerInfo as I did.
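In code view, the three actions look roughly like this. This is a minimal sketch rather than an exact definition: the variable name emo comes from the expression above, but the action names and the abbreviated initial value are assumptions.
"Initialize_variable": {
    "type": "InitializeVariable",
    "inputs": {
        "variables": [ {
            "name": "emo",
            "type": "Object",
            "value": { "date": "2022-02-04", "customerID": 123, "customerInfo": { "id": 123 }, "currency": "EUR" }
        } ]
    },
    "runAfter": {}
},
"Compose": {
    "type": "Compose",
    "inputs": "@removeProperty(variables('emo'), 'customerInfo')",
    "runAfter": { "Initialize_variable": [ "Succeeded" ] }
},
"Set_variable": {
    "type": "SetVariable",
    "inputs": { "name": "emo", "value": "@outputs('Compose')" },
    "runAfter": { "Compose": [ "Succeeded" ] }
}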
You can just initialize a new variable from that one and populate it:
"Initialize_variable": {
"type": "InitializeVariable",
"inputs": {
"variables": [ {
"name": "sensitisedMessage",
"type": "Object",
"value": { "date": #message['date'], "customerID": #message['customerID'], "currency": "#message['currency']" }
} ]
},
"runAfter": {}
}
I have not checked the format of the JSON in an actual Logic App, but you get the idea.
The context
I'm new to the Azure environment, so please bear with me if the question is simple. I have called a REST API with pagination. The data has multiple arrays stored in a hierarchy. The arrays contain the same value translated into different languages, so in theory, if I only take one language from each array, the data is already in a tabular format. However, I'm having trouble filtering the data to the correct language in the mapping part of the copy activity.
Sample data
Below is a sample of the data. I have added 3 different 'rows' for the tabular format. There are 3 different arrays in the data:
['stage']['localization']
['disqualifyReason']['localization']
['title']['localization']
As I work for a Dutch company, we only want the value where locale == 'nl-NL' to be returned.
[
{
"id": "f2597aa9-45b3-4142-a343-b1ec27fbfcea",
"email": "some#email.com",
"firstName": "Name",
"lastName": "Name",
"middleName": null,
"created": "2023-01-03T13:29:15.7452993Z",
"status": 1,
"stage": {
"localization":[
{
"locale": "da-DK",
"value": "Ansøgt"
},
{
"locale": "de-DE",
"value": "Beworben"
},
{
"locale": "en-GB",
"value": "Applied"
},
{
"locale": "nl-NL",
"value": "Gesolliciteerd"
}
]
},
"disqualifyReason": {
"localization":[
{
"locale": "nl-NL",
"value": "Geen match"
},
{
"locale": "da-DK",
"value": "Ikke et match"
},
{
"locale": "de-DE",
"value": "Absage - Screening"
},
{
"locale": "en-GB",
"value": "Not a match"
}
]
},
"source":{
"media":{
"id": "c0772eab-09dd-4c7c-86b5-ee9b65ed8398",
"title": {
"localization":[
{
"locale": "nl-NL",
"value": "Tegel voor URL"
}
]
}
}
}
},
{
"id": "a72b856e-8000-4e51-b475-9e6af5cf9e19",
"email": "some#email.com",
"firstName": "Name",
"lastName": "Name",
"middleName": null,
"created": "2023-01-03T13:29:15.7452993Z",
"status": 1,
"stage": {
"localization":[
{
"locale": "nl-NL",
"value": "Afwijzen op CV"
}
]
},
"disqualifyReason": null,
"source":{
"media":{
"id": "c0772eab-09dd-4c7c-86b5-ee9b65ed8398",
"title": {
"localization":[
{
"locale": "nl-NL",
"value": "Tegel voor URL"
}
]
}
}
}
},
{
"id": "f3898ebd-d6d6-4d9e-979e-348fe79325dc",
"email": "some#email.com",
"firstName": "Name",
"lastName": "Name",
"middleName": null,
"created": "2023-01-03T14:36:04.4517426Z",
"status": 1,
"stage": {
"localization":[
{
"locale": "nl-NL",
"value": "1e interview"
},
{
"locale": "da-DK",
"value": "1. samtale"
},
{
"locale": "en-GB",
"value": "1st Interview"
},
{
"locale": "nl-NL",
"value": "1. Interview"
}
]
},
"disqualifyReason": null,
"source":{
"media":{
"id": "c0772eab-09dd-4c7c-86b5-ee9b65ed8398",
"title": {
"localization":[
{
"locale": "nl-NL",
"value": "Tegel voor URL"
}
]
}
}
}
}
]
What did I try
Lots of Googling and Microsoft Learn pages. I thought the following dynamic function would work in the mapping part of the copy activity:
@filter($['stage']['localization']['locale'] == 'nl-NL'), which it doesn't; I can't use the filter function in the copy activity pipeline. I believe I can save the API call to a JSON file, then use a data flow activity to filter it out and store it in a tabular format. However, isn't there a way to filter the data directly in the copy activity?
Many thanks for any help!
In copy activity mapping, there is a dynamic content option.
But AFAIK, this only applies to selecting specific columns from the source. In your case, you are trying to filter records, which might not be possible using a copy activity.
I believe I can save the API call to a JSON file, then use data flows to filter it out in a data flow activity, which then stores it to a tabular format.
Yes, using data flows is the solution here, and data flows also support a REST API source, so you can use a data flow directly and configure pagination just like in the copy activity.
Then use a filter transformation with your condition.
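As a rough sketch (column names assumed from the sample data): if you first add a flatten transformation that unrolls stage.localization into one row per locale, the filter transformation condition can be as simple as
locale == 'nl-NL'
Alternatively, a derived column can pull out the Dutch value without flattening, since the data flow expression language has its own filter function and its arrays are 1-based:
filter(stage.localization, #item.locale == 'nl-NL')[1].value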
You will get the desired result in debug.
I'm using Postman to make REST requests to the Azure API to run a pipeline that is in Synapse. In terms of permissions and the token, I already have them and they work; the problem is that the pipeline receives 3 parameters but I don't know how to pass them. So I have this request, for example:
https://hvdhgsad.dev.azuresynapse.net/pipelines/pipeName/createRun?api-version=2020-12-01
and I added the parameters in the body:
{
"parameters": {
"p_dir": {
"type": "string",
"defaultValue": "val1"
},
"container": {
"type": "string",
"defaultValue": "val"
},
"p_folder": {
"type": "string",
"defaultValue": "val3"
}
}
}
But when I check the run that was launched by the request, I get this:
{
"id": "xxxxxxxxxxxxxxx",
"runId": "xxxxxxxxxxxxxxxxxxxxx",
"debugRunId": null,
"runGroupId": "xxxxxxxxxxxxxxxxxxxx",
"pipelineName": "xxxxxxxxxxxxxxxxx",
"parameters": {
"p_dir": "",
"p_folder": "",
"container": ""
},
"invokedBy": {
"id": "xxxxxxxxxxxxxxxxx",
"name": "Manual",
"invokedByType": "Manual"
},
"runStart": "2021-07-20T05:56:04.2468861Z",
"runEnd": "2021-07-20T05:59:10.1734654Z",
"durationInMs": 185926,
"status": "Failed",
"message": "Operation on target Data flow1 failed: {\"StatusCode\":\"DF-Executor-SourceInvalidPayload\",\"Message\":\"Job failed due to reason: Data preview, debug, and pipeline data flow execution failed because container does not exist\",\"Details\":\"\"}",
"lastUpdated": "2021-07-20T05:59:10.1734654Z",
"annotations": [],
"runDimension": {},
"isLatest": true
}
The params are empty, so I don't know what's wrong or missing.
What is the correct way to pass them?
ref: https://learn.microsoft.com/en-us/rest/api/synapse/data-plane/pipeline/create-pipeline-run#examples
Just created an account to answer this, as I've had the same issue.
I resolved it by having just the name of each parameter and its value in the JSON body, e.g.
{ "parameter1": "value1", "parameter2": "value2" }
I found this by following the documentation you posted; under the request body section, it passes the name of the parameter and the value directly into the JSON body.
{
"OutputBlobNameList": [
"exampleoutput.csv"
]
}
That particular example is a list/array, which is what confused me with the added brackets []. If you are passing string parameters, the brackets are not needed.
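Applied to the pipeline in the question, the request body would then presumably look like this:
{
    "p_dir": "val1",
    "container": "val",
    "p_folder": "val3"
}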
I was trying to add JSON schema validation within a Logic App using the Parse JSON action.
I want to validate the existence of either of two properties in the message (equivalent to an XSD choice).
For instance, messages may have either lastname or familyname.
{
"name": "Alan",
"familyname": "Turing"
}
Or
{
"name": "Alan",
"lastname": "Turing"
}
I modified the generated schema as follows:
{
"type": "object",
"properties": {
"name": {
"type": "string"
},
"oneOf": [
{
"lastname": {
"type": "string"
}
},
{
"familyname": {
"type": "string"
}
}
]
}
}
Logic App execution then throws an error.
Just to test whether any other schema combination keyword works, I tried anyOf in place of oneOf, and it fails at execution as well.
Do Logic Apps support this extended validation? Am I missing some specific syntax here?
If you are validating that either familyname or lastname must be present, then you are missing the "required" keyword. Note that oneOf also belongs at the top level of the schema, alongside properties, not inside it:
{
    "type": "object",
    "properties": {
        "name": {
            "type": "string"
        }
    },
    "oneOf": [
        {
            "properties": {
                "familyname": { "type": "string" }
            },
            "required": [ "familyname" ]
        },
        {
            "properties": {
                "lastname": { "type": "string" }
            },
            "required": [ "lastname" ]
        }
    ]
}
This will validate the JSON. With oneOf, exactly one branch must match: a message with only familyname or only lastname validates, while one with both (or neither) fails; anyOf would additionally accept a message containing both. If you want to get the value out in a later step, you could use the coalesce function:
@coalesce(actionBody('Parse_JSON')?['familyname'], actionBody('Parse_JSON')?['lastname'])
I am trying to get data using Elasticsearch in a Python program. Currently I am getting the following data back from an Elasticsearch request. I wish to sort the data on rank type; for example, I want to sort the data by raw_freq or maybe by score.
What should the query look like?
I believe it will be something using a nested query. Help would be very much appreciated.
{
"data": [
{
"customer_id": 108,
"id": "Qrkz-2QBigkG_fmtME8z",
"rank": [
{
"type": "raw_freq",
"value": 2
},
{
"type": "score",
"value": 3
},
{
"type": "pmiii",
"value": 1.584962
}
],
"status": "pending",
"value": "testingFreq2"
}
]
}
Here is a simple example of how you can sort your data:
"query": {
"term": {"status": "pending"}
},
"sort": [
{"rank.type.keyword": {"order" : "desc"}}
]
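Since you are querying from Python, here is a minimal sketch of submitting that body with the official elasticsearch client; the cluster URL and index name are assumptions, so adjust them to your environment.
from elasticsearch import Elasticsearch

# Assumed local cluster; point this at your own.
es = Elasticsearch("http://localhost:9200")

# The query and sort from the answer above.
body = {
    "query": {"term": {"status": "pending"}},
    "sort": [{"rank.type.keyword": {"order": "desc"}}]
}

# "my-index" is a placeholder index name.
resp = es.search(index="my-index", body=body)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])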
I am trying to serialize generic records (expressed as JSON strings) as Avro objects using the Microsoft.Hadoop.Avro library.
I've been following the tutorial for Generic Records HERE. However, the records I am trying to serialize are more complex than the sample code provided by Microsoft (Location), with nested properties inside the JSON.
Here is a sample of a record I want to serialize in Avro:
{
"deviceId": "UnitTestDevice01",
"serializationFormat": "avro",
"messageType": "state",
"messageVersion": "avrov2.0",
"arrayProp": [
{
"itemProp1": "arrayValue1",
"itemProp2": "arrayValue2"
},
{
"itemProp1": "arrayValue3",
"itemProp2": "arrayValue4"
}
]
}
For info, here is the Avro schema I can extract:
{
"type": "record",
"namespace": "xxx.avro",
"name": "MachineModel",
"fields": [{
"name": "deviceId",
"type": ["string", "null"]
}, {
"name": "serializationFormat",
"type": ["string", "null"]
}, {
"name": "messageType",
"type": ["string", "null"]
}, {
"name": "messageVersion",
"type": ["string", "null"]
}, {
"name": "array",
"type": {
"type": "array",
"items": {
"type": "record",
"name": "array_record",
"fields": [{
"name": "arrayProp1",
"type": ["string", "null"]
}, {
"name": "arrayProp2",
"type": ["string", "null"]
}]
}
}
}]
}
I have managed to extract the correct schema for this object, but I can't get the code right to take the schema and create a correct Avro record.
Can someone provide some pointers on how I can use the AvroSerializer or AvroContainer classes to produce a valid Avro object from this JSON payload and this Avro schema? The samples from Microsoft are quite simple and do not show how to work with complex objects, and I have not been able to find relevant samples online either.