"triggers": {
"manual": {
"inputs": {
"schema": {}
},
"kind": "Http",
"runtimeConfiguration": {
"concurrency": {
"maximumWaitingRuns": 99,
"runs": 1
}
},
"type": "Request"
}
}
From the above code, can I increase the trigger amount from "maximumWaitingRuns": 99 to 2000+?
If yes, then how?
I also want to trigger it 2000+ times with a single click.
One workaround to make the workflow run multiple times is to use a recurrence trigger instead of an HTTP trigger.
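For example, a minimal Recurrence trigger in code view looks like this (the frequency and interval values are just an illustration):
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Minute",
            "interval": 1
        }
    }
}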
Another workaround is to set the number of iterations and run the workflow through Postman: after saving your request to one of your collections, navigate to your collection >> Run >> set the number of iterations >> Run Sample.
Here is the structure: a data-drift-detected event in an ML workspace is sent to Event Grid, which triggers a function in an Azure Function App. I want it to run only once after the data drift detection. However, I got this:
(screenshot: the function's run history showing repeated invocations)
It runs every ~20 seconds, a few times in a row :/
Here is my host.json:
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.*, 4.0.0)"
}
}
and function.json:
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "eventGridTrigger",
"name": "event",
"direction": "in"
}
]
}
I tried changing the default options in the "singleton" field in host.json, but nothing changed.
Do you have any idea?
When you create the Event Grid subscription for the trigger, you can configure its retry policy; set the maximum delivery attempts to 1.
The Event Grid trigger waits for a response from the endpoint; if it doesn't get one, it retries the delivery until it does. Setting the maximum delivery attempts to 1 makes it deliver only once.
So if Event Grid doesn't get a response, it triggers again after some interval of time.
Also make sure your endpoint actually returns a response: if your function isn't responding (or responds with an error), Event Grid keeps retrying the delivery.
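For illustration, the retry settings live on the event subscription itself; a minimal sketch of the relevant properties per the event subscription schema (the time-to-live value here is just an example):
"properties": {
    "retryPolicy": {
        "maxDeliveryAttempts": 1,
        "eventTimeToLiveInMinutes": 1440
    }
}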
References taken from:
Azure Event Grid delivery and retry - Azure Event Grid | Microsoft Learn
There are two requirements:
Get the item path of a document in BIM 360 Document Management.
Get all custom attributes for that item.
For Req. 1, an API exists to fetch the path, and for getting custom attributes another API exists from which the data can be retrieved.
Is there a way to satisfy both requirements in a single API call instead of using two?
In the case of a large number of records, the item-path API takes more than an hour to fetch 19,000+ records and the token expires even though a refresh token is used, while the custom-attribute API processes data in batches of 50 and completes in only 5 minutes.
Please suggest.
Batch-Get Custom Attributes covers the additional attributes specific to Document Management, while the path in project is general information from Data Management.
The Data Management API provides some endpoints in the form of commands, which ask the backend to process data for a batch of items.
https://forge.autodesk.com/en/docs/data/v2/reference/http/ListItems/
This command retrieves metadata for up to 50 specified items at a time. It also supports the flag includePathInProject, although the usage is tricky and the API documentation does not mention it. The response includes the pathInProject of these items. This may save more time than iterating item by item.
{
"jsonapi": {
"version": "1.0"
},
"data": {
"type": "commands",
"attributes": {
"extension": {
"type": "commands:autodesk.core:ListItems",
"version": "1.0.0" ,
"data":{
"includePathInProject":true
}
}
},
"relationships": {
"resources": {
"data": [
{
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:vkLfPabPTealtEYoXU6m7w"
},
{
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:bcg7gqZ6RfG4BoipBe3VEQ"
}
]
}
}
}
}
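For context, the command body above is POSTed to the project's commands endpoint of the Data Management API:
POST https://developer.api.autodesk.com/data/v1/projects/:project_id/commands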
Get the item path of a document in BIM 360 Document Management.
Is this question about getting the hierarchy of the item, e.g. root folder >> subfolder >> item? With the endpoint below, by specifying the query parameter includePathInProject=true, it will return the relative path of the item (pathInProject) in the folder structure.
https://forge.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-items-item_id-GET/
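A request sketch with placeholder project and item IDs (the excerpt that follows comes from such a call's response):
GET https://developer.api.autodesk.com/data/v1/projects/:project_id/items/:item_id?includePathInProject=true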
"data": {
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:xxx",
"attributes": {
"displayName": "my-issue-att.png",
"createTime": "2021-03-12T04:51:01.0000000Z",
"createUserId": "xxx",
"createUserName": "Xiaodong Liang",
"lastModifiedTime": "2021-03-12T04:51:02.0000000Z",
"lastModifiedUserId": "200902260532621",
"lastModifiedUserName": "Xiaodong Liang",
"hidden": false,
"reserved": false,
"extension": {
"type": "items:autodesk.bim360:File",
"version": "1.0",
"schema": {
"href": "https://developer.api.autodesk.com/schema/v1/versions/items:autodesk.bim360:File-1.0"
},
"data": {
"sourceFileName": "my-issue-att.png"
}
},
"pathInProject": "/Project Files"
}
Or you can iterate upward using the parent data:
"parent": {
"data": {
"type": "folders",
"id": "urn:adsk.wipprod:fs.folder:co.sdfedf8wef"
},
"links": {
"related": {
"href": "https://developer.api.autodesk.com/data/v1/projects/b.project.id.xyz/items/urn:adsk.wipprod:dm.lineage:hC6k4hndRWaeIVhIjvHu8w/parent"
}
}
},
Get all custom attributes for that item. For Req. 1, an API exists to fetch the path, and for getting custom attributes another API exists from which the data can be retrieved. Is there a way to satisfy both requirements in a single API call instead of using two? In the case of a large number of records, the item-path API takes more than an hour to fetch 19,000+ records and the token expires even though a refresh token is used, while the custom-attribute API processes data in batches of 50 and completes in only 5 minutes. Please suggest.
Let me try to understand the question better. First, there are two things: Custom Attributes Definitions, and Custom Attributes Values (attached to the documents). Could you clarify which of them the 19,000+ records are?
If Custom Attributes Definitions, the API to fetch them is
https://forge.autodesk.com/en/docs/bim360/v1/reference/http/document-management-custom-attribute-definitions-GET/
It supports setting a limit per call; the maximum is 200 records per call, which means you can fetch 19,000+ records in about 95 calls. Each call should be quick (in my experience under 10 seconds), so around 15 minutes in total, instead of more than an hour.
Or, on your side, does each call with 200 records take much longer?
If Custom Attributes Values, the API to fetch them is
https://forge.autodesk.com/en/docs/bim360/v1/reference/http/document-management-versionsbatch-get-POST/
As you know, it returns 50 records per call. And it seems that is already pretty quick on your side, taking only 5 minutes to fetch the values for 19,000+ records?
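If it helps, my understanding of that batch-get call is that the request body is simply the list of version URNs, up to 50 per call; a sketch with placeholder URNs (check the linked documentation for the exact shape):
{
    "urns": [
        "urn:adsk.wipprod:fs.file:vf.xxx?version=1",
        "urn:adsk.wipprod:fs.file:vf.yyy?version=1"
    ]
}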
I would like to be able to get custom output from an "Execute Pipeline Activity". During the execution of the invoked pipeline, I capture some information in a variable using the "Set Variable" activity. I would like to be able to use that value in the master pipeline.
I know that the master pipeline can read the invoked pipeline's name and runId using "@activity('InvokedPipeline').output", but those are the only properties available.
I have the invokable pipeline because it's configurable to be used by multiple other pipelines, assuming we can get the output from it. It currently consists of 8 activities; I would hate to have to duplicate them all across multiple pipelines just because we can't get the output from an invoked pipeline.
Reference: Execute Pipeline Activity
[
{
"name": "MasterPipeline",
"type": "Microsoft.DataFactory/factories/pipelines"
"properties": {
"description": "Uses the results of the invoked pipeline to do some further processing",
"activities": [
{
"name": "ExecuteChildPipeline",
"description": "Executes the child pipeline to get some value.",
"type": "ExecutePipeline",
"dependsOn": [],
"userProperties": [],
"typeProperties": {
"pipeline": {
"referenceName": "InvokedPipeline",
"type": "PipelineReference"
},
"waitOnCompletion": true
}
},
{
"name": "UseVariableFromInvokedPipeline",
"description": "Uses the variable returned from the invoked pipeline.",
"type": "Copy",
"dependsOn": [
{
"activity": "ExecuteChildPipeline",
"dependencyConditions": [
"Succeeded"
]
}
]
}
],
"parameters": {},
"variables": {}
}
},
{
"name": "InvokedPipeline",
"type": "Microsoft.DataFactory/factories/pipelines"
"properties": {
"description": "The child pipeline that makes some HTTP calls, gets some metadata, and sets a variable.",
"activities": [
{
"name": "SetMyVariable",
"description": "Sets a variable after some processing from other activities.",
"type": "SetVariable",
"dependsOn": [
{
"activity": "ProcessingActivity",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"variableName": "MyVariable",
"value": {
"value": "#activity('ProcessingActivity').output",
"type": "Expression"
}
}
}
],
"parameters": {},
"variables": {
"MyVariable": {
"type": "String"
}
}
}
}
]
Hello Heather, and thank you for your inquiry. Custom outputs are not a built-in feature at this time. You can request/upvote the feature in the Azure feedback forum. For now, I do have two workarounds.
Utilizing the invoked pipeline's runId, we can query the REST API (using a Web Activity) for the activity run logs, and from there get the activity outputs. However, before making the query, it is necessary to authenticate.
REST call to get the activities of a pipeline
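For example, with the run ID from the Execute Pipeline activity's output, the query is a POST like the following (subscription, resource group, and factory names are placeholders; the body's time window bounds the search):
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelineruns/{runId}/queryActivityruns?api-version=2018-06-01
{
    "lastUpdatedAfter": "2019-01-01T00:00:00.000Z",
    "lastUpdatedBefore": "2019-12-31T00:00:00.000Z"
}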
For authentication I recommend using a Web Activity to get an OAuth2 token. The URL would be https://login.microsoftonline.com/tenantid/oauth2/token, with the header "Content-Type": "application/x-www-form-urlencoded" and the body "grant_type=client_credentials&client_id=xxxx&client_secret=xxxx&resource=https://management.azure.com/". Since this request is for getting credentials, the Authentication setting for this request is type 'None'. These credentials correspond to an app you create via Azure Active Directory > App Registrations. Do not forget to assign the app RBAC in the Data Factory's Access Control (IAM).
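Here is a minimal sketch of that token request as a Web Activity (tenant ID, client ID, and secret are placeholders; in practice keep the secret in Key Vault):
{
    "name": "GetOAuthToken",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
        "method": "POST",
        "headers": {
            "Content-Type": "application/x-www-form-urlencoded"
        },
        "body": "grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>&resource=https://management.azure.com/"
    }
}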
Another workaround has the child pipeline write its output somewhere. It can write to a database table, or to a blob (I passed the Data Factory variable to a Logic App which wrote to blob storage), or to something else of your choice. Since you are planning to use the child pipeline from many different parent pipelines, I would recommend passing the child pipeline a parameter that it uses to identify the output back to the parent. That could be a blob name, or the parent runId written to a SQL table. This way the parent pipeline knows where to look for the output.
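For example, the parent could hand the child its own run ID to use as that identifier (a sketch; the ParentRunId parameter name is mine, not part of the original pipelines):
"typeProperties": {
    "pipeline": {
        "referenceName": "InvokedPipeline",
        "type": "PipelineReference"
    },
    "waitOnCompletion": true,
    "parameters": {
        "ParentRunId": {
            "value": "@pipeline().RunId",
            "type": "Expression"
        }
    }
}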
I just had a chat with the ADF team, and the response:
[10:11 PM] Mark Kromer
Brajesh Jaishwal: any plans on custom output from execute pipeline activity?
Yes, this work is on the engineering work plan
I am trying to create my Azure DevOps release pipeline for Azure Data Factory.
I have followed the rather cryptic guide from Microsoft (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment ) regarding adding additional parameters to the ARM template that gets generated when you do a publish (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment#use-custom-parameters-with-the-resource-manager-template )
I created an arm-template-parameters-definition.json file in the root of the master branch. When I do a publish, the ARMTemplateParametersForFactory.json in the adf_publish branch remains completely unchanged. I have tried many configurations.
I have defined some pipeline parameters in Data Factory and want them to be configurable in my deployment pipeline. That seems like an obvious requirement to me.
Have I missed something fundamental? Help please!
The JSON is as follows:
{
"Microsoft.DataFactory/factories/pipelines": {
"*": {
"properties": {
"parameters": {
"*": "="
}
}
}
},
"Microsoft.DataFactory/factories/integrationRuntimes": {
"*": "="
},
"Microsoft.DataFactory/factories/triggers": {},
"Microsoft.DataFactory/factories/linkedServices": {},
"Microsoft.DataFactory/factories/datasets": {}
}
I've been struggling with this for a few days and did not find a lot of info, so here's what I've found out: you have to put the arm-template-parameters-definition.json in the configured root folder of your collaboration branch.
If you work in a separate branch, you can test your configuration by downloading the ARM templates from the Data Factory. When you make a change in the parameters definition, you have to reload your browser (F5) to refresh the configuration.
If you really want to parameterize all of the parameters in all of the pipelines, the following should work:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"*":{
"defaultValue":"="
}
}
}
}
I prefer specifying the parameters that I want to parameterize:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"LogicApp_RemoveFileFromADLSURL":{
"defaultValue":"=:-LogicApp_RemoveFileFromADLSURL:"
},
"LogicApp_RemoveBlob":{
"defaultValue":"=:-LogicApp_RemoveBlob:"
}
}
}
}
Just to clarify on top of Simon's great answer: if you have a non-standard Git hierarchy (i.e. you moved the root to a sub-folder, like I did with "Source"), it can be confusing when the docs refer to the "repo root". The file belongs in that configured root folder, not the repository root.
You've got the right idea, but the arm-template-parameters-definition.json file needs to follow the hierarchy of the element you want to parameterize.
Here is the pipeline activity I want to parameterize; the "url" should change based on the environment it's deployed to:
{
"name": "[concat(parameters('factoryName'), '/ExecuteSPForNetPriceExpiringContractsReport')]",
"type": "Microsoft.DataFactory/factories/pipelines",
"apiVersion": "2018-06-01",
"properties": {
"description": "",
"activities": [
{
"name": "NetPriceExpiringContractsReport",
"description": "Passing values to the Logic App to generate the CSV file.",
"type": "WebActivity",
"typeProperties": {
"url": "[parameters('ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties')]",
"method": "POST",
"headers": {
"Content-Type": "application/json"
},
"body": {
"resultSet": "#activity('NetPriceExpiringContractsReportLookup').output"
}
}
}
]
}
}
Here is the arm-template-parameters-definition.json file that turns that URL into a parameter.
{
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"activities": [{
"typeProperties": {
"url": "-::string"
}
}]
}
},
"Microsoft.DataFactory/factories/integrationRuntimes": {},
"Microsoft.DataFactory/factories/triggers": {},
"Microsoft.DataFactory/factories/linkedServices": {
"*": "="
},
"Microsoft.DataFactory/factories/datasets": {
"*": "="
}
}
So basically in the pipelines of the ARM template, it looks for properties -> activities -> typeProperties -> url in the JSON and parameterizes it.
Here are the necessary steps to clear up confusion:
Add the arm-template-parameters-definition.json to your master branch.
Close and re-open your Dev ADF portal
Do a new Publish
Your ARMTemplateParametersForFactory.json will then be updated.
I have experienced similar problems with the ARMTemplateParametersForFactory.json file not being updated when I publish after changing the arm-template-parameters-definition.json.
I figured that I can force update the Publish branch by doing the following:
Update the custom parameter definition file as you wish.
Delete ARMTemplateParametersForFactory.json from the Publish branch.
Refresh (F5) the Data Factory portal.
Publish.
The easiest way to validate your custom parameter .json syntax seems to be exporting the ARM template, just as Simon mentioned.
I have a logic app with an http action.
As the retry policy allows at most 4 retries, I put the action inside a do-until loop (with max count and timeout), using the HTTP status code as the escape condition (until it is 200).
(screenshot: the HTTP action inside the do-until loop)
At runtime I get this error:
InvalidTemplate. Unable to process template language expressions for action 'HttpAction' [..] The template language expression 'equals(outputs('HttpAction')['statusCode'], 200)' cannot be evaluated because property 'statusCode' cannot be selected.
Any hints?
Thanks, Alessandro
EDIT: The HTTP request itself works (tried with Fiddler); in the workflow I suppose it just doesn't get executed due to the template error (why does it fail at runtime and not in edit mode?).
Replying as an answer for readable code.
I just made a quick POC to simulate your scenario, but I can't reproduce your issue.
Code of my Do-Until:
"Until": {
"actions": {
"HttpGetMyValues": {
"inputs": {
"headers": {
"Ocp-Apim-Subscription-Key": "#parameters('OcpApimSubscriptionKey')"
},
"method": "GET",
"uri": "https://myendpointuri"
},
"runAfter": {},
"type": "Http"
}
},
"expression": "#equals(outputs('HttpGetMyValues')['statusCode'], 200)",
"limit": {
"count": 1,
"timeout": "PT1M"
},
"runAfter": {},
"type": "Until"
}
I can save and run the Logic App without problems.
The retry policy is configurable; it's currently only available via code view, but we plan to enable it in the designer very soon. Here's the documented format:
"retryPolicy" : {
"type": "<type-of-retry-policy>",
"interval": <retry-interval>,
"count": <number-of-retry-attempts>
}
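For example, on an HTTP action in code view the policy sits inside the action's inputs (the values here are just a sketch):
"HttpAction": {
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://myendpointuri",
        "retryPolicy": {
            "type": "fixed",
            "interval": "PT20S",
            "count": 4
        }
    },
    "runAfter": {}
}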
I know this is late to the game, but the error is due to the condition that ends the do-until loop referencing an action contained within the loop itself. You'll need to use a value that lives outside of the loop as the reference; see the sketch after this list. I suggest:
Creating a variable before the do-until action with a default value of 400
Using that variable as the value the until condition compares against
Updating the variable (Set variable in Logic Apps) at the end of each loop iteration to equal the HTTP action's status code
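Here is a minimal sketch of that pattern in code view (the action and variable names are mine, not from the original workflow):
"InitStatusCode": {
    "type": "InitializeVariable",
    "inputs": {
        "variables": [
            { "name": "StatusCode", "type": "integer", "value": 400 }
        ]
    },
    "runAfter": {}
},
"Until": {
    "type": "Until",
    "expression": "@equals(variables('StatusCode'), 200)",
    "limit": { "count": 10, "timeout": "PT1H" },
    "actions": {
        "HttpAction": {
            "type": "Http",
            "inputs": { "method": "GET", "uri": "https://myendpointuri" },
            "runAfter": {}
        },
        "SetStatusCode": {
            "type": "SetVariable",
            "inputs": {
                "name": "StatusCode",
                "value": "@outputs('HttpAction')['statusCode']"
            },
            "runAfter": { "HttpAction": [ "Succeeded", "Failed" ] }
        }
    },
    "runAfter": { "InitStatusCode": [ "Succeeded" ] }
}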
In the Until control:
Click Edit Advanced Mode
Replace the text with @equals(actions('HTTPAction').status, 'Succeeded')
This expression stops the do-until loop once HTTPAction has completed successfully.