Add steps to a build definition in Azure DevOps Server 2019

I'm trying to create ADOS build definitions programmatically. I found a similar question with an answer here: How to create Build Definitions through VSTS REST API
In the answer's example, the steps property is empty, so I included some steps (taken from the JSON returned for another build definition by the same API). The result is that the created build definition has no steps.
I dug into the .NET API browser and found that there is a BuildProcess class with a Process property that should take a DesignerProcess for TFVC pipelines (since YAML is only supported for Git repos). DesignerProcess has a Phase property that is read-only, which may be the reason my steps aren't being created.
However, I still need to find a way to create a build's steps programmatically.

If you don't know what to put in the steps property, you can grab the request body from the browser's developer tools while saving a Classic UI pipeline.
Here are the detailed steps:
1. Create a Classic UI pipeline in ADOS with the steps you want. (Don't save it yet.)
2. If you are using Edge, press F12 to open the developer tools window, then choose the 'Network' tab.
3. Click Save and you will find a record called 'definitions'.
4. Click it; the request body is at the bottom of the page. You will find the step-related information in the process and processParameters properties.
If you are using a different browser, steps 2, 3, and 4 may differ slightly.
Then you can edit the captured process and use it in your REST API request body.
Here is a simple example of a request body that includes a Command Line task.
"process": {
"phases": [
{
"condition": "succeeded()",
"dependencies": [],
"jobAuthorizationScope": 1,
"jobCancelTimeoutInMinutes": 0,
"jobTimeoutInMinutes": 0,
"name": "Agent job 1",
"refName": "Job_1",
"steps": [
{
"displayName": "Command Line Script",
"refName": null,
"enabled": true,
"continueOnError": false,
"timeoutInMinutes": 0,
"alwaysRun": false,
"condition": "succeeded()",
"inputs": {
"script": "echo Hello world\n",
"workingDirectory": "",
"failOnStderr": "false"
},
"overrideInputs": {},
"environment": {},
"task": {
"id": "d9bafed4-0b18-4f58-968d-86655b4d2ce9",
"definitionType": "task",
"versionSpec": "2.*"
}
}
],
"target": {
"type": 1,
"demands": [],
"executionOptions": {
"type": 0
}
},
"variables": {}
}
],
"type": 1,
"target": {
"agentSpecification": {
"metadataDocument": "https://mmsprodea1.vstsmms.visualstudio.com/_apis/mms/images/VS2017/metadata",
"identifier": "vs2017-win2016",
"url": "https://mmsprodea1.vstsmms.visualstudio.com/_apis/mms/images/VS2017"
}
},
"resources": {}
}
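For reference, here is a rough sketch (not part of the original answer) of sending such a body to the definitions endpoint with Python and the requests library. The collection URL, project, queue, repository details and personal access token below are placeholders, it assumes a TFVC repository, and the api-version may need adjusting for your Azure DevOps Server 2019 instance.

import requests

# Placeholders for illustration only -- adjust to your environment.
BASE_URL = "https://tfs.example.com/DefaultCollection/MyProject"  # hypothetical server/collection/project
PAT = "<personal-access-token>"                                   # needs Build (read & execute) scope

process = {
    # Paste the "process" object from the request body shown above here.
}

definition = {
    "name": "MyGeneratedDefinition",
    "type": "build",
    "queue": {"name": "Default"},       # hypothetical agent queue name
    "repository": {                     # hypothetical TFVC repository settings
        "name": "MyProject",
        "type": "TfsVersionControl",
        "defaultBranch": "$/MyProject",
    },
    "process": process,
}

response = requests.post(
    f"{BASE_URL}/_apis/build/definitions",
    params={"api-version": "5.0"},      # Azure DevOps Server 2019 supports api-version 5.0
    json=definition,
    auth=("", PAT),                     # basic auth: empty user name plus the PAT
)
response.raise_for_status()
print("Created definition id:", response.json()["id"])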
What's more, creating YAML pipelines through the REST API is not currently supported; see this related question for detailed information.

Related

Azure dashboard not found after setting via API

I am testing the setting of simple dashboard examples using the Azure CLI, following the docs page The structure of Azure dashboards. My JSON file defines a single tile (a small square of the browser window) that outputs a short message. I used the following command in the VS Code terminal:
az portal dashboard import --name "mySingleTileDashboard1" --resource-group "example-resources_copy" --input-path singleTileDashboard.json
Here is the terminal output showing the resulting JSON:
{
  "id": "/subscriptions/xxxxxxxxxxxx/resourceGroups/example-resources_copy/providers/Microsoft.Portal/dashboards/mySingleTileDashboard1",
  "lenses": {
    "0": {
      "metadata": null,
      "order": 0,
      "parts": {
        "0": {
          "metadata": null,
          "position": {
            "colSpan": 3,
            "metadata": null,
            "rowSpan": 2,
            "x": 0,
            "y": 0
          }
        }
      }
    }
  },
  "location": "westus",
  "metadata": {
    "inputs": [],
    "settings": {
      "content": {
        "settings": {
          "content": "## Dashboard Overview\r\nSingle tile example. Code lifted from azure-portal-dashboards-structure",
          "subtitle": "",
          "title": ""
        }
      }
    },
    "type": "Extension/HubsExtension/PartType/MarkdownPart"
  },
  "name": "mySingleTileDashboard1",
  "resourceGroup": "example-resources_copy",
  "tags": {
    "hidden-title": "Created via API"
  },
  "type": "Microsoft.Portal/dashboards"
}
The portal shows that the dashboard has been set. The Overview shows all parameters present. But when I use "Go to dashboard" I get an error page:
Dashboard 'arm/subscriptions/xxxxxxxxxx/resourcegroups/example-resources_copy/providers/microsoft.portal/dashboards/mysingletiledashboard1' no longer exists. It was previously published to resource group 'example-resources_copy' in subscription 'xxxxxxxxxxxxx'.
I followed up on the error using Resolve errors for resource not found. The Activity Log showed that the Set Dashboard operation succeeded.
The line of my script I had doubts about was the following:
"type": "Extension/HubsExtension/PartType/MarkdownPart"
The docs show the line as Extension[azure]/ ... etc. I tried both versions but got the same result.
Previously I have only set a blank dashboard via script, and that worked; here it doesn't. So I suspect the line with the MarkdownPart may be screwing things up.
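One way to check that suspicion (not from the original post) is to read the stored dashboard back from Azure Resource Manager and compare it with the JSON that was imported. A rough Python sketch, where the subscription ID and the API version are assumptions to adjust:

import json
import requests
from azure.identity import DefaultAzureCredential

# Placeholders -- replace with your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "example-resources_copy"
DASHBOARD_NAME = "mySingleTileDashboard1"
API_VERSION = "2019-01-01-preview"   # assumed API version for Microsoft.Portal/dashboards

# Acquire an ARM token with whatever credential is available locally (CLI login, etc.).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Portal"
    f"/dashboards/{DASHBOARD_NAME}"
)
resp = requests.get(
    url,
    params={"api-version": API_VERSION},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# Inspect where the MarkdownPart settings actually ended up
# (under lenses -> parts -> metadata, or hoisted to the top-level metadata).
print(json.dumps(resp.json(), indent=2))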

Azure Data Factory with SP Activity - Debug and Publish fails

I've created an Azure Data Factory pipeline with one simple Stored Procedure Activity which is supposed to fetch data from a Stored Procedure residing in Azure SQL DB. The Stored Procedure accepts one input parameter. I've published these changes already.
When I click on Validate, I get the error below, which gives me hardly any information:
{
  "code": "BadRequest",
  "message": null,
  "target": "pipeline//runid/dcb92f70-0a4b-4be1-943b-5ggn68365tyc",
  "details": null,
  "error": null
}
When I click on Trigger now, it just says 'Failed to run pipeline' without any more details.
My pipeline JSON is given below:
{
  "name": "GetPopulationRecordsForAnalysis",
  "properties": {
    "description": "Gets Population Records",
    "activities": [
      {
        "name": "GetPopulationRecords",
        "type": "SqlServerStoredProcedure",
        "dependsOn": [],
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "userProperties": [],
        "typeProperties": {
          "storedProcedureName": "[dbo].[usp_GetPopulationRecords]",
          "storedProcedureParameters": {
            "#countryID": {
              "value": "48",
              "type": "Int64"
            }
          }
        },
        "linkedServiceName": {
          "referenceName": "AzureSqlLinkedService",
          "type": "LinkedServiceReference"
        }
      }
    ],
    "annotations": [],
    "lastPublishTime": "2022-08-02T13:37:27Z"
  },
  "type": "Microsoft.DataFactory/factories/pipelines"
}
What am I doing wrong here?
I have figured out the issue now. The first mistake I was making was giving the complete stored procedure name with the schema, '[' characters and all; usp_GetPopulationRecords works just fine. The second was adding an extra '#' character before my input parameter, the way we do when running it in SQL Server. That is not required here; countryID alone works fine. Hope my answer helps.
When we connect to the linked service in the Settings tab, the 'Stored procedure name' dropdown is populated with the names of the stored procedures present in the database.
We select the required stored procedure, and upon clicking Import it displays each parameter's Name, Type, and Value (we supply the Value).
We need not put '#' before the parameter name.
The corresponding JSON then uses the plain stored procedure name and the parameter key without the '#' prefix.
This validates successfully in ADF.

How to get custom output from an executed pipeline?

I would like to be able to get custom output from an "Execute Pipeline Activity". During the execution of the invoked pipeline, I capture some information in a variable using the "Set Variable" activity. I would like to be able to use that value in the master pipeline.
I know that the master pipeline can read the invoked pipeline's name and runId using "#activity('InvokedPipeline').output", but those are the only properties available.
I have the invokable pipeline because it's configurable to be used by multiple other pipelines, assuming we can get the output from it. It currently consists of 8 activities; I would hate to have to duplicate them all across multiple pipelines just because we can't get the output from an invoked pipeline.
Reference: Execute Pipeline Activity
[
  {
    "name": "MasterPipeline",
    "type": "Microsoft.DataFactory/factories/pipelines",
    "properties": {
      "description": "Uses the results of the invoked pipeline to do some further processing",
      "activities": [
        {
          "name": "ExecuteChildPipeline",
          "description": "Executes the child pipeline to get some value.",
          "type": "ExecutePipeline",
          "dependsOn": [],
          "userProperties": [],
          "typeProperties": {
            "pipeline": {
              "referenceName": "InvokedPipeline",
              "type": "PipelineReference"
            },
            "waitOnCompletion": true
          }
        },
        {
          "name": "UseVariableFromInvokedPipeline",
          "description": "Uses the variable returned from the invoked pipeline.",
          "type": "Copy",
          "dependsOn": [
            {
              "activity": "ExecuteChildPipeline",
              "dependencyConditions": [
                "Succeeded"
              ]
            }
          ]
        }
      ],
      "parameters": {},
      "variables": {}
    }
  },
  {
    "name": "InvokedPipeline",
    "type": "Microsoft.DataFactory/factories/pipelines",
    "properties": {
      "description": "The child pipeline that makes some HTTP calls, gets some metadata, and sets a variable.",
      "activities": [
        {
          "name": "SetMyVariable",
          "description": "Sets a variable after some processing from other activities.",
          "type": "SetVariable",
          "dependsOn": [
            {
              "activity": "ProcessingActivity",
              "dependencyConditions": [
                "Succeeded"
              ]
            }
          ],
          "userProperties": [],
          "typeProperties": {
            "variableName": "MyVariable",
            "value": {
              "value": "#activity('ProcessingActivity').output",
              "type": "Expression"
            }
          }
        }
      ],
      "parameters": {},
      "variables": {
        "MyVariable": {
          "type": "String"
        }
      }
    }
  }
]
Hello Heather, and thank you for your inquiry. Custom outputs are not an inbuilt feature at this time; you can request/upvote the feature in the Azure feedback forum. For now, I do have two workarounds.
Utilizing the invoked pipeline's runID, we can query the REST API (using Web Activity) for the activity run logs, and from there, the activity outputs. However, before making the query, it is necessary to authenticate.
REST call to get the activities of a pipeline
For authentication I recommend using a Web Activity to get an OAuth2 token. The URL would be https://login.microsoftonline.com/tenantid/oauth2/token, with the header "Content-Type": "application/x-www-form-urlencoded" and the body "grant_type=client_credentials&client_id=xxxx&client_secret=xxxx&resource=https://management.azure.com/". Since this request is to get credentials, the Authentication setting for this request is type 'None'. These credentials correspond to an app you create via Azure Active Directory > App Registrations. Do not forget to assign the app RBAC in Data Factory Access Control (IAM).
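To make those two calls concrete, here is a rough Python sketch (outside ADF, using the requests library) of the same sequence the Web Activities would perform; the tenant, app credentials, subscription, resource group, factory name and run ID below are placeholders.

import requests

# Placeholders for illustration only.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<client-secret>"
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<data-factory-name>"
RUN_ID = "<child-pipeline-run-id>"   # in ADF this would come from the Execute Pipeline activity output

# 1. Token request -- the same call the first Web Activity makes.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://management.azure.com/",
    },
)
token = token_resp.json()["access_token"]

# 2. Query the activity runs of the child pipeline run -- the second Web Activity.
runs_resp = requests.post(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
    f"/factories/{FACTORY_NAME}/pipelineruns/{RUN_ID}/queryActivityruns",
    params={"api-version": "2018-06-01"},
    headers={"Authorization": f"Bearer {token}"},
    json={
        "lastUpdatedAfter": "2019-01-01T00:00:00Z",
        "lastUpdatedBefore": "2030-01-01T00:00:00Z",
    },
)
for activity in runs_resp.json().get("value", []):
    print(activity["activityName"], activity.get("output"))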
The other workaround has the child pipeline write its output. It can write to a database table, or to a blob (I passed the Data Factory variable to a Logic App which wrote to blob storage), or to something else of your choice. Since you are planning to use the child pipeline from many different parent pipelines, I would recommend passing the child pipeline a parameter that it uses to identify the output to the parent. That could mean a blob name, or writing the parent runID to a SQL table. This way the parent pipeline knows where to look for the output.
I just had a chat with the ADF team, and the response was:
[10:11 PM] Mark Kromer
Brajesh Jaishwal: any plans on custom output from execute pipeline activity?
Yes, this work is on the engineering work plan

Azure Digital Twins: what does the "GetOntologies" response mean?

I am trying to understand the provisioning process of Digital Twins and I am reading this doc: https://learn.microsoft.com/en-us/azure/digital-twins/tutorial-facilities-setup
But I cannot follow one point in this section: I cannot understand the response of "dotnet run GetOntologies".
Can anyone help me understand what those values are, and how they relate to "models are available"?
In Azure Digital Twins, the Ontology entity contains a set of all types and subtypes that can be used in your application. In your example, the "Required" and "Default" ontologies are enabled (this is the default). If you use the REST API to see what the "Default" ontology contains, you get the following:
{
  "id": 2,
  "name": "Default",
  "loaded": true,
  "types": [
    {
      "id": 17,
      "category": "SensorDataType",
      "name": "Humidity",
      "disabled": false,
      "logicalOrder": 0
    },
    {
      "id": 18,
      "category": "SensorDataType",
      "name": "Temperature",
      "disabled": false,
      "logicalOrder": 0
    },
    {
      "id": 19,
      "category": "SensorDataSubtype",
      "name": "RoomHumidity",
      "disabled": false,
      "logicalOrder": 0,
      "friendlyName": "Room Humidity"
    }, // etc etc
As you can see in the example above, the ontology has basic definitions for the sensor, space, and data types related to Smart Building scenarios. The BACnet and Advanced ontologies just add different and more specific types. When you set an ontology to 'enabled', you can start using those types/subtypes. You can check them out in the REST API with:
https://your-url.your-region.azuresmartspaces.net/management/api/v1.0/ontologies/3?includes=Types
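For example, a minimal Python sketch of that call with the requests library, assuming you already have an AAD bearer token for the Digital Twins management API and that ontology id 3 is the one you want to inspect:

import requests

# Placeholders: your management API URL, the ontology id, and an AAD bearer token.
BASE_URL = "https://your-url.your-region.azuresmartspaces.net/management/api/v1.0"
TOKEN = "<aad-bearer-token>"

resp = requests.get(
    f"{BASE_URL}/ontologies/3",
    params={"includes": "Types"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Print the type categories and names the ontology makes available.
for t in resp.json().get("types", []):
    print(t["category"], t["name"])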

How to create a periodic copy of data from OData to Azure DocumentDB

I am trying to make a periodic copy of all the data returning from an OData query into a documentDB collection, on a daily basis.
The copy works fine using the copy wizard, which is A REALLY GREAT option for simple tasks.  Thanks for that.
What isn't working for me though:  The copy just adds data each time, and I have NO WAY that I can SEE with a documentDB sink to "pre-delete" the data in the collection (compare to the SQL sink which has sqlWriterCleanupScript, which I could set to something like Delete * from 'table').
I know I can create an Azure Batch and do what I need, but at this point, I'm not sure that it isn't better to do a function and forego the Azure Data Factory (ADF) for this move.  I'm using ADF for replicating on-prem SQL stuff just fine, because it has the writer cleanup script.
At this point, I'd like to just use DocumentDB but I don't see a way to do it given the way my data works.
Here's a look at my pipeline:
{
  "name": "R-------ProjectToDocDB",
  "properties": {
    "activities": [
      {
        "type": "Copy",
        "typeProperties": {
          "source": {
            "type": "RelationalSource",
            "query": " "
          },
          "sink": {
            "type": "DocumentDbCollectionSink",
            "nestingSeparator": ".",
            "writeBatchSize": 0,
            "writeBatchTimeout": "00:00:00"
            /// this is where a cleanup script would be great.
          },
          "translator": {
            "type": "TabularTranslator",
            "columnMappings": "ProjectId:ProjectId,.....:CostClassification"
          }
        },
        "inputs": [
          {
            "name": "InputDataset-shc"
          }
        ],
        "outputs": [
          {
            "name": "OutputDataset-shc"
          }
        ],
        "policy": {
          "timeout": "1.00:00:00",
          "concurrency": 1,
          "executionPriorityOrder": "NewestFirst",
          "style": "StartOfInterval",
          "retry": 3,
          "longRetry": 0,
          "longRetryInterval": "00:00:00"
        },
        "scheduler": {
          "frequency": "Day",
          "interval": 1
        },
        "name": "Activity-0-_Custom query_->---Project"
      }
    ],
    "start": "2017-04-26T20:13:27.683Z",
    "end": "2099-12-31T05:00:00Z",
    "isPaused": false,
    "hubName": "r-----datafactory01_hub",
    "pipelineMode": "Scheduled"
  }
}
Perhaps there's an update in the pipeline that will create parity between the SQL sink and the DocumentDB sink.
Azure Data Factory does not support a cleanup script for DocumentDB today; it's something in our backlog. If you can describe the end-to-end scenario in a bit more detail, it could help us prioritize. For example, why does appending to the same collection not work? Is it because there's no way to identify the incremental records after each run? For the cleanup requirement, will it always be 'delete *', or might it be based on a timestamp, etc.? Thanks. Until support for a cleanup script is there, a custom activity is the only workaround for now, sorry.
You could use a Logic App that runs on a Timer Trigger.
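If you do go the custom-code route (a custom activity, an Azure Function, or the Logic App suggested above), the cleanup itself is small. Here is a rough sketch with the current azure-cosmos Python SDK, where the endpoint, key, database, container and partition key field are placeholders; at the time of the original post you would have used the older DocumentDB SDK instead.

from azure.cosmos import CosmosClient

# Placeholder connection details -- adjust to your account and collection.
ENDPOINT = "https://<your-account>.documents.azure.com:443/"
KEY = "<account-key>"
DATABASE = "<database-name>"
CONTAINER = "<collection-name>"
PARTITION_KEY_FIELD = "ProjectId"   # assumed partition key of the target collection

client = CosmosClient(ENDPOINT, credential=KEY)
container = client.get_database_client(DATABASE).get_container_client(CONTAINER)

# Delete every document so the daily copy starts from an empty collection.
items = container.query_items(
    query=f"SELECT c.id, c.{PARTITION_KEY_FIELD} FROM c",
    enable_cross_partition_query=True,
)
for item in items:
    container.delete_item(item["id"], partition_key=item[PARTITION_KEY_FIELD])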
