Azure DevOps REST API - Get agent pools for jobs?

I'm using the Azure DevOps REST API to retrieve pipeline runs (aka "builds"). The build response has a bunch of good data, but it seems that the pool it reports only applies if the overall pipeline has a top-level pool defined.
For example, I have a pipeline that runs several parallel jobs, each one in a different self-hosted agent pool. But when I retrieve a build of this pipeline using the REST API, the only data available is for the pipeline's pool, which is the normal Hosted Ubuntu 1604 response you get for Microsoft-hosted builds - there's no mention of any of the self-hosted agent pools that did all the work.
I've tried drilling down into different sections (including the stage and task queries). The task level will eventually show the name of the agent used, but it's just a string, so it's not easy to infer the agent pool used unless you happen to name your agents in a specific way.
Is there any way to drill down into the individual "jobs" that ran as part of a pipeline and see what agent pools they were run on, using the REST API?

The Build response contains a timeline link (under the _links field).
You can form it like this: https://dev.azure.com/<org>/<project>/_apis/build/builds/<buildId>/Timeline
The JSON response has useful info for each job/task, including the queueId.
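A minimal sketch of how that could be put together in Python (org, project, build id and PAT are placeholders; the api-version values may need adjusting for your organization). Job-level timeline records carry a queueId, and the distributed task queues endpoint can resolve that to an agent pool name:

import requests
from requests.auth import HTTPBasicAuth

ORG, PROJECT, BUILD_ID, PAT = "<org>", "<project>", 1234, "<personal-access-token>"
auth = HTTPBasicAuth("", PAT)  # PAT auth: blank user name, token as password
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis"

# Timeline for the build: one record per stage/phase/job/task
timeline = requests.get(f"{base}/build/builds/{BUILD_ID}/Timeline?api-version=6.0", auth=auth).json()

for record in timeline["records"]:
    if record["type"] == "Job" and record.get("queueId"):
        # Resolve the queue to its underlying agent pool
        queue = requests.get(f"{base}/distributedtask/queues/{record['queueId']}?api-version=6.0-preview.1", auth=auth).json()
        print(record["name"], "->", queue["pool"]["name"])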

At present, the REST API cannot drill down into the individual "jobs" that ran as part of a pipeline and report which agent pools they ran on; that REST API only returns the pipeline's default settings.
As a workaround, we can use the Builds - Get Build Log API and search the log for the agent's name, as in the sketch below:
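A rough Python sketch of that workaround (org, project, build id and PAT are placeholders): list the build's logs and scan each one for the line where the job-initialization log prints the agent name; the exact wording of that line can vary between agent versions.

import requests
from requests.auth import HTTPBasicAuth

ORG, PROJECT, BUILD_ID, PAT = "<org>", "<project>", 1234, "<personal-access-token>"
auth = HTTPBasicAuth("", PAT)
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}"

# Builds - Get Build Logs: one entry per log produced by the build
logs = requests.get(f"{base}/logs?api-version=6.0", auth=auth).json()["value"]

for log in logs:
    # Builds - Get Build Log returns the raw log text
    text = requests.get(f"{base}/logs/{log['id']}?api-version=6.0", auth=auth).text
    for line in text.splitlines():
        if "Agent name" in line:
            print(f"log {log['id']}: {line.strip()}")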

Related

Application code update on Azure Virtual Machine Scale Set (VMSS)

Currently, we are hosting five websites on a Linux VM. The websites reside in separate directories and are served by Nginx. SSL is terminated at the Azure Application Gateway, which sends the traffic to the VM. If a file is updated in a remote repository, the local copy is updated by a cron task, which is a simple Bash script running git pull plus a few additional lines. Not all five websites need to be updated at the same time.
We created an image of the VM and provisioned a VMSS.
What would be the easiest or standard way of deploying the code to the VMSS? The code also needs some manual changes each time due to client requirements.
Have a look into Azure Durable Functions as an active scripted deployment manager.
You can configure your Durable Function to be triggered via a cron schedule, then it can orchestrate a series of tasks, monitoring for responses from the deployment targets for acceptable response before continuing each step or even waiting for user input to proceed.
By authoring your complex workflow in any of C#, JavaScript, Python, or PowerShell, you are only limited by your own ability to turn your manual process into a scripted one.
Azure Functions is just one option of many; it really comes down to the complexity of your workflow and the individual tasks. Octopus Deploy is a common product used to automate Azure application deployments and may have templates that match your current process. I go straight to Durable Functions when I find it too hard to configure complex steps that involve waiting for specific responses from targets before proceeding to the next step, and when I want to use C# to evaluate those responses or reuse some of my application logic as part of the workflow.
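As a very rough illustration of what such an orchestrator could look like in Python (the activity names GetPilotTargets, DeploySite and CheckHealth are hypothetical; they would be your own activity functions wrapping the git pull / Nginx / health-check steps):

import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Hypothetical activity: return the list of VMSS instances / sites to update
    targets = yield context.call_activity("GetPilotTargets", None)
    updated = []
    for target in targets:
        # Hypothetical activities: deploy to one target, then verify it responds correctly
        yield context.call_activity("DeploySite", target)
        healthy = yield context.call_activity("CheckHealth", target)
        if not healthy:
            return {"status": "halted", "failed": target, "updated": updated}
        updated.append(target)
    return {"status": "completed", "updated": updated}

main = df.Orchestrator.create(orchestrator_function)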

My Azure DevOps pipeline job is not running: I have reached the maximum number of requests, but I am new

I am new to Azure and I would like to set up an App Service along with a pipeline on Azure DevOps for continuous integration, so I decided to try it out with the free plan.
I am trying to set up a pipeline on Azure DevOps with a repository from GitHub, and I haven't changed anything in the azure-pipelines.yml so I can test whether it works. When I run the pipeline and check the default jobs, they stay in the queue, and when I view the messages in the console it says the following:
This agent request is not running because you have reached the maximum number of requests that can run for parallelism type 'Microsoft-Hosted Private'. Current position in queue: 1
I have tried Googling around but haven't found anything useful yet except that you have to send an email to a specific address (azpipelines-freetier@microsoft.com). Now I have done this, but haven't received any answer yet. Is this the correct solution or am I doing something wrong?
The root cause of the stuck job is that the free Microsoft-hosted agent grant for public and private projects in newly created organizations was restricted in a recent update.
For more detailed info, you could refer to these two docs: Private Project Pipelines, Public Project Pipelines.
In Release 183, the reason given for adding the restrictions is as follows:
Over the past few months, the situation has gotten substantially worse, with a high percentage of new public projects in Azure DevOps being used for crypto mining and other activities we classify as abusive. In addition to taking an increasing amount of energy from the team, this puts our hosted agent pools under stress and degrades the experience of all our users – both open-source and paid.
Private Project:
You could send an email to azpipelines-freetier@microsoft.com in order to get your free tier, including:
Your name
Name of the Azure DevOps organization
Public Project:
You could send an email to azpipelines-ossgrant@microsoft.com in order to get your free grant, including:
Your name
Azure DevOps organization for which you are requesting the free grant
Links to the repositories that you plan to build
Brief description of your project
Since you have sent the email, you could wait for the response and get your free tier.

When deploying to specific machines using Azure DevOps, can I target only the pilot-run machines first and the other machines after that?

I currently have the scenario below to crack:
I have a deployment group (XYZ) on Azure DevOps. This group holds 20 targets. When deploying to production, I want the deployment to happen on only 2-3 machines first; after successful checks, or after 2-3 days, I can trigger it on the remaining machines.
There is no fixed set of machines that will always be my pilot machines; it may differ every time.
As of now I have the approach below, but I want to know the best practice for fulfilling this requirement:
Before deployment I identify the machines where the pilot run will happen and add an extra tag to them in the deployment group. The same tag is also added to the task in the pipeline.
With this approach my deployment team has to modify the pipeline and the deployment group every time; is there a better way to do this?
As indicated in the ticket you mentioned in the comment, the current workarounds are to filter by custom conditions or by tags.
Filter by custom condition:
Create a pipeline variable (e.g. IncludedServers) whose value contains the server names.
Add a custom condition to the step:
and(succeeded(),contains(variables['IncludedServers'],variables['Agent.MachineName']))
Modify the variable as needed when creating the release.
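As a rough illustration (the variable name and server names are just examples): if IncludedServers is set to WEB01,WEB02, then contains(variables['IncludedServers'], variables['Agent.MachineName']) evaluates to true only on the agents named WEB01 or WEB02, so only those machines run the step. Keep in mind that contains() is a substring match, so avoid machine names that are prefixes of one another (e.g. WEB1 and WEB10).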
Filter by tags:
We can use machine tags to limit deployment to specific sets of target servers.
The tags you assign allow you to limit deployment to specific servers when the deployment group is used in a Deployment group job.

Is there a way to get the live logs of a running pipeline using Azure REST API?

I am running a build pipeline in Azure that has multiple tasks. I have a requirement to get the logs using REST API calls after triggering the pipeline. I used Builds - Get Build Logs, but it lists only the logs of completed tasks, not the ongoing task. Is there any mechanism available to get ongoing task logs / live logs?
Is there a way to get the live logs of a running pipeline using Azure REST API?
I am afraid there is no such mechanism available to get ongoing task logs/live logs.
As we know, the Representational State Transfer (REST) APIs are service endpoints that support sets of HTTP operations (methods), which provide create, retrieve, update, or delete access to the service's resources.
The task is executed inside the agent, and the result of the execution is passed back to Azure DevOps only after the task has completed. So the service only has a task's results once that task has finished, and only then can the REST API return them.
So we cannot use the Azure REST API to get ongoing task logs / live logs. This is a limitation of the Azure DevOps design.
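For what it's worth, the closest you can get is a polling loop along these lines (Python sketch; org, project, build id and PAT are placeholders): re-read the build and its timeline while the build is in progress, and fetch each task's log once that task's record reports completed.

import time
import requests
from requests.auth import HTTPBasicAuth

ORG, PROJECT, BUILD_ID, PAT = "<org>", "<project>", 1234, "<personal-access-token>"
auth = HTTPBasicAuth("", PAT)
base = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}"

printed = set()
while True:
    build = requests.get(f"{base}?api-version=6.0", auth=auth).json()
    timeline = requests.get(f"{base}/Timeline?api-version=6.0", auth=auth).json()
    for record in timeline["records"]:
        # Per the limitation above, a task's log is fetched only once its record is completed
        if record["type"] == "Task" and record["state"] == "completed" and record.get("log"):
            log_id = record["log"]["id"]
            if log_id not in printed:
                printed.add(log_id)
                text = requests.get(f"{base}/logs/{log_id}?api-version=6.0", auth=auth).text
                print(f"--- {record['name']} ---")
                print(text)
    if build["status"] == "completed":
        break
    time.sleep(15)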
Hope this helps.

Is it possible to use Control-M to orchestrate Azure Data Factory jobs?

Is it possible to use Control-M to orchestrate Azure Data Factory jobs?
I found this agent that can be installed on a VM:
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bmc-software.ctm-agent-linux-arm
But I didn't find any documentation about it.
Can Control-M call a REST API to run and monitor a job? I could use Azure Functions and Blobs to control it.
All Control-M components can be installed and operated on Azure (and most other cloud infrastructure). Either use the link you quote or alternatively deploy Agents using Control-M Automation API (AAPI) or a combination of the two.
So long as you are on a fairly recent version of Control-M you can do most operational tasks; for example, you can monitor a job like so -
ctm run jobs:status::get -s "jobid=controlm:00001"
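If the tooling around Control-M is in Python, one way to consume that command is to shell out to the ctm CLI and parse its JSON output. The field names below (statuses, name, status) are what I have seen the Automation API return; verify them against your Control-M version.

import json
import subprocess

# Requires the Control-M Automation API CLI to be installed and already logged in
result = subprocess.run(
    ["ctm", "run", "jobs:status::get", "-s", "jobid=controlm:00001"],
    capture_output=True, text=True, check=True)

payload = json.loads(result.stdout)
for job in payload.get("statuses", []):
    print(job.get("name"), job.get("status"))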
The Control-M API is developing quickly, check out the documentation linked from here -
https://docs.bmc.com/docs/automation-api/9019100monthly/services-872868740.html#Control-MAutomationAPI-Services-ctmrunjob:status::get
Also see -
https://github.com/controlm/automation-api-quickstart
http://controlm.github.io
https://docs.bmc.com/docs/display/public/workloadautomation/Control-M+Automation+API+-+Services
https://52.32.170.215:8443/automation-api/swagger-ui.html
At this time, I don't believe you will find any out of the box connectors for Control-M to Azure Data Factory integration. You do have some other options, though!
Proxy ADF Yourself
You can write the glue code for this, essentially being the mediator between the two.
Write a program that will invoke the ADF REST API to run a pipeline.
Details Here
After triggering the pipeline, write the code that monitors its status.
Details Here
Have Control-M call your code via an Agent that has access to it.
I've done this with a C# console app running on a local server, and a Control-M Agent that invokes the glue code.
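A minimal Python sketch of that glue code, calling the ADF REST API directly (the answer above used a C# console app; this is just an equivalent outline). The subscription, resource group, factory, pipeline and service principal values are placeholders, and the exit code at the end is the kind of signal a Control-M job can evaluate:

import sys
import time
import requests
from azure.identity import ClientSecretCredential

SUB, RG, FACTORY, PIPELINE = "<subscription-id>", "<resource-group>", "<factory-name>", "<pipeline-name>"

# Service principal token for the ARM endpoint
cred = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
token = cred.get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.DataFactory/factories/{FACTORY}")

# 1. Trigger the pipeline (Pipelines - Create Run)
run = requests.post(f"{base}/pipelines/{PIPELINE}/createRun?api-version=2018-06-01",
                    headers=headers, json={}).json()
run_id = run["runId"]

# 2. Poll the run until it leaves the in-progress states (Pipeline Runs - Get)
while True:
    status = requests.get(f"{base}/pipelineruns/{run_id}?api-version=2018-06-01",
                          headers=headers).json()["status"]
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline run {run_id} finished with status {status}")
sys.exit(0 if status == "Succeeded" else 1)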
The Control-M documentation here also describes a way to execute an Azure Function directly from Control-M. This means you could put your code in an Azure Function.
Details Here
ALTERNATIVE METHOD
For a "no code" way, check out this Logic App connector.
Write a Logic App to run the pipeline and get the pipeline run, monitoring the status in a loop.
Next, Control-M should be able to use a plugin to invoke the logic app.
Notes
Note that Control-M requires an HTTP trigger for Azure Functions and Logic Apps.
You might also be able to take advantage of the Control-M Web Services plugin, though in my experience I wasn't impressed with its lack of support for different authentication methods.
Hope this helps!
I just came across this post so a bit late to the party.
Control-M includes Application Integrator, which enables you to use integrations created by others and to either enhance them or build your own. You can use REST or the CLI to tell Control-M what requests should be sent to an application when a job is started, during execution and monitoring, and how to analyze results and collect output.
A public repository accessible from Application Integrator shows existing job types, and there is one for Data Factory. I have extended it a bit so that the Data Factory pipeline is started and monitored to completion via REST, and then a PowerShell script is invoked to retrieve the pipeline run information for each activity within the pipeline.
I've posted that job and script in https://github.com/JoeGoldberg/automation-api-community-solutions/tree/master/4-ai-job-type-examples/CTM4AzureDataFactory but the README is coming later.
