How to delay an Azure Pipelines stage

I have an Azure pipeline that has 3 stages:
1. Send a notification to the user that resources will be deleted
2. Request approval and wait 24 hours
3. Delete the resource
I have the pipeline almost ready, but I can't work out how to wait for 24 hours in the second stage. The Delay task times out after 60 minutes, and the schedule trigger can't work for me because the pipeline is triggered on another pipeline's completion.
schedules:
- cron: "31 17 * * *"
  displayName: Run every 24 hours to delete the ACI if anything changed in the main branch
  branches:
    include:
    - main
    - releases/*
    exclude:
    - releases/old/*

# Trigger this pipeline on model-train pipeline completion
trigger: none
resources:
  containers:
  - container: mlops
    image: mcr.microsoft.com/mlops/python:latest
  pipelines:
  - pipeline: deploy-to-aci-dev
    source: Deploy-to-aci-dev # Name of the triggering pipeline
    trigger:
      branches:
        include:
        - master

variables:
- template: estimation_engine-variables-template.yml
- group: devopsforai-aml-vg
stages:
- stage: 'Delay'
  displayName: 'Delay for 24 Hours'
  jobs:
  - job: "Send_notif_sendgrid"
    displayName: "send deletion notification for dev aci - sendgrid"
    pool:
      vmImage: 'windows-2019'
    timeoutInMinutes: 60
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: |
          Install-Module -Name PSSendGrid -Force
          Import-Module -Name PSSendGrid
          $Parameters = @{
              FromAddress = "***************************"
              ToAddress   = "***************************"
              Subject     = "Dev-Aci will be deleted in 24 hours"
              Body        = "The Azure container instance in the Dev environment will be deleted in 24 hours"
              Token       = "*******************************************"
              FromName    = "Pipeline Alert"
              ToName      = "DigitRE"
          }
          Send-PSSendGridMail @Parameters
- stage: 'DELETE_ACI'
  displayName: 'Delete Dev ACI'
  condition: variables['ACI_DEPLOYMENT_NAME_DEV']
  jobs:
  - job: "DELETE_ACI"
    displayName: "Delete Dev Aci"
    container: mlops
    timeoutInMinutes: 0
    steps:
    - task: AzureCLI@1
      displayName: 'Install AzureML CLI'
      inputs:
        azureSubscription: '$(WORKSPACE_SVC_CONNECTION)'
        scriptLocation: inlineScript
        inlineScript: |
          set -e # fail on error
          az container delete -n $(ACI_DEPLOYMENT_NAME_DEV) -g $(RESOURCE_GROUP) --yes

The Delay task times out after 60 minutes
With Microsoft-hosted agents, it is impossible to occupy a VM indefinitely. This is clearly documented here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?tabs=yaml&view=azure-devops#timeouts
Setting the value to zero means that the job can run:
1. Forever on self-hosted agents
2. For 360 minutes (6 hours) on Microsoft-hosted agents with a public project and public repository
3. For 60 minutes on Microsoft-hosted agents with a private project or private repository (unless additional capacity is paid for). Paid parallel jobs remove the monthly time limit and allow you to run each job for up to 360 minutes (6 hours).
For your 24-hour requirement, the only way to keep an agent job running that long is a self-hosted agent, but you are using Microsoft-hosted agents.
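If you must stay on Microsoft-hosted infrastructure, a commonly used workaround is to run the Delay task in an agentless (server) job, which does not occupy an agent VM at all. A minimal sketch, assuming the stage and job names, with the job timeout raised above the delay itself (the default for server jobs is 60 minutes):

- stage: Delay
  displayName: 'Delay for 24 Hours'
  jobs:
  - job: Wait24h
    pool: server            # agentless (server) job - no agent VM is occupied
    timeoutInMinutes: 1500  # must be longer than the delay itself
    steps:
    - task: Delay@1
      inputs:
        delayForMinutes: '1440' # 24 hours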
The schedule trigger can't work for me because the pipeline is triggered on another pipeline's completion
The two should be independent of each other. Why do you think the latter influences the former?
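For illustration, a minimal sketch (the resource alias and pipeline name are placeholders) showing a cron schedule and a pipeline-completion trigger coexisting in one YAML file; each one starts runs independently of the other:

schedules:
- cron: "31 17 * * *"
  displayName: Daily scheduled run
  branches:
    include:
    - main

resources:
  pipelines:
  - pipeline: upstream          # placeholder alias
    source: Upstream-Pipeline   # placeholder name of the triggering pipeline
    trigger: true               # also run whenever the upstream pipeline completes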
Follow these troubleshooting steps:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml#i-defined-a-schedule-in-the-yaml-file-but-it-didnt-run-what-happened
1. Check the next few runs that Azure Pipelines has scheduled for your pipeline. You can find these runs by selecting the Scheduled runs action in your pipeline. The list is filtered down to only show you the upcoming few runs over the next few days. If this doesn't meet your expectation, it's probably the case that you've mistyped your cron schedule, or you don't have the schedule defined in the correct branch. Read the topic above to understand how to configure schedules. Reevaluate your cron syntax. All the times for cron schedules are in UTC.
2. Make a small trivial change to your YAML file and push that update into your repository. If there was any problem in reading the schedules from the YAML file earlier, it should be fixed now.
3. If you have any schedules defined in the UI, then your YAML schedules aren't honored. Ensure that you don't have any UI schedules by navigating to the editor for your pipeline and then selecting Triggers.
4. There's a limit on the number of runs you can schedule for a pipeline.
5. If there are no changes to your code, then Azure Pipelines may not start new runs.
Let me know whether the above helps clarify things.

Related

Access Variables From Git Service Connection in a YAML Azure Pipeline

I'm attempting to create a scheduled Azure pipeline where I clone a self-hosted Bitbucket Git repository using a service connection and mirror it to an existing Azure Git repository.
A client keeps a repository of code on their own Bitbucket server. I'd like to set up a pipeline where I pull any changes from that repo on a scheduled interval into my own Azure repository so I can set up automated deployments.
I keep getting hung up on the service connection part of things. The service connection is set up as "Other Git" and contains all of the credentials I need to access the remote Bitbucket server.
trigger: none

schedules:
- cron: "*/30 * * * *" # RUN EVERY 30 MINUTES
  displayName: Scheduled Build
  branches:
    include:
    - my-branch
  always: true # RUNS ALWAYS REGARDLESS OF CHANGES MADE

pool:
  name: Azure Pipelines

steps:
- task: AzureCLI@2
  name: setVariables
  displayName: Set Output Variables
  continueOnError: false
  inputs:
    azureSubscription: "Service Connection Name"
    scriptType: ps
    scriptLocation: inlineScript
    addSpnToEnvironment: true
    inlineScript: |
      Write-Host "##vso[task.setvariable variable=username;isOutput=true]$($env:username)"
      Write-Host "##vso[task.setvariable variable=password;isOutput=true]$($env:password)"
- powershell: |
    # Use the variables from above to pull latest from
    # BitBucket then change the remote origin and push
    # everything to my Azure repo
  displayName: 'PowerShell Script'
When I run this I end up getting an error stating:
The pipeline is not valid. Job: setVariables input connectedServiceNameARM expects a service connection of type AzureRM but the provided service connection is of type git.
How can I access variables from a git service connection in my YAML pipeline?
The AzureCLI task only accepts service connections of the Azure Resource Manager type, so the Git connection you are using doesn't work.
According to your needs, you can check out the repo first. There is a Bitbucket Cloud service connection type for Bitbucket repositories. You can use it to check out multiple repositories in your pipeline if you keep the YAML files in the Azure repo.
Here is a sample YAML:
resources:
  repositories:
  - repository: MyBitbucketRepo
    type: bitbucket
    endpoint: MyBitbucketServiceConnection
    name: MyBitbucketOrgOrUser/MyBitbucketRepo

trigger: none

schedules:
- cron: "*/30 * * * *" # RUN EVERY 30 MINUTES
  displayName: Scheduled Build
  branches:
    include:
    - my-branch
  always: true # RUNS ALWAYS REGARDLESS OF CHANGES MADE

pool:
  name: Azure Pipelines

steps:
- checkout: MyBitbucketRepo
- powershell: |
    # Use the variables from above to pull latest from
    # BitBucket then change the remote origin and push
    # everything to my Azure repo
  displayName: 'PowerShell Script'
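The placeholder PowerShell step above could be filled in roughly as follows. This is an untested sketch: the MyOrg/MyProject/MyAzureRepo path and the AzureReposPat secret variable are assumptions, not values from the question:

steps:
- checkout: MyBitbucketRepo
- powershell: |
    cd "$(Build.SourcesDirectory)"
    # Add the Azure repo as a second remote, authenticating with a PAT
    git remote add azure "https://pat:$($env:AZURE_REPOS_PAT)@dev.azure.com/MyOrg/MyProject/_git/MyAzureRepo"
    # Push the checked-out commit to the target branch of the Azure repo
    git push azure HEAD:refs/heads/my-branch --force
  displayName: 'Mirror to Azure Repos'
  env:
    AZURE_REPOS_PAT: $(AzureReposPat) # secret pipeline variable (assumed)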

How do you delay and schedule a stage to only run the latest build in an Azure DevOps YAML pipeline?

How do you schedule a stage to run at a particular time of day in a multi-stage Azure DevOps pipeline, but only for the latest build?
For example, let's say I have a combined build and release pipeline...
I do 4 check-ins that day
4 pipeline runs, each with a build and release stage, are created (1 for each check-in). The build stages complete as the code is merged.
I don't want to deploy after every check-in, so I schedule the release stage to run overnight.
Once the scheduled release time arrives, isn't it going to kick off 4 simultaneous deployments, one for each pipeline run created that day? How would I ensure it only schedules the latest? What if I wanted to ensure that deployment stage also runs even if there have been no code changes?
I can solve this requirement with classic releases by scheduling the release to be created and deployed at a certain time with the latest artifact. It then doesn't matter how many builds were run that day.
How do you schedule a stage to run at a particular time of day in a multi-stage Azure DevOps pipeline, but only for the latest build?
Based on your requirement, I suggest using cron parameters (scheduled triggers) for the deployment. That lets you schedule a stage to run at a particular time of day.
How would I ensure it only schedules the latest?
To use the latest artifacts, you could add the Download Build Artifacts task to your deployment stage and set the version to download to latest.
At the same time, you could use a condition to determine which stage runs.
Here is my example:
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - main

pool:
  vmImage: ubuntu-latest

stages:
- stage: A
  condition: eq(variables['Build.Reason'], 'IndividualCI')
  jobs:
  - job: A1
    steps:
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.SourcesDirectory)'
        ArtifactName: 'drop'
        publishLocation: 'Container'
- stage: B
  condition: eq(variables['Build.Reason'], 'Schedule')
  jobs:
  - job: B1
    steps:
    - task: DownloadBuildArtifacts@0
      inputs:
        buildType: 'specific'
        project: 'Googletest'
        pipeline: '507'
        buildVersionToDownload: 'latest'
        downloadType: 'single'
        artifactName: 'drop'
        downloadPath: '$(System.ArtifactsDirectory)'
Result:
When you check in changes and trigger the build, it will run the first stage (build).
When the pipeline is triggered by the schedule, it will run the second stage (deployment).
The deployment stage will download the latest artifacts, so it doesn't matter how many builds were run that day.
I'd recommend breaking your pipelines out into two distinct processes (see the sketch below):
Build Verification: have this run with every check-in to verify the integrity of your code.
Nightly Deployments: run a full build & deployment on a schedule every night.
This takes the complexity out of the scenario, while still offering you build verification at check-in.
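A minimal sketch of the two trigger blocks, with assumed file names; note always: true, which covers the case where the nightly deployment should run even when there are no code changes:

# build-verification.yml (assumed name) - runs on every check-in
trigger:
  branches:
    include:
    - main

# nightly-deploy.yml (assumed name) - full build & deploy each night
trigger: none
schedules:
- cron: "0 3 * * *"
  displayName: Nightly build and deployment
  branches:
    include:
    - main
  always: true # run even when nothing changed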

PR trigger not firing when opening a GitHub PR (Azure Pipelines YAML)

The goal
I'm pretty new to Azure and pipelines, and I'm trying to trigger a pipeline from a PR in Azure. The repo lives in GitHub.
Here is the pipeline YAML, pipeline.yml:
trigger: none # I turned this off to stop push triggers (but they work fine)

pr:
  branches:
    include:
    - "*" # This does not trigger the pipeline

stages:
- stage: EchoTriggerStage
  displayName: Echoing trigger reason
  jobs:
  - job: A
    steps:
    - script: echo 'Build reason::::' $(Build.Reason)
      displayName: The build reason
# ... some more stages here triggered by PullRequests....
# ... some more stages here triggered by push (CI)....
The problem
However, the pipeline is not triggered, while push triggers work just fine.
I have read the docs but I can't see why this doesn't work.
The pipeline works perfectly fine when I trigger it through git push. However, when I try to trigger it with PRs from GitHub, nothing happens. In the code above, I tried turning off push triggers and allowing all PRs to trigger the pipeline. Still nothing.
I do not want to delete the pipeline yet and create a new one.
Update
I updated the YAML file to work as suggested below. Since the pipeline actually runs on push now, the rest of the YAML file details are not relevant and are left out.
Other things I have tried:
Opening a new PR on GitHub
Closing/reopening a PR on GitHub
Making a change to an existing PR on GitHub
-> Still no triggering of the pipeline.
You have a mistake in your pipeline. It should be like this:

trigger: none # turned off for push

pr:
- feature/automated-testing

steps:
- script: echo "PIPELINE IS TRIGGERED FROM PR"

Please change this:

- stage:
  - script: echo "PIPELINE IS TRIGGERED FROM PR"

to:

- stage:
  jobs:
  - job:
    steps:
    - script: echo "PIPELINE IS TRIGGERED FROM PR"
EDIT
I used your pipeline:

trigger: none # I turned this off to stop push triggers (but they work fine)

pr:
  branches:
    include:
    - "*" # This does not trigger the pipeline

stages:
- stage: EchoTriggerStage
  displayName: Echoing trigger reason
  jobs:
  - job: A
    steps:
    - script: echo 'Build reason::::' $(Build.Reason)
      displayName: The build reason
# ... some more stages here triggered by PullRequests....
# ... some more stages here triggered by push (CI)....
and all seems to be working. Here is the PR, and here is the build for it.
I didn't do that myself, but you can try to enforce this via a branch policy. To do that, go to the repo settings and configure it there.
The solution was to go to Azure Pipelines -> Edit pipeline -> Triggers -> Enable PR validation.
You can follow the steps below to troubleshoot your pipeline.
1. First, make sure a pipeline was created from the YAML file on the Azure DevOps portal. See the example in this tutorial.
2. The part of your YAML file below is incorrect: the - script task should be under a steps section.
Change:

stages:
- stage:
  - script: echo "PIPELINE IS TRIGGERED FROM PR"

To:

stages:
- stage:
  jobs:
  - job:
    steps:
    - script: echo "PIPELINE IS TRIGGERED FROM PR"
3. I saw you used templates in your YAML file. Please make sure the template YAML files are in the correct format. For example, your dockerbuild-dashboard-client.yml template is a step template. You need to make sure its contents are like below:

parameters:
...

steps:
- script: echo "task 1"
- script: echo "task 2"

And your webapprelease-dashboard-dev-client.yml is a job template. Its contents should be like below:

parameters:
  name: ''
  pool: ''
  sign: false

jobs:
- job: ${{ parameters.name }}
  pool: ${{ parameters.pool }}
  steps:
  - script: npm install
  - script: npm test
  - ${{ if eq(parameters.sign, 'true') }}:
    - script: sign
4. After the pipeline was created on the Azure DevOps portal, you can manually run it to make sure there is no error in the YAML file and the pipeline can be executed successfully.
5. If all of the above check out but the PR trigger still isn't working, you can try deleting the pipeline (created in the first step) on the Azure DevOps portal and recreating a new pipeline from the YAML file.

Azure DevOps pipeline CopyFiles@2 task copies files from agent A but DownloadBuildArtifacts@0 downloads the files to agent B

I have weird behavior with copying files from a hosted agent and then downloading them back to the same agent.
It looks like it copies the files from agent A, but the same pipeline downloads them back to agent B, which is another machine doing another, unrelated build job.
Upload from ios_docker_142_linux_slave_1.
Download back to a different agent, ios_docker_141_linux_slave_3. Why?
- task: CopyFiles@2
  inputs:
    CleanTargetFolder: 'true'
    SourceFolder: '$(Agent.HomeDirectory)/../${{parameters.Folderpath}}'
    Contents: '**'
    TargetFolder: '$(build.artifactstagingdirectory)'
This is expected behavior if you are using parallel jobs. According to your screenshot, there are multiple jobs: self-hosted connect, mac_agent, copy_back_files_to_self...
An agent runs one job at a time. If an agent is running a job, it is in busy status, and other jobs will look for idle agents to run on. Parallel jobs are for running multiple jobs on multiple agents at a time.
To achieve what you want, you need to target a specific agent in your YAML file. Add the pool name to the name field, then add demands. You may try the following YAML:
stages:
- stage: Deploy
  pool:
    name: AgentPoolName # e.g. alm-aws-pool
    demands:
    - agent.name -equals AgentName # e.g. deploy-05-agent1
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building!

Control Job Order in Azure DevOps Release Pipeline

I have a complicated release that spans multiple deployment groups and I am planning to use the 3rd party vsts-git-release-tag extension to tag the release. Ideally, the entire release (all jobs) would succeed first before tagging the repository.
So I am trying to work out what the best way to accomplish that is. If this were a build pipeline rather than a release pipeline, it is clear I could just arrange them using dependsOn, as follows.
jobs:
- job: Deployment_Group_1
  steps:
  - script: echo hello from Deployment Group 1
- job: Deployment_Group_2
  steps:
  - script: echo hello from Deployment Group 2
- job: Tag_Repo
  steps:
  - script: echo this is where I would tag the Repo
  dependsOn:
  - Deployment_Group_1
  - Deployment_Group_2
However, there doesn't seem to be equivalent functionality (at least currently) in release pipelines as specified in this document.
Note
Running multiple jobs in parallel is supported only in build pipelines at present. It is not yet supported in release pipelines.
Although it doesn't specifically mention the dependsOn feature, there doesn't seem to be a way to utilize it in release pipelines (correct me if I am wrong).
I realize I could probably create a separate stage containing a single job and task to create the Git tag, but that feels like a hack. Is there a better way to run a specific release job after all other release jobs have completed?
Just a suggestion: you could make use of multi-stage pipelines, which are also represented very clearly in the Azure DevOps UI.
Stages have jobs, jobs have steps:
Example pipeline YAML for this:
trigger:
  batch: true
  branches:
    include:
    - "*"

resources:
  containers:
  - container: ubuntu
    image: ubuntu:18.04

stages:
- stage: STAGE1
  jobs:
  - job: PrintInfoStage1Job1
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS STAGE 1, JOB 1"
      displayName: "JOB 1"
  - job: PrintInfoStage1Job2
    dependsOn: PrintInfoStage1Job1
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS STAGE 1, JOB 2"
      displayName: "JOB 2"
- stage: STAGE2
  dependsOn: STAGE1
  jobs:
  - job: PrintInfoStage2Job1
    dependsOn: []
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS THE STAGE 2, JOB 1"
      displayName: "JOB 1"
  - job: PrintInfoStage2Job2
    container: ubuntu
    dependsOn: []
    steps:
    - script: |
        echo "THIS IS THE STAGE 2, JOB 2"
      displayName: "JOB 2"
Just be sure not to miss switching on the multi-stage pipelines preview feature in your user settings.
After creating a test project and adding several jobs to a release pipeline for it, then running it several times in a row, it appears that the order of the jobs is deterministic. That is, they always seem to run in the order that they physically appear in the portal.
I did several Google searches, and this behavior doesn't seem to be documented anywhere, so I don't know for sure if it is guaranteed. But it will probably work for my case.
Please leave a comment if there are any official sources confirming that the job order is guaranteed.
Looking at the example specified, you don't need to create different jobs for each of the steps; every task could be added to a single job.

jobs:
- job:
  steps:
  - script: echo hello from Deployment Group 1
  - script: echo hello from Deployment Group 2
  - script: echo this is where I would tag the Repo

You can also remove the jobs > job levels in the code if you want.
I was facing a similar issue with my jobs not being executed in a defined order. I was also referencing other templates for jobs, and I was under the impression that every template has to be in a new job. Later, I managed to dissolve my jobs into tasks (see the sketch after these points).
Points to note:
A job runs on a separate agent, which you might not want if it is just a sequence of scripts you want to execute. A job essentially contains steps, which is a group of tasks.
Tasks in steps are executed one by one, so you don't have to provide the order explicitly.
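For illustration, a minimal sketch of referencing templates as steps inside a single job rather than as separate jobs; the template file names here are hypothetical:

jobs:
- job: AllSteps
  steps:
  - template: build-steps.yml # hypothetical steps template (a top-level 'steps:' list)
  - template: tag-steps.yml   # runs after the previous template, on the same agent

Each referenced file then contains a plain steps: list, so everything runs sequentially on one agent.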
You can use conditions:
jobs:
- job: Deployment_Group_1
  steps:
  - script: echo hello from Deployment Group 1
- job: Deployment_Group_2
  dependsOn: Deployment_Group_1
  condition: succeeded()
  steps:
  - script: echo hello from Deployment Group 2
- job: Tag_Repo
  steps:
  - script: echo this is where I would tag the Repo
  dependsOn:
  - Deployment_Group_1
  - Deployment_Group_2
