In Azure DevOps, I have two individual YAML pipelines: one for build and another for release. I want to link the build artifact in the YAML release pipeline, like we can for classic release pipelines. Is there any way we can achieve the same for YAML release pipelines?
I'm from the Microsoft for Founders Hub team. If you want to be able to access your artifact across different stages in your pipeline, you can use output variables to pass it to the next stage in your YAML. Please follow the script below.
trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    # quotes keep the shell from treating '##' as a comment
    - script: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
      name: printvar
- stage: B
  dependsOn: A
  jobs:
  - job: B1
    condition: in(stageDependencies.A.A1.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
    steps:
    - script: echo hello from Job B1
  - job: B2
    condition: eq(stageDependencies.A.A1.outputs['printvar.shouldrun'], 'true')
    steps:
    - script: echo hello from Job B2
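Note that output variables cover cross-stage communication within a single pipeline. If the goal is to consume the build pipeline's artifact from a separate YAML release pipeline, as asked, the usual route is a pipeline resource plus a download step. A minimal sketch, assuming the build pipeline definition is named MyBuild and publishes an artifact called drop (both names are placeholders):

# Release pipeline (sketch) - consumes an artifact from another YAML pipeline.
# 'MyBuild' and 'drop' are placeholders for your build pipeline and artifact names.
resources:
  pipelines:
  - pipeline: buildPipeline      # alias used by the download step below
    source: MyBuild              # name of the build pipeline definition
    trigger: true                # run this pipeline when MyBuild completes

steps:
- download: buildPipeline        # artifact lands in $(Pipeline.Workspace)/buildPipeline
  artifact: drop
- script: ls "$(Pipeline.Workspace)/buildPipeline/drop"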
The following is what I found as the minimum necessary to have a manual approval process. Is this the only way to do this at this time?
trigger:
- none

pool: "Pool1"

stages:
- stage: A
  jobs:
  - job: A1
  - job: A2
- stage: B
  jobs:
  - deployment: B1
    displayName: Test Job
    environment: rodney-test-env
    strategy:
      runOnce:
        deploy:
          steps:
          - script: |
              echo 'hello world'
According to your description, currently we can only use environments for a manual approval in a YAML pipeline.
If your pipeline is triggered from a pull request, you can try using a branch policy with required reviewers to check whether it meets your requirements.
For more information, you could refer to the documentation on approvals.
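For reference: once the YAML references an environment (as in your snippet with environment: rodney-test-env), the approval itself is configured in the portal under Environments > your environment > Approvals and checks; nothing further is needed in the YAML.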
I have an azure-pipeline.yml and use the same file to run on the DEV, SIT, and UAT environments, using if/elif conditions to identify the branch name and the environment to deploy to.
Example:
if [[ $BRANCH_NAME == "develop" ]]
then
  env="app-dev"
  ns="app-dev"
elif [[ $TAG_NAME == RC-UAT-* ]]
then
  env="app-uat"
  ns="app-uat"
fi
I want to know the best way to do the same, where I can templatize my yml file and identify which branch triggered the pipeline and which environment it is running for.
You could use "Condition" in your pipeline. For details, please refer to this document: Conditions - Azure Pipelines | Microsoft Docs.
According to your requirements, I also tried the following YAML example:
variables:
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]
  isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/dev')]

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - script: echo Hello Stage A!
- stage: B
  condition: and(succeeded(), eq(variables.isMain, 'true'))
  jobs:
  - job: B1
    steps:
    - script: echo Hello Stage B!
    - script: echo $(isMain)
- stage: C
  condition: and(succeeded(), eq(variables.isDev, 'true'))
  dependsOn: []
  jobs:
  - job: C1
    steps:
    - script: echo Hello Stage C!
    - script: echo $(isMain)
When I trigger the pipeline from the 'main' branch, stage C is skipped.
When I trigger the pipeline from the 'dev' branch, stage B is skipped.
You could therefore adapt the above example to design your "if else" pipeline.
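If you also want to templatize the YAML itself, one option is to factor the deployment into a stage template and instantiate it once per environment, passing the branch condition in. A sketch, assuming a hypothetical template file deploy-stage.yml and the environment names from your script:

# deploy-stage.yml (hypothetical reusable stage template)
parameters:
- name: stageName
  type: string
- name: envName
  type: string
- name: branchCondition    # runtime expression deciding whether this stage runs
  type: string

stages:
- stage: ${{ parameters.stageName }}
  condition: ${{ parameters.branchCondition }}
  jobs:
  - job: deploy
    steps:
    - script: echo Deploying to ${{ parameters.envName }}

# azure-pipeline.yml - one template instance per environment
stages:
- template: deploy-stage.yml
  parameters:
    stageName: DEV
    envName: app-dev
    branchCondition: eq(variables['Build.SourceBranch'], 'refs/heads/develop')
- template: deploy-stage.yml
  parameters:
    stageName: UAT
    envName: app-uat
    branchCondition: startsWith(variables['Build.SourceBranch'], 'refs/tags/RC-UAT-')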
I'm trying to integrate a QA pipeline in Azure DevOps that triggers during a dev pipeline. The dev pipeline deploys the build to 4 environments, where each environment is a stage in the dev pipeline.
Currently, I'm able to trigger the QA builds for each environment using 4 separate QA pipelines using syntax similar to this in each pipeline:
resources:
  pipelines:
  - pipeline: Dev_Env_1
    source: Dev
    trigger:
      stages:
      - Env_1
My goal is to only have one QA pipeline that is triggered multiple times by the dev pipeline when it completes each stage. It feels like syntax in the yml file like this should work:
resources:
  pipelines:
  - pipeline: Dev_Env_1
    source: Dev
    trigger:
      stages:
      - Env_1
  - pipeline: Dev_Env_2
    source: Dev
    trigger:
      stages:
      - Env_2
However, this only triggers after Env_1 completes, when I would like a build triggered on completion of each of the Env_1 and Env_2 stages in the Dev pipeline.
Is there a way to do this without drastically changing the way either pipeline currently works?
The steps below can help you achieve your requirement. For example:
Dev Pipeline
# Dev Pipeline
trigger:
- none

pool:
  vmImage: 'windows-latest'

stages:
- stage: Env_1
  displayName: Env_1
  jobs:
  - job:
    steps:
    - task: CmdLine@2
      inputs:
        script: |
          echo Stage1
- stage: Env_2
  displayName: Env_2
  jobs:
  - job:
    steps:
    - task: CmdLine@2
      inputs:
        script: |
          echo Stage2
- stage: Env_3
  displayName: Env_3
  jobs:
  - job:
    steps:
    - task: CmdLine@2
      inputs:
        script: |
          echo Stage3
- stage: Env_4
  displayName: Env_4
  jobs:
  - job:
    steps:
    - task: CmdLine@2
      inputs:
        script: |
          echo Stage4
1. Create an incoming webhook service connection, and a webhook that is triggered by the successful completion of each stage of the Dev pipeline.
Incoming webhook URI: https://dev.azure.com/<ADO Organization>/_apis/public/distributedtask/webhooks/<WebHook Name>?api-version=6.0-preview
Official document:
https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/pipelines/sprint-172-update#generic-webhook-based-triggers-for-yaml-pipelines
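For reference, the incoming webhook fires on a plain HTTP POST to that URI, so you can test it, or fire it directly from a Dev stage, with a step like this sketch; the webhook name bowmantest matches the QA pipeline below, and the JSON body is free-form:

# Sketch: append to the end of a stage's steps in the Dev pipeline to fire the webhook.
- task: CmdLine@2
  inputs:
    script: >
      curl -X POST
      -H "Content-Type: application/json"
      -d "{\"completedStage\": \"Env_1\"}"
      "https://dev.azure.com/<ADO Organization>/_apis/public/distributedtask/webhooks/bowmantest?api-version=6.0-preview"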
2. After that, write the QA pipeline as below:
QA Pipeline
# QA pipeline
trigger:
- none

resources:
  webhooks:
  - webhook: bowmantest
    connection: BowmanIncommingWebHook

pool:
  vmImage: 'windows-latest'

steps:
- task: CmdLine@2
  inputs:
    script: |
      echo QA pipeline
Everything works fine.
I need some help with running a selected environment from an Azure DevOps pipeline.
I use a codeway.yml file consisting of 3 environments: dev/qa/pro.
When I run the pipeline, all the environments run one after the other.
My requirement is to run only a selected environment (the other environments should not run), with a choice parameter like in Jenkins.
I also need a "proceed to qa or pro" condition input.
Could someone help with sample code for that?
Thanks in advance.
You can define variables in the pipeline and then use them to execute only the stages you want. Note that you need a starting point; here I use the Info stage:
variables:
  isDevelop: true
  isQA: false
  isProd: false

stages:
- stage: Info
  displayName: 'Information for deploy'
  jobs:
  - whatever jobs and steps you need
- stage: Dev
  displayName: 'Deploy to Dev'
  dependsOn: Info
  condition: and(succeeded(), eq(variables['isDevelop'], true))
  jobs:
  - whatever
- stage: QA
  displayName: 'Deploy to QA'
  dependsOn: Info
  condition: and(succeeded(), eq(variables['isQA'], true))
  jobs:
  - whatever
- stage: Prod
  displayName: 'Deploy to Prod'
  dependsOn: Info
  condition: and(succeeded(), eq(variables['isProd'], true))
  jobs:
  - whatever
The variables can be calculated, or you can take their values from the pipeline's variable definitions in Azure. In the code below they are calculated from the source branch; consider this an example that you will need to adapt to your project:
variables:
  isDevelop: $[eq(variables['Build.SourceBranch'], 'refs/heads/develop')]
  isQA: $[eq(variables['Build.SourceBranch'], 'refs/heads/qa')]
  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/prod')]
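Since you asked for a Jenkins-style choice parameter, runtime parameters are also worth considering: whoever queues the run picks the environment from a dropdown. A sketch along the same lines as the stages above (names are illustrative):

# Sketch: choose the environment when queueing the run.
parameters:
- name: deployTo
  displayName: Environment to deploy
  type: string
  default: dev
  values:
  - dev
  - qa
  - prod

stages:
- stage: Dev
  condition: eq('${{ parameters.deployTo }}', 'dev')
  jobs:
  - job: deploy_dev
    steps:
    - script: echo Deploying to dev
- stage: QA
  condition: eq('${{ parameters.deployTo }}', 'qa')
  jobs:
  - job: deploy_qa
    steps:
    - script: echo Deploying to qa
- stage: Prod
  condition: eq('${{ parameters.deployTo }}', 'prod')
  jobs:
  - job: deploy_prod
    steps:
    - script: echo Deploying to prod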
I have a YAML file that resembles the following:
stages:
- stage: A
  pool:
    vmImage: 'windows-2019'
  jobs:
  - job: a
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: |
          # edits file "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"
- stage: B
  dependsOn: A
  pool:
    vmImage: 'windows-2019'
  jobs:
  - job: b
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: |
          # uses file "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"
I have split my pipeline into two stages: A edits a file in a repository, and B works with the edited file.
My problem is that the files seem to get reset between stages. Is there any way of keeping the changes throughout the stages, rather than resetting them?
I don't want to publish artifacts and so on, because in stage B (although not shown in the YAML above) I am running multiple PowerShell script files that contain hardcoded file paths, and it would be a mess rewriting those paths to point at the artifacts directory before running the stage.
Based on my test, the cause of this issue is that the two stages run on different agent machines.
For example: Stage A -> agent machine 'fv-az146', Stage B -> agent machine 'fv-az151'.
You can check the agent information in the build log under Initialize job.
"Is there any way of keeping the changes throughout the stages, rather than resetting them?"
Since you don't want to publish artifacts, you could try using self-hosted agents to run the two stages.
You need to add demands to ensure that both stages run on the same self-hosted agent.
According to this doc:
The demands keyword is supported by private pools.
We cannot specify particular "Agent Capabilities" on Microsoft-hosted agents, so we cannot ensure that two stages run on the same agent.
Update:
Even when the two stages run on the same agent, the checkout step in Stage B could overwrite the files edited in Stage A.
So you also need to add - checkout: none to Stage B.
Here is the updated YAML template:
stages:
- stage: A
  pool:
    name: Pool name
    demands:
    - Agent.Name -equals agentname1
  jobs:
  - job: a
    steps:
    - task: PowerShell@2
      ...
- stage: B
  dependsOn: A
  pool:
    name: Pool name
    demands:
    - Agent.Name -equals agentname1
  jobs:
  - job: b
    steps:
    - checkout: none
    - task: PowerShell@2
      ...
The overall workflow: Stage A edits the files and saves them to $(System.DefaultWorkingDirectory).
Stage B can then use the files in $(System.DefaultWorkingDirectory) directly.
The files in $(System.DefaultWorkingDirectory) keep the changes made in Stage A throughout both stages.
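For completeness, the elided PowerShell tasks could look something like this; a sketch that uses the file path from the question, with an illustrative JSON edit:

# Stage A step (sketch): edit the file in place in $(System.DefaultWorkingDirectory)
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $path = "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"
      $json = Get-Content $path -Raw | ConvertFrom-Json
      $json | Add-Member -NotePropertyName editedBy -NotePropertyValue 'StageA' -Force
      $json | ConvertTo-Json -Depth 10 | Set-Content $path

# Stage B step (sketch): the edited file is still there because the same agent
# ran Stage A and the checkout was skipped.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Get-Content "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"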