I wrote a pipeline task with variables passed like this:
jobs:
- job: buildandpush
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: |
      echo sanity check
      echo $NOTING_SERVICE_ORIGIN
      echo $NOTING_SERVICE_ORIGIN_2
    env:
      NOTING_SERVICE_ORIGIN: dummy-string-111
      NOTING_SERVICE_ORIGIN_2: dummy-string-222
What I see printed is:
sanity check
https://some-url-we-used-in-the-past/
dummy-string-222
I never added any variables through the Azure DevOps UI. The value https://some-url-we-used-in-the-past/ no longer appears anywhere in the codebase, and I could not find anything relevant in the Azure Pipelines docs.
Is Azure Pipelines caching NOTING_SERVICE_ORIGIN somewhere?
I ended up finding that someone else had already defined the variable on the pipeline itself. The surprising part was that it took precedence over the same variable passed explicitly on the spot.
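A quick way to spot collisions like this is to dump the agent's environment before trusting any one value; UI-defined pipeline variables are injected as environment variables, so they show up too. A minimal sketch (the grep filter just matches the variable names from the example above):
steps:
- script: |
    # Print every environment variable the agent sees, sorted, then
    # filter for the names in question (|| true keeps the step green if nothing matches)
    env | sort | grep NOTING_SERVICE || true
  displayName: 'Dump matching environment variables'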
Related
Is it possible to trigger another pipeline from the pipeline completion trigger if there is a failure in the triggering pipeline? It seems there is no configuration/property available by default, as per the documentation. I just wanted to check whether there is any possible way with the pipeline completion trigger.
If the initial pipeline fails, all subsequent pipelines would logically fail to trigger. Try having your initial pipeline start with a stage that will never fail; then, even when a later stage fails, the subsequent pipelines can still be triggered off that always-successful stage.
Is it possible to trigger another pipeline from the pipeline completion trigger if there is a failure in the triggering pipeline?
There is no configuration/property available to trigger another pipeline from the pipeline completion trigger when the triggering pipeline fails.
To resolve this issue, you could try adding a PowerShell task that uses the REST API Builds - Queue:
POST https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=6.1-preview.7
You could check this thread for the detailed scripts.
And set this PowerShell task with the condition Only when a previous task has failed.
In this case, the downstream build gets queued either way: on success via the completion trigger, and on failure via the REST API call at the end of the pipeline.
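A minimal sketch of such a task, assuming the {organization}/{project} placeholders from the URL above, a made-up definition id of 123, and that the job's build service identity is allowed to queue builds with System.AccessToken:
- powershell: |
    # Queue the downstream pipeline via the Builds - Queue REST API.
    # {organization} and {project} are placeholders; 123 is a made-up definition id.
    $url = "https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=6.1-preview.7"
    $body = '{"definition": {"id": 123}}'
    Invoke-RestMethod -Uri $url -Method Post -Body $body `
      -ContentType "application/json" `
      -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
  displayName: 'Queue downstream pipeline on failure'
  condition: failed() # the YAML equivalent of "Only when a previous task has failed"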
I was able to manage my requirement through the pipeline completion trigger itself. It is possible if we define stages in the triggering pipeline. I'm posting the answer in case someone else is looking for the same approach.
Define the triggering pipeline with stages, and make sure that at least one stage succeeds every time. I already have a few stages defined, so this matches my requirement exactly.
Triggering pipeline YAML definition: (pipeline name: pipeline1)
trigger: none
pr: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: stage_1
  displayName: Stage-1
  jobs:
  - job: greeting
    displayName: Greeting
    steps:
    - script: |
        echo "Hello world!"
        exit 1

- stage: stage_2
  displayName: Stage-2
  condition: always()
  jobs:
  - job: thanking
    displayName: Thanking
    steps:
    - script: |
        echo "Thank you!"
Define the pipeline completion trigger with stage filters for the triggered pipeline.
Triggered pipeline YAML definition:
trigger: none
pr: none

resources:
  pipelines:
  - pipeline: Pipeline_1
    source: pipeline1
    trigger:
      stages:
      - stage_2

pool:
  vmImage: 'ubuntu-latest'

jobs:
- job: greeting
  steps:
  - script: |
      echo "Hello world!"
Then the triggered pipeline will run irrespective of the outcome of stage_1 in the triggering pipeline, since stage_2 is kept successful in every run.
I have a YAML-based DevOps pipeline which currently has a service connection and subscription hard-coded.
I now want to deploy to either dev or live, which are different subscriptions.
I also want to control who can execute these pipelines. This means I need 2 pipelines so I can manage their security independently.
I don't want the subscription and service connection to be parameters of the pipeline that the user must remember to enter correctly.
My current solution:
I'm using YAML templates which contain most of the configuration.
I have a top-level YAML file for each environment (dev.yml and live.yml).
These pass environment-specific values to the template, e.g. the subscription.
I have 2 pipelines. The dev pipeline maps to dev.yml and the live pipeline maps to live.yml.
This approach means that for every combination of config I might have in the future (subscription, service connection etc.) I need a new top-level yml file.
This feels messy. Is there a better solution? What am I missing?
Pipelines using same yaml but deploying to different subscriptions
You could try to add the different subscriptions to different variable groups, then reference the variable group in a template:
Variable templates:
# variablesForDev.yml
variables:
- group: variable-group-Dev
# variablesForLive.yml
variables:
- group: variable-group-Live
dev.yml:
stages:
- stage: Dev
  variables:
  - template: variablesForDev.yml
  jobs:
  - job: Dev
    steps:
    - script: echo $(TestVarInDevGroup)
live.yml:
stages:
- stage: live
  variables:
  - template: variablesForLive.yml
  jobs:
  - job: live
    steps:
    - script: echo $(TestVarInLiveGroup)
You could check this document Add & use variable groups for some more details.
Update:
This approach means that for every combination of config I might have in the future (subscription, service connection etc) I need a new toplevel yml file.
Since you do not want to create a new top-level yml file for every combination of config you might have in the future, you could create a variable group for those values (subscription, service connection etc.) instead of a new top-level yml file:
dev.yml:
variables:
- group: SubscriptionsForDev
stages:
- stage: Dev
  jobs:
  - job: Dev
    steps:
    - script: echo $(TestVarInDevGroup)
In this case, we do not need to create a new top-level yml file when we add a new pipeline; we just need to add a new variable group.
Besides, we could also set security for each variable group.
Update 2:
I wanted to know if it was possible to avoid the 2 top level files.
If you want to avoid the 2 top-level files, then the question comes back to my original answer: we need one pipeline that contains both of these stages, and we just need to add a condition to each stage:
stages:
- stage: Dev
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/dev'))
  variables:
  - group: variable-group-Dev
  jobs:
  - job: Dev
    steps:
    - script: echo $(TestVarInDevGroup)

- stage: live
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/live'))
  variables:
  - group: variable-group-Live
  jobs:
  - job: live
    steps:
    - script: echo $(TestVarInLiveGroup)
But if your two yaml files share nothing that can serve as a stage condition, then you have to keep them separate.
Hope this helps.
I'm able to include the latest version of the artifact published by another pipeline (AppCIPipeline) into my YAML pipeline using conditional insertion:
name: '$(Build.SourceBranchName)-$(date:yyyyMMdd)$(rev:.r)'

resources:
  pipelines:
  - pipeline: AppBuildToDeploy # Required when source == Specific
    source: App_Master_CI
    branch: master
    # buildToDeploy is a pipeline variable
    ${{ if ne(variables['buildToDeploy'], '') }}:
      version: $(buildToDeploy) # let's leave it blank from the pipeline
    project: NewHorizon

trigger: none

pool: 'Matrix' # Self-hosted agent on a Windows server

steps:
- download: 'AppBuildToDeploy'
  patterns: '*_BuildScripts.zip'
  displayName: 'Download Specified Artifacts'
I'm getting the following error: " A template expression is not allowed in this context"
Is there a way to get the version number from the user at run time and use the version, if supplied, else default to the current version?
For now this user experience is not supported; we have to hard-code the version.
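That is, pin the run directly in the pipeline resource; a minimal sketch (the version value is a made-up run number, the rest matches the resource from the question):
resources:
  pipelines:
  - pipeline: AppBuildToDeploy
    source: App_Master_CI
    branch: master
    version: '20210615.3' # hard-coded run (build) number of the run to deploy
    project: NewHorizon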
Someone has posted this feature request on the Developer Community before. You can vote for that open issue and follow it to track the request. If it gets enough votes, the team will consider it seriously.
Attempting to trigger an Azure pipeline when another pipeline has been completed, using YAML. There's documentation indicating that you can add a pipeline resource with:
resources: # types: pipelines | builds | repositories | containers | packages
  pipelines:
  - pipeline: string # identifier for the pipeline resource
    connection: string # service connection for pipelines from other Azure DevOps organizations
    project: string # project for the source; optional for current project
    source: string # source definition of the pipeline
    version: string # the pipeline run number to pick the artifact from; defaults to the latest run successful across all stages
    branch: string # branch to pick the artifact from, optional; defaults to the master branch
    tags: string # picks the artifacts from the pipeline run with the given tag, optional; defaults to no tags
However, I've been unable to figure out what the "source" means. For example, I have a pipeline called myproject.myprogram:
resources:
  pipelines:
  - pipeline: myproject.myprogram
    source: XXXXXXXX
Moreover, it's unclear how you'd build a trigger based on this.
I know that this can be done from the web-GUI, but it should be possible to do this from a YAML.
To trigger one pipeline from another, the official Azure docs suggest the solution below, i.e. use pipeline triggers:
resources:
  pipelines:
  - pipeline: RELEASE_PIPELINE # any arbitrary name
    source: PIPELINE_NAME # name of the pipeline shown on the Azure UI portal
    trigger:
      branches:
        include:
        - dummy_branch # name of the branch on which the pipeline needs to trigger
But what actually happens is that it triggers two runs. For example, suppose we have two pipelines A and B and we want to trigger B when A finishes. In this scenario B runs twice: once when you push a commit (in parallel with A) and a second time after A finishes.
To avoid this double-run problem, use the solution below:
trigger: none # add this: set the CI trigger to none
resources:
  pipelines:
  - pipeline: RELEASE_PIPELINE # any arbitrary name
    source: PIPELINE_NAME # name of the pipeline shown on the Azure UI portal
    trigger:
      branches:
        include:
        - dummy_branch # name of the branch on which the pipeline needs to trigger
By adding trigger: none, the second pipeline will not be triggered by the commit itself; it will only run when the first pipeline finishes its job.
Hope it helps.
Microsoft documentation says that YAML is the preferred approach. So, instead of going for the build-completion trigger option, let's understand the (admittedly confusing) YAML trigger. The following tags work for the original question, now with slightly easier documentation:
resources:
  pipelines:
  - pipeline: aUniqueNameHereForLocalReferenceCanBeAnything
    project: projectNameNOTtheGUID
    source: nameOfTheOtherPipelineNotTheDefinitionId
    trigger:
      branches:
        include:
        - master
        - AnyOtherBranch
The documentation from Microsoft is confusing and the IDs are numerous. At times they want the project GUID, at times the project name. At times they want the pipeline name, at times the pipeline definition ID. Yet they use the same field names (project and pipeline) for both, and the documentation does not make it easy to guess which one to use; the best way is trial and error.
To avoid confusion elsewhere, here is an example of another place in the pipeline where you refer to the same fields with different values. In the DownloadPipelineArtifact task, you need to use the project GUID and the pipeline definition ID, as shown below:
- task: DownloadPipelineArtifact@2
  inputs:
    source: specific # a literal constant value, not the pipeline name
    project: projectGUIDNOTtheProjectName
    pipeline: numericDefinitionIdOfPipelineNotPipelineNameOrUniqueRef
    runVersion: 'latest'
Just look at how the same fields are used in different ways, both referring to a pipeline (in my case the exact same pipeline). That can create confusion, so I note it here to help you avoid stumbling over it.
The resources are not for the build completion trigger. According to the docs, the build completion trigger is not yet supported in YAML syntax.
After you create the YAML pipeline, you can go to the classic editor (click on Settings or Variables) and create the trigger there.
Edit: In the classic editor, you now need to click on "Triggers" and configure the build completion trigger there.
Second edit: Microsoft has added this feature to YAML as well :) See here:
# this is being defined in the app-ci pipeline
resources:
  pipelines:
  - pipeline: security-lib
    source: security-lib-ci
    trigger:
      branches:
      - releases/*
      - master
In the above example, we have two pipelines - app-ci and security-lib-ci. We want the app-ci pipeline to run automatically every time a new version of the security library is built in master or a release branch.
If you're not publishing an artifact from the triggering pipeline, it won't trigger the triggered pipeline.
Also, if the defaultBranch for manual and scheduled builds in the triggered pipeline is not the same as your working branch, the triggered pipeline won't kick in at the end of the triggering pipeline execution.
I have created a minimal working example for a pipeline trigger, and I explain these two issues in more detail in this answer.
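For the first point, a minimal sketch of publishing an artifact from the triggering pipeline (the path and artifact name are placeholders):
steps:
- publish: '$(Build.ArtifactStagingDirectory)' # placeholder path to the files to publish
  artifact: drop # arbitrary artifact name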
I have written a few jobs in .gitlab-ci.yml and my question is similar to this SO question. However, the answers provided and accepted there don't work for my scenario.
The job has an after_script section which executes when the main task completes or fails.
Problem: I send an email alert based on whether the main task succeeded or failed, but I can't find any Gitlab CI variable that indicates the result of the job to clarify in the alert email.
How can I tell, inside the after_script section, whether the main task has succeeded or failed?
If I use when: on_failure, then the question is where to define my when: on_success job, since these jobs depend on the job right before them and only one of the two can execute. I've been looking through GitLab's predefined variables for this, but couldn't find anything.
Also, in my after_script I can write an if condition, but I'm checking whether someone can provide a better alternative solution.
Problem: I send an email alert based on whether the main task succeeded or failed, but I can't find any Gitlab CI variable that indicates the result of the job to clarify in the alert email.
Also, in my after_script I can write an if condition, but I'm checking whether someone can provide a better alternative solution.
Maybe you will be interested in a separate notify stage to provide the email notification jobs.
Example of a full .gitlab-ci.yml:
stages:
  - build
  - notify

build_job:
  stage: build
  script:
    - echo "Your awesome script"
  allow_failure: false # here just to clarify the workflow; it's false by default

email_alarm:
  stage: notify
  when: on_failure
  script:
    - echo "Your alarm notification to email"

email_success:
  stage: notify
  when: on_success
  script:
    - echo "Your success notification to email"
So, here it is: email_alarm will be executed only if build_job has failed, and email_success will be executed only if build_job finished successfully.
If you need to pass some artifacts from build_job to the emails, use GitLab artifacts in the build_job and the email_alarm/email_success jobs, as sketched below.
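A minimal sketch of that hand-off, assuming a made-up build.log produced by the build (artifacts: when: always uploads it even when the job fails):
build_job:
  stage: build
  script:
    - ./build.sh 2>&1 | tee build.log # ./build.sh is a placeholder for your build
  artifacts:
    paths:
      - build.log
    when: always # upload the log even if the job fails

email_alarm:
  stage: notify
  when: on_failure
  dependencies:
    - build_job # fetch build_job's artifacts into this job
  script:
    - echo "Attach build.log to the alarm email"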
Or you can set your own custom conditions, such as:
script:
  - ./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
  - if [ "$FAILED" = "true" ]; then ./do_something.sh; fi
Note, from an answer to another question: since GitLab v13.5 there is a CI_JOB_STATUS variable available in after_script.
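With that variable, the check can live directly in after_script; a minimal sketch (the echo lines stand in for real mail commands):
build_job:
  stage: build
  script:
    - echo "Your awesome script"
  after_script:
    - |
      # CI_JOB_STATUS is "success", "failed" or "canceled" here (GitLab >= 13.5)
      if [ "$CI_JOB_STATUS" = "success" ]; then
        echo "Send success email"
      else
        echo "Send failure email"
      fi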