I have a YAML-based Azure DevOps pipeline which currently has a service connection and subscription hard-coded.
I now want to deploy to either dev or live, which are different subscriptions.
I also want to control who can execute these pipelines. This means I need 2 pipelines so that I can manage their security independently.
I don't want the subscription and service connection to be parameters of the pipeline that the user must remember to enter correctly.
My current solution:
I'm using YAML templates which contain most of the configuration.
I have a top level yaml file for each environment (dev.yml and live.yml).
These pass environment-specific values to the template, e.g. the subscription.
I have 2 pipelines. The dev pipeline maps to dev.yml and the live pipeline maps to live.yml.
This approach means that for every combination of config I might have in the future (subscription, service connection etc.) I need a new top-level yml file.
This feels messy. Is there a better solution? What am I missing?
Pipelines using same yaml but deploying to different subscriptions
You could try to add the different subscriptions to different variable groups, then reference the variable group in a template:
Variable Template:
# variablesForDev.yml
variables:
- group: variable-group-Dev
# variablesForLive.yml
variables:
- group: variable-group-Live
dev.yml:
stages:
- stage: Dev
  variables:
  - template: variablesForDev.yml
  jobs:
  - job: Dev
    steps:
    - script: echo $(TestVarInDevGroup)
live.yml:
stages:
- stage: live
  variables:
  - template: variablesForLive.yml
  jobs:
  - job: live
    steps:
    - script: echo $(TestVarInLiveGroup)
You could check this document Add & use variable groups for some more details.
Update:
"This approach means that for every combination of config I might have in the future (subscription, service connection etc.) I need a new top-level yml file."
Since you do not want to create a new top-level yml file for every combination of config you might have in the future, you could create a variable group for those values (subscription, service connection etc.) instead of a new top-level yml file:
dev.yml:
variables:
- group: SubscriptionsForDev
stages:
- stage: Dev
  jobs:
  - job: Dev
    steps:
    - script: echo $(TestVarInDevGroup)
In this case, we do not need to create a new top-level yml file when we add a new pipeline; we just need to add a new variable group.
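The live pipeline then differs only in the variable group it pulls in. A minimal sketch of the matching live.yml, assuming a SubscriptionsForLive group exists:
variables:
- group: SubscriptionsForLive
stages:
- stage: Live
  jobs:
  - job: Live
    steps:
    - script: echo $(TestVarInLiveGroup)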
Besides, we could also set security for each variable group (Pipelines > Library > select the variable group > Security).
Update 2:
"I wanted to know if it was possible to avoid the 2 top-level files."
If you want to avoid the 2 top-level files, then the question comes back to my original answer: we need a single pipeline that contains the content of these two yml files, and we just need to add a condition to each stage:
stages:
- stage: Dev
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/dev'))
  variables:
  - group: variable-group-Dev
  jobs:
  - job: Dev
    steps:
    - script: echo $(TestVarInDevGroup)
- stage: live
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/live'))
  variables:
  - group: variable-group-Live
  jobs:
  - job: live
    steps:
    - script: echo $(TestVarInLiveGroup)
But if your two yaml files have nothing, such as the branch, that can be used as a condition, then you have to keep them separate.
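Note that for these branch conditions to work, the single pipeline must actually run for commits to both branches, so its trigger should include them; a minimal sketch:
trigger:
  branches:
    include:
    - dev
    - live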
Hope this helps.
Related
Is it possible to move pool specifications into a template in ADO YAML? The reason being, if the pool demand changes in the future, we don't need to go and edit all the 50+ main YAML files.
We are not able to specify only the agent pool in a YAML template. To achieve your requirement, you can define the pool name as a variable in a YAML template, then use that variable as the pool name in the main YAML.
Here is an example:
poolname.yml:
variables:
  poolname: agentpoolname
main.yml:
variables:
- template: poolname.yml   # must match the variable template's file name
pool:
  name: $(poolname)
steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'
I wrote a pipeline task with variables passed like this:
jobs:
- job: buildandpush
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: |
      echo sanity check
      echo $NOTING_SERVICE_ORIGIN
      echo $NOTING_SERVICE_ORIGIN_2
    env:
      NOTING_SERVICE_ORIGIN: dummy-string-111
      NOTING_SERVICE_ORIGIN_2: dummy-string-222
What I see printed is:
sanity check
https://some-url-we-used-in-the-past/
dummy-string-222
I never added any variables through the Azure DevOps UI. The value https://some-url-we-used-in-the-past/ is no longer anywhere in the codebase, and I could not find anything relevant in the Azure Pipelines docs.
Is Azure Pipelines caching NOTING_SERVICE_ORIGIN somewhere?
I ended up finding that someone else had already defined the variable for the pipeline (as a pipeline variable in the UI). What was surprising is that it took precedence over the same variable passed explicitly in the step's env mapping.
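A quick way to debug this kind of surprise is to dump the environment the agent actually hands to the script step; pipeline and UI-defined variables show up there as (uppercased) environment variables. A minimal sketch:
steps:
- script: env | sort   # lists every variable the agent injects, including UI-defined ones
  displayName: Inspect environment variables seen by the script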
I'm aware that it is possible to trigger a pipeline in another project by adding the commands below in a gitlab-ci file:
bridge:
  stage: stage_name_here
  trigger:
    project: path_to_another_project
    branch: branch_name
    strategy: depend
The problem is that the config above will trigger all jobs, and I want to trigger only 2 jobs within the pipeline.
Any idea on how to trigger only those 2 specific jobs?
You can play with the rules, only, and except keywords in the triggered CI file. For example, add this to the jobs that you want to trigger:
except:
  variables:
    - $CI_PROJECT_ID != {{ your main project id }}
And this to the jobs you don't want to trigger:
except:
  variables:
    - $CI_PROJECT_ID == {{ your main project id }}
Or, if you want to use rules, add this to the jobs you want to run in the main project:
rules:
  - if: $CI_PROJECT_ID == {{ your main project id }}
    when: never
  - when: always
Instead of defining a variable that needs to be passed by the upstream pipeline that triggers the downstream pipeline, I simply added the lines below to the jobs that I don't want to run in the downstream pipeline when it is triggered by another job:
except:
  refs:
    - pipelines
source:
https://docs.gitlab.com/ee/ci/triggers/index.html#configure-cicd-jobs-to-run-in-triggered-pipelines
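Put together, a job in the downstream project's .gitlab-ci.yml could look like the sketch below; the job and stage names are placeholders:
local_only_job:
  stage: test
  script: echo "runs in this project's own pipelines"
  # skipped when this pipeline is triggered by another pipeline
  except:
    refs:
      - pipelines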
I am currently using Nukeeper in my Azure DevOps pipeline to automatically update my packages. It works fine and automatically creates a Pull Request when the pipeline is run. However, the Pull Requests do not have any required/optional reviewers assigned. I would like to automatically assign Optional Reviewers with Specific names to the PR.
I have looked into the Nukeeper configurations at https://nukeeper.com/basics/configuration/ but could not find any options to achieve the above.
Below is my YAML content:
trigger: none
schedules:
- cron: "0 3 * * 0"
  displayName: Weekly Sunday update
  branches:
    include:
    - master
  always: true
pool: CICDBuildPool-VS2019
steps:
- task: NuKeeper@0
  displayName: NuKeeper Updates
  inputs:
    arguments: --change Minor --branchnameprefix "NewUpdates/" --consolidate
Does anyone know if it is feasible to automatically assign specific optional reviewers via the Nukeeper pipeline?
This can be done through the branch policy "Automatically included reviewers" option (Repos > Branches > select the branch > Branch policies).
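If you prefer to script it, the same policy can be created with the Azure DevOps CLI. This is only a sketch, assuming the azure-devops CLI extension is installed; the repository ID and reviewer address are placeholders, and the exact flags should be verified with az repos policy required-reviewer create --help:
# Sketch: add reviewer@example.com as an OPTIONAL reviewer (blocking=false) on master;
# <repository-guid> is a placeholder for the real repository ID.
az repos policy required-reviewer create \
    --blocking false \
    --enabled true \
    --branch master \
    --repository-id <repository-guid> \
    --required-reviewer-ids reviewer@example.com \
    --message "Reviewers added automatically by branch policy"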
I have triggers set up in our azure-pipelines.yml as shown below:
The scriptsconn resource represents a connection to the default/self repo that contains the deployment pipeline yaml.
The serviceconn resource represents a microservice repo we are building and deploying using the template and publish tasks.
We have multiple microservices with similar build pipelines, so this approach is an attempt to lessen the amount of work needed to update these steps.
Right now the issue we're running into is twofold:
no matter what branch we specify in the scriptsconn resources -> repositories section, the build triggers for every commit to every branch in the repo.
no matter how we configure the trigger for serviceconn, we cannot get the build to trigger for any commit, PR created, or PR merged.
According to the link below, this configuration should be pretty straightforward. Can someone point out what mistake we're making?
https://github.com/microsoft/azure-pipelines-yaml/blob/master/design/pipeline-triggers.md#repositories
resources:
  repositories:
  - repository: scriptsconn
    type: bitbucket
    endpoint: BitbucketAzurePipelines
    name: $(scripts.name)
    ref: $(scripts.branch)
    trigger:
    - develop
  - repository: serviceconn
    type: bitbucket
    endpoint: BitbucketAzurePipelines
    name: $(service.name)
    ref: $(service.branch)
    trigger:
    - develop
    pr:
      branches:
      - develop
variables:
- name: service.path
  value: $(Agent.BuildDirectory)/s/$(service.name)
- name: scripts.path
  value: $(Agent.BuildDirectory)/s/$(scripts.name)
- name: scripts.output
  value: $(scripts.path)/$(release.folder)/$(release.filename)
- group: DeploymentScriptVariables.Dev
stages:
- stage: Build
  displayName: Build and push an image
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: 'Self Hosted 1804'
    steps:
    - checkout: scriptsconn
    - checkout: serviceconn
The document you linked to is actually a design document, so it's possible, even likely, that not everything on that page is implemented. In the design document I also see this line:
"However, triggers are not enabled on repository resource today. So, we will keep the current behavior and in the next version of YAML we will enable the triggers by default."
The current docs on the YAML schema seem to indicate that triggers are not supported on Repository Resources yet.
Just as an FYI, you can see the currently supported YAML schema at this URL:
https://dev.azure.com/{organization}/_apis/distributedtask/yamlschema?api-version=5.1
I am not 100% sure what you are after template-wise. As a general suggestion, if you are going with the reusable template workflow, you could trigger from an azure-pipelines.yml file in each of your microservice repos, consuming the reusable steps from your template, as sketched below. Hope that helps!
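A minimal sketch of what that could look like in one microservice repo; the shared repo path and the template file name (build-and-push.yml) are assumptions:
# azure-pipelines.yml in a microservice repo
trigger:
- develop
resources:
  repositories:
  - repository: templates
    type: bitbucket
    endpoint: BitbucketAzurePipelines
    name: myteam/deployment-scripts   # placeholder: path to the shared scripts repo
    ref: develop
stages:
# the reusable stages live in the shared repo and are consumed by reference
- template: build-and-push.yml@templates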