I've enabled GitLab container scanning by importing the template Security/Container-Scanning.gitlab-ci.yml and adding a container_scanning block:
container_scanning:
  stage: compliance
  variables:
    DOCKER_IMAGE: $MY_REPO/$CI_PROJECT_NAME:$CI_COMMIT_SHA
    DOCKER_HOST: "tcp://localhost:2375"
However, I would like the container_scanning job to execute only for the develop branch, but the template itself defines a rules: block, which prevents me from defining an only: block.
Does anyone know how I can extend/override the rules: block of the container_scanning job so that it executes only when a commit is pushed to the develop branch?
Since the template uses rules:, you will have to use rules: to change when the job is included in the pipeline.
container_scanning:
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
  # ...
When you introduce your own rules: key, it overrides the existing rules: array entirely.
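If you still want to honor the template's opt-out switch while restricting the branch, you can restate it in your own rules. A minimal sketch, assuming the template's usual $CONTAINER_SCANNING_DISABLED convention:

container_scanning:
  rules:
    # Keep the template's disable variable working (assumption: the template checks this variable)
    - if: $CONTAINER_SCANNING_DISABLED
      when: never
    # Then restrict the job to the develop branch
    - if: $CI_COMMIT_BRANCH == "develop"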
I'm aware that it is possible to trigger a pipeline in another project by adding the commands below to a gitlab-ci file:
bridge:
  stage: stage_name_here
  trigger:
    project: path_to_another_project
    branch: branch_name
    strategy: depend
The problem is that the config above triggers all jobs, and I want to trigger only two jobs within the downstream pipeline.
Any idea on how to trigger only those two specific jobs?
You can use the rules, only, and except keywords in the triggered project's CI file. For example, add this to the jobs that you want to trigger:
except:
  variables:
    - $CI_PROJECT_ID != {{ your main project id }}
And this to the jobs you don't want to trigger:
except:
  variables:
    - $CI_PROJECT_ID == {{ your main project id }}
Or, if you want to use rules instead, this is the equivalent for the jobs you do not want to run in the main project:
rules:
  - if: $CI_PROJECT_ID == {{ your main project id }}
    when: never
  - when: always
Instead of defining a variable that the upstream pipeline has to pass down when it triggers the downstream pipeline, I simply added the lines below to the jobs that I don't want to run in the downstream pipeline when it is triggered by another project:
except:
  refs:
    - pipelines
Source: https://docs.gitlab.com/ee/ci/triggers/index.html#configure-cicd-jobs-to-run-in-triggered-pipelines
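Since only/except are deprecated in favor of rules in newer GitLab versions, the same effect can be expressed with the predefined $CI_PIPELINE_SOURCE variable; a sketch (the job name is hypothetical):

some-job:
  script:
    - echo "runs everywhere except in triggered multi-project pipelines"
  rules:
    # Skip this job when the pipeline was triggered by another pipeline
    - if: $CI_PIPELINE_SOURCE == "pipeline"
      when: never
    - when: always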
I have a GitLab project whose pipeline triggers a downstream pipeline in another project (GitLab multi-project pipelines):
image: docker

trigger-docs:
  trigger:
    project: my-group/docs
    branch: feat/my-feature-branch
Is there a way for the triggered pipeline in my-group/docs to find out where it was triggered from? I checked the predefined CI variables but none seems to carry this information.
Could it be that my only option is to pass a dedicated variable from the upstream project as documented at https://docs.gitlab.com/ee/ci/pipelines/multi_project_pipelines.html#pass-cicd-variables-to-a-downstream-pipeline-by-using-the-variables-keyword?
Here's the workaround we have been using for months now: send along a custom UPSTREAM_PROJECT variable.
# Trigger a downstream build https://docs.gitlab.com/ee/ci/pipelines/multi_project_pipelines.html
docs-build:
  stage: .post
  variables:
    UPSTREAM_PROJECT: $CI_PROJECT_PATH
  # Variable expansion for 'trigger' or 'trigger:project' does not seem to be supported. If we wanted this
  # we would have to work around it like so: https://gitlab.com/gitlab-org/gitlab/-/issues/10126#note_380343695
  trigger: my-group/docs
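Variables set under variables: on the trigger job are passed to the downstream pipeline, so jobs in my-group/docs can read UPSTREAM_PROJECT like any other CI variable. A minimal sketch of the downstream side (the job name is hypothetical):

report-upstream:
  script:
    - echo "Triggered from $UPSTREAM_PROJECT"
  rules:
    # Only run when an upstream pipeline identified itself (assumes direct runs leave it unset)
    - if: $UPSTREAM_PROJECT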
I have the following content in my .gitlab-ci.yml:
stages:
  - stage1
  - stage2

job1:
  stage: stage1
  script:
    - echo "Running default stage1, pipeline_source=$CI_PIPELINE_SOURCE"

job2:
  stage: stage2
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
    - when: always
  script:
    - echo "Running STAGE2! pipeline_source=$CI_PIPELINE_SOURCE"
When I commit this change to a merge-request branch, two pipelines are started.
Is this a known issue in GitLab, or am I misunderstanding something?
GitLab creates pipelines both for your branch and for the merge request. This is an "expected"[1] feature of GitLab and a consequence of using rules:. (Oddly enough, when using only/except, merge request pipelines only happen when using only: - merge_requests.)
If you simply want to disable the 'pipelines for merge requests' and only run branch pipelines, you can include the default branch pipelines template, which provides a workflow: that prevents pipelines for merge requests.
include:
  - template: 'Workflows/Branch-Pipelines.gitlab-ci.yml'
Additionally, you can see this answer for a workflow that will prevent duplicates between the pipelines for merge requests and branch pipelines only when a merge request is open.
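For reference, that workflow generally follows GitLab's documented pattern for switching between branch pipelines and merge request pipelines; a sketch:

workflow:
  rules:
    # Always run merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Skip branch pipelines while the branch has an open merge request
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Otherwise run branch pipelines
    - if: $CI_COMMIT_BRANCH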
[1]: I've always found this to be a quirk of GitLab and, as an administrator of GitLab for hundreds of users, I've gotten this question many, many times. So, you're not alone in being surprised by this 'expected feature'.
You didn't do anything wrong. This is actually intended, though it's a weird side effect of the fact that merge requests have their own pipeline contexts. So when you commit to a branch that's associated with a merge request, two pipelines start:
1. A branch-based pipeline, with no context of the merge request
2. A merge request pipeline, with all the merge request variables populated (this is called a "detached" pipeline)
You can control this behavior by using the workflow keyword in your pipeline. We use the following workflow settings in our repositories:
workflow:
  rules:
    - if: $CI_MERGE_REQUEST_IID
    - if: $CI_COMMIT_TAG
    - if: $CI_PIPELINE_SOURCE == "schedule"
    - if: $CI_COMMIT_REF_PROTECTED == "true"
The above rules prevent branch pipelines from running unless the branch is a protected branch (i.e., you're merging into the main branch), the commit is tagged (i.e., you're releasing code), or the pipeline was scheduled. This means that when you commit to an MR, the branch-based pipeline (#1 above) doesn't run, and you are left with a single running pipeline.
I have a YAML-based Azure DevOps pipeline that currently has a service connection and subscription hard-coded.
I now want to deploy to either dev or live, which are different subscriptions.
I also want to control who can execute these pipelines. This means I need two pipelines so I can manage their security independently.
I don't want the subscription and service connection to be parameters of the pipeline that the user must remember to enter correctly.
My current solution:
I'm using YAML templates which contain most of the configuration.
I have a top-level yaml file for each environment (dev.yml and live.yml).
These pass environment-specific values, such as the subscription, to the template.
I have two pipelines: the dev pipeline maps to dev.yml and the live pipeline maps to live.yml.
This approach means that for every combination of config I might have in the future (subscription, service connection, etc.) I need a new top-level yml file.
This feels messy. Is there a better solution? What am I missing?
You could try adding the different subscriptions to different variable groups, then referencing the variable group in a template:
Variable Template:
# variablesForDev.yml
variables:
  - group: variable-group-Dev

# variablesForLive.yml
variables:
  - group: variable-group-Live
dev.yml:
stages:
  - stage: Dev
    variables:
      - template: variablesForDev.yml
    jobs:
      - job: Dev
        steps:
          - script: echo $(TestVarInDevGroup)
live.yml:
stages:
  - stage: live
    variables:
      - template: variablesForLive.yml
    jobs:
      - job: live
        steps:
          - script: echo $(TestVarInLiveGroup)
You could check the document Add & use variable groups for more details.
Update:
This approach means that for every combination of config I might have in the future (subscription, service connection, etc.) I need a new top-level yml file.
Since you do not want to create a new top-level yml file for every combination of config you might have in the future, you could put those values (subscription, service connection, etc.) into a variable group instead of a new top-level yml file:
dev.yml:
variables:
  - group: SubscriptionsForDev

stages:
  - stage: Dev
    jobs:
      - job: Dev
        steps:
          - script: echo $(TestVarInDevGroup)
In this case, we do not need to create a new top-level yml file when we add a new pipeline; we just need to add a new variable group.
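The live side would mirror this; a sketch, assuming a SubscriptionsForLive variable group exists:

# live.yml
variables:
  - group: SubscriptionsForLive

stages:
  - stage: live
    jobs:
      - job: live
        steps:
          - script: echo $(TestVarInLiveGroup)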
Besides, we can also set security for each variable group (variable groups live under Pipelines > Library, where each group has its own Security settings).
Update 2:
I wanted to know if it was possible to avoid the 2 top level files.
If you want to avoid the two top-level files, then the question comes back to my original answer: we need a single pipeline that contains both of these yml files, and we just need to add a condition to each stage:
stages:
  - stage: Dev
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/dev'))
    variables:
      - group: variable-group-Dev
    jobs:
      - job: Dev
        steps:
          - script: echo $(TestVarInDevGroup)
  - stage: live
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/live'))
    variables:
      - group: variable-group-Live
    jobs:
      - job: live
        steps:
          - script: echo $(TestVarInLiveGroup)
But if your two yaml files share nothing (such as a branch name) that can be used as a condition, then you have to keep them separate.
Hope this helps.
I have triggers set up in our azure-pipelines.yml as shown below:
scriptsconn represents a connection to the default/self repo that contains the deployment pipeline yaml.
serviceconn represents a microservice repo we are building and deploying using the template and publish tasks.
We have multiple microservices with similar build pipelines, so this approach is an attempt to lessen the amount of work needed to update these steps.
Right now the issue we're running into is twofold:
1. No matter what branch we specify in the scriptsconn resources -> repositories section, the build triggers for every commit to every branch in the repo.
2. No matter how we configure the trigger for serviceconn, we cannot get the build to trigger for any commit, PR created, or PR merged.
According to the link below, this configuration should be pretty straightforward. Can someone point out what mistake we're making?
https://github.com/microsoft/azure-pipelines-yaml/blob/master/design/pipeline-triggers.md#repositories
resources:
  repositories:
    - repository: scriptsconn
      type: bitbucket
      endpoint: BitbucketAzurePipelines
      name: $(scripts.name)
      ref: $(scripts.branch)
      trigger:
        - develop
    - repository: serviceconn
      type: bitbucket
      endpoint: BitbucketAzurePipelines
      name: $(service.name)
      ref: $(service.branch)
      trigger:
        - develop
      pr:
        branches:
          - develop

variables:
  - name: service.path
    value: $(Agent.BuildDirectory)/s/$(service.name)
  - name: scripts.path
    value: $(Agent.BuildDirectory)/s/$(scripts.name)
  - name: scripts.output
    value: $(scripts.path)/$(release.folder)/$(release.filename)
  - group: DeploymentScriptVariables.Dev

stages:
  - stage: Build
    displayName: Build and push an image
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: 'Self Hosted 1804'
        steps:
          - checkout: scriptsconn
          - checkout: serviceconn
The document you linked to is actually a design document, so it's possible/likely that not everything on that page is implemented. In the design document, I also see this line:
However, triggers are not enabled on repository resource today. So, we will keep the current behavior and in the next version of YAML we will enable the triggers by default.
The current docs on the YAML schema seem to indicate that triggers are not supported on Repository Resources yet.
Just as an FYI, you can see the currently supported YAML schema at this URL:
https://dev.azure.com/{organization}/_apis/distributedtask/yamlschema?api-version=5.1
I am not 100% sure what you are after template-wise. As a general suggestion, if you are going with the reusable-template workflow, you could trigger from an azure-pipelines.yml file in each of your microservice repos, consuming the reusable steps from your template. Hope that helps!
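A minimal sketch of what that per-repo file could look like, assuming a hypothetical my-team/pipeline-scripts repo that holds a build-steps.yml step template:

# azure-pipelines.yml in a microservice repo (names are hypothetical)
trigger:
  - develop

resources:
  repositories:
    - repository: templates
      type: bitbucket
      endpoint: BitbucketAzurePipelines
      name: my-team/pipeline-scripts

steps:
  # Reuse the shared build/publish steps from the template repo
  - template: build-steps.yml@templates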