Exclude project from GitLab CI/CD pipeline

I have two sub projects in my repo.
The first one is .NET 5, the second is a SharePoint solution. Both projects are located in one branch.
How do I configure the pipeline for only one project?
I need to run the SAST test only for the .NET 5 project. Right now both projects are being tested.
In my .gitlab-ci.yml:
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test

sast:
  stage: test
  tags:
    - docker

First off, putting multiple projects into a single repo is a bad idea.
Second, the GitLab SAST template is meant to be "simple to use", but it is quite complex under the hood and utilizes a number of different SAST tools.
You do have some configuration options available via variables, but those are mostly specific to the language/files you are scanning. For some languages there are variables that can limit which directories/paths are scanned.
Since you haven't provided enough information to make a specific recommendation, you can look up the variables you need for your project(s) in the official docs here: https://docs.gitlab.com/ee/user/application_security/sast/#available-cicd-variables
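For example, if the SharePoint solution lives in its own sub-directory, excluding it from scanning via the documented SAST_EXCLUDED_PATHS variable might look like this (the directory name sharepoint-solution is an assumption for your actual layout; the first four entries are the variable's documented defaults):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  # keep the default exclusions and add the SharePoint sub-project
  # ("sharepoint-solution" is a placeholder for your actual directory)
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp, sharepoint-solution"
```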

If each project can have specific branch names, you can use the
only/except or rules keywords to determine when jobs should run.
sast:
  stage: test
  tags:
    - docker
  only:
    - /^sast-.*$/
This applies to specific jobs; I'm not sure about the whole pipeline.
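For reference, a roughly equivalent job using the newer rules keyword (which GitLab recommends over only/except) might look like this; the branch pattern is taken from the example above:

```yaml
sast:
  stage: test
  tags:
    - docker
  rules:
    # run only on branches whose name starts with "sast-"
    - if: $CI_COMMIT_BRANCH =~ /^sast-.*$/
```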

Related

Gitlab: best approach for creating a manual production deployment pipeline

I have a pipeline which builds and deploys my application to staging environment.
I want to create a job which can deploy my application to production environment, but it should be run manually.
In theory I see 2 options:
Create a separate .deploy-to-prod.yml pipeline with a when: manual condition and run it via the "play" button. As far as I understand this is impossible, because I cannot run an arbitrary pipeline in GitLab; it always runs the default one. Please correct me if I am wrong.
Hence only the 2nd option is available to me: I need to create an additional trigger job in my default .gitlab-ci.yml and add conditions: if execution is manual and some variable is set or environment = production, then run the deploy to prod; otherwise the standard job should be executed.
An example of 2nd approach can look like:
manual-deploy-to-prod:
  stage: deploy
  trigger:
    include:
      - '.deploy-to-prod.yml'
    strategy: depend
  rules:
    - if: $MANUAL_DEPLOY_VERSION != null
      when: manual
...while in the standard pipeline triggers I should add the following lines to avoid execution along with the production deployment:
rules:
  - if: $MANUAL_DEPLOY_VERSION == null
Is this a good approach?
Is it correct that only 2nd option is available for me?
What is the best practice for creating a manual production deployment pipeline?
"Best" is a very subjective term, so it's difficult to tell you which one is best for your use case. Instead, let me lay out a couple of options for how you could achieve what you're attempting to do:
You could update your deploy process to use deploy.yml, then use the trigger keyword in your CI file to trigger that job for different environments. You can then use the rules keyword to control when and how different jobs are triggered. This has the benefit of re-using your deployment process that you're using for your staging environment, which is nice and DRY and ensures that your deployment is repeatable across environments. This would look like this:
deploy-to-staging:
  stage: deploy
  trigger:
    include: deploy.yml
    strategy: depend
  when: on_success

deploy-to-production:
  stage: deploy
  trigger:
    include: deploy.yml
    strategy: depend
  when: manual
You could use the rules keyword to include your deploy-to-production job only when the job is kicked off manually from the UI. The rest of your pipeline would still execute (unless you explicitly tell it not to), but your deploy-to-prod job would only show up if you manually kicked the pipeline off. This would look like this:
deploy-to-prod:
  stage: deploy
  script:
    - echo "I'm deploying!"
  rules:
    - if: $CI_PIPELINE_SOURCE == "web"
      when: on_success
    - when: never
You could use a separate project for your deployment pipeline. This pipeline can retrieve artifacts from your other project, but would only run its CI when you manually click on run for that project. This gives you really nice separation of concerns because you can give a separate set of permissions to that project as opposed to the code project, and it can help keep your pipeline clean if it's really complicated.
All approaches have pros and cons, simply pick the one that works best for you!
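A sketch of the separate-project option could look like the following. The project ID, branch, and job name are placeholders, and it assumes the code project allows this deployment project's CI_JOB_TOKEN to fetch its artifacts:

```yaml
# .gitlab-ci.yml in the separate deployment project (IDs/names are placeholders)
deploy-to-prod:
  stage: deploy
  rules:
    # only run when the pipeline is started manually from the web UI
    - if: $CI_PIPELINE_SOURCE == "web"
  script:
    # download the latest build artifacts from the code project via the jobs API
    - 'curl --location --header "JOB-TOKEN: $CI_JOB_TOKEN" "$CI_API_V4_URL/projects/1234/jobs/artifacts/main/download?job=build" --output artifacts.zip'
    - unzip artifacts.zip
    - ./deploy.sh
```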

Include gitlab runner tag lists in separate yml file

We are using GitLab CI/CD pipelines extensively for DevOps, configuration management, etc. There is a long list of GitLab runners which is currently defined in the gitlab-ci.yml of every project. By using "tags" and "only" we then define when to run jobs on different runners. The question is how to put the runner lists into separate files and then include them in the required places. This may now be possible, as GitLab has evolved fast during the last years. First we started with the extends keyword:
gitlab-ci.yml
.devphase:
  stage: build
  when: manual
  only:
    - dev
  ...

Updating development environment (runner1):
  extends: .devphase
  tags:
    - runner1

Updating development environment (runner2):
  extends: .devphase
  tags:
    - runner2
...
This made the gitlab-ci.yml easier to read, as we could list the runners separately at the end of the configuration file. However, defining all these parameters for every runner is not very efficient or elegant. Somewhat later GitLab introduced the keywords "parallel" and "matrix". Now we can do this:
gitlab-ci.yml
.devphase:
  stage: build
  only:
    - dev
  tags:
    - ${DEV_RUNNER}
  ...

Updating development environment:
  extends: .devphase
  parallel:
    matrix:
      - DEV_RUNNER: runner1
      - DEV_RUNNER: runner2
...
Only one extends section and then a list of runners, which is pretty nice already. The natural follow-up question is how one could put this list into a separate configuration file so that it could be easily copied and maintained without touching the main gitlab-ci.yml. I'm not sure how and where to use the include keyword, or whether it is even possible. For example, this doesn't work (the pipeline doesn't start):
gitlab-ci.yml:
.devphase:
  stage: build
  parallel:
    matrix:
      include: gitlab-ci-dev-runners.yml
  only:
    - dev
  tags:
    - ${DEV_RUNNER}
  ...
gitlab-ci-dev-runners.yml
- DEV_RUNNER: runner1
- DEV_RUNNER: runner2
...
Is GitLab interpreting includes and variables in some incompatible order or something like that? Are there any fellow coders who have faced this same problem?
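For what it's worth, include is only valid at the top level of the configuration, not inside a job, which would explain why the pipeline above doesn't start. One possible sketch (file and key names below are assumptions) is to move the whole matrix block into the included file as a hidden job and pull it in with extends:

```yaml
# gitlab-ci-dev-runners.yml (hypothetical included file)
.dev-runner-matrix:
  parallel:
    matrix:
      - DEV_RUNNER: runner1
      - DEV_RUNNER: runner2

# gitlab-ci.yml
include:
  - local: gitlab-ci-dev-runners.yml

Updating development environment:
  extends:
    - .devphase
    - .dev-runner-matrix
```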

How to exclude Runners from handling a job in .gitlab-ci.yml file

I'm trying to figure out how to exclude runners from handling a job in GitLab.
Currently we have 4 available runners (let's call them A, B, C, D) available for building any project.
All of our apps have a build job that just compiles the project. We don't specify a runner using the tags: field in the build job, so my understanding is that GitLab will pick any runner that is available to the project to handle it.
For certain apps we don't want runners A or B to do the build job.
Does anyone know how to exclude runners in the yml file?
Here's the build job for an app in the yml file.
build:
  image: nexus.ngidev.com:5000/system/jdk8
  stage: build
  script:
    - mvn clean package
  artifacts:
    paths:
      - target/*.jar
      - target/lib
I know I can specify a runner or runners to handle a job using the tags: field.
But I only want 1 runner to handle the job, so I don't want to add:
tags:
  - C
  - D
to the yml file, as that would cause both C and D to build the project. I would still prefer to have GitLab randomly pick an available runner, except for A and B.
UPDATE for clarification:
Runners A and B had tags/aliases assigned to them when they were set up, so you can reference them explicitly in the yml files.
However, runners C and D did not have tags/aliases assigned when they were set up, so I can't reference them directly in the yml.
That's why I was wondering if there's a way to exclude runners in a yml.
I think this is not directly possible to define in .gitlab-ci.yml as of the current GitLab version 14.1 (see the tags definition in .gitlab-ci.yml).
I was having the same struggle, since I designed runners in different size with different CPU resources (XS, S, M, L, XL).
Possible workaround
Tag each runner with its own <RUNNER_NAME> tag plus not-<RUNNER> tags for all the other runners, then use these not-<RUNNER> tags in the job definition to exclude runners from the job. A job will only be taken by a runner that has all of the listed tags defined.
It is a tedious task to do by hand or to change the configuration later, so we use Ansible to automate it.
Example GitLab Runners configuration
Register with something like: gitlab-runner register --non-interactive --tag-list "<TAG_LIST>" <OTHER OPTIONS>
Runner A tag-list: A, not-B, not-C, not-D, and your original other tags (e.g. ruby, java)
Runner B tag-list: not-A, B, not-C, not-D, and your original other tags (e.g. ruby, java)
Runner C tag-list: not-A, not-B, C, not-D, no other original tags
Runner D tag-list: not-A, not-B, not-C, D, no other original tags
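The Ansible automation mentioned above could be sketched like this. The task layout and variable names are assumptions, and a real playbook would also pass the registration token and the other register options:

```yaml
# Hypothetical Ansible task: register each runner with its own name
# plus "not-<X>" tags for every other runner, e.g. A,not-B,not-C,not-D.
- name: Register GitLab runners with exclusion tags
  command: >
    gitlab-runner register --non-interactive
    --tag-list "{{ ([item] + (runners | difference([item]) | map('regex_replace', '^', 'not-') | list)) | join(',') }}"
  loop: "{{ runners }}"
  vars:
    runners: [A, B, C, D]
```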
Example .gitlab-ci.yml job configuration
build:
  image: nexus.ngidev.com:5000/system/jdk8
  tags:
    - not-A
    - not-B
  stage: build
  script:
    - mvn clean package
  artifacts:
    paths:
      - target/*.jar
      - target/lib
This job will now be handled by either runner C or D, since only those two have both the not-A and not-B tags.
Possible improvement in the future
The tags keyword is, in my opinion, missing an exclude option that would let us do this directly in .gitlab-ci.yml. I would expect it to be added in a future version of GitLab, but I have not yet seen it as an open issue in the GitLab issue tracker.

Can the build policies on a pull request in Azure Devops use the yaml-files that is updated in said PR?

We have all our pipelines checked into code, but if we deliver a PR with changes to those pipelines, the PR build policies will run with the YAML files in master, not the ones that we want to check into master. It's basically a deadlock.
Say you want to remove a validation that makes all your PRs fail, so you make a PR, but you can't merge it because the build policies fail :P
PS: I know I can remove the policies, complete the merge, and add the policies back in as a manual job, but that is not really a good solution.
Create a separate yaml pipeline with the pre-merge build steps that you then set in the PR policies. It will always run the code from the current branch that the PR is created from.
We do it like this:
(All in same repo)
build_steps.yml - yaml template with the build steps
azure-pipelines.yml - main pipeline that references build_steps.yml to build the project
pre_merge.yml - secondary pipeline that is run only by the PR policy and also references build_steps.yml, so the build is identical and there are not two places to update if something changes.
Whole yaml definition:
# pre_merge.yml
trigger: none # Pipeline should never trigger on any branches. Only invoked by the policy.

variables:
  - name: system.buildConfiguration
    value: 'Release'
  - name: system.buildPlatform
    value: 'win10-x86'
  - name: system.debug
    value: 'false'

pool:
  vmImage: 'windows-latest'

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r)

steps:
  - template: build_steps.yml
And then in the branch policies you set this pre_merge.yml pipeline as the required build validation.
All of this also applies to classic pipelines. You need to create a separate pre-merge build pipeline that could reference a task group with the steps used in the main build pipeline. In both cases you don't have to use a template or task group and can create the steps manually, but if the build changes in the future you then have two places to update.
Just throwing this out there as an alternative; however, if desired you can maintain one yaml pipeline that does both the pre-merge validation and the deployment.
You would want to create a variable that detects whether the pipeline is running against the master/main branch, similar to:
IsMainBranch: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')]
From there the variable will need to be used in a condition on subsequent jobs/stages. Personally I do not like this approach, as it limits the portability of the pipelines and interjects another variable to account for; however, it felt fair to provide an alternative option.
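A sketch of how that variable could gate a stage (the stage and job names below are made up):

```yaml
variables:
  IsMainBranch: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')]

stages:
  - stage: Deploy
    # only runs when the pipeline targets master
    condition: and(succeeded(), eq(variables.IsMainBranch, 'true'))
    jobs:
      - job: DeployJob
        steps:
          - script: echo "Deploying to production"
```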

How to use the same ADO pipeline for deploying to dev, staging and production

I'm using ADO to trigger tests and deployments to Google App Engine when I push to GitHub. Note that I use the same yaml file to run 2 stages, the first being Test and the second Deploy. The reason I am not using release pipelines is that the docs suggest release pipelines are the "classic" way of doing things; besides, having the deploy code in my yaml means I can version control it (as opposed to using ADO's UI).
I have already set up the 2 stages and they work perfectly fine. However, the deploy right now only works for my dev environment. I wish to use the same stage to deploy to my other environments (staging and production).
Here's what my .yaml looks like:
trigger:
  # list of triggers

stages:
  - stage: Test
    pool: 'ubuntu-latest'
    jobs:
      - job: 1
      - job: 2
      - job: 3
  - stage: Deploy
    jobs:
      - job: 'Deploy to dev'
I could potentially do a deploy to staging and production by copying the Deploy to dev job and creating similar jobs for staging and production, but I really want to avoid that because the deployment jobs are pretty huge and I'd hate to maintain 3 separate jobs that all do the same thing, albeit slightly differently. For example, one of the things that the dev jobs does is copy app.yaml files from <repo>/deploy/dev/module/app.yaml. For a deploy to staging, I'd need to have the same job, but use the staging directory: <repo>/deploy/staging/module/app.yaml. This is a direct violation of the DRY (don't repeat yourself) principle.
So I guess my questions are:
Am I doing the right thing by choosing to not use release-pipelines and instead, having the deploy code in my yaml?
Does my azure-pipelines.yaml format look okay?
Should I use variables, set them to either dev, staging, and production in a dependsOn job, then use these variables to replace the directories where I copy the files from? If I do end up using this though, setting the variables will be a challenge.
Deploying from the yaml is the right approach. You could also leverage the new "Environments" concept to add gates and approvals before deploying; see the documentation to learn more.
As #mm8 said, using templates here would be a clean way to manage the deployment steps and keep them consistent between environments. You will find more information about it here.
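A sketch of that template approach, where the file layout and parameter name are assumptions and the copy step mirrors the app.yaml layout from the question:

```yaml
# deploy.yml - reusable deployment template (hypothetical)
parameters:
  - name: environment
    type: string

jobs:
  - deployment: Deploy_${{ parameters.environment }}
    environment: ${{ parameters.environment }}
    strategy:
      runOnce:
        deploy:
          steps:
            # copy the environment-specific App Engine config
            - script: cp deploy/${{ parameters.environment }}/module/app.yaml .

# azure-pipelines.yml - instantiate the template once per environment
stages:
  - stage: Deploy
    jobs:
      - template: deploy.yml
        parameters:
          environment: dev
      - template: deploy.yml
        parameters:
          environment: staging
```

This keeps a single definition of the deployment steps while only the environment parameter varies, which addresses the DRY concern in the question.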
