Define no-source files for GitLab CI

Is it possible to specify non-source files that should not trigger the GitLab CI pipeline?
When I make changes to README.md, the pipeline triggers, even though that file is only documentation inside GitLab and is not packaged into any output artifact.

You can control when each of your jobs is added to a running pipeline using the only, except, or rules keywords. The easiest way to prevent jobs from running when only the README is changed is with except:
build_job:
  stage: build
  except:
    changes:
      - README.md
With this job syntax, if the only file changed in a push is README.md, this job will not run. Unfortunately, you can only set these rules at the job level, not the pipeline level, so you'd have to put this in each of your jobs to prevent them all from running.
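If you have more than a couple of jobs, one way to avoid copy-pasting the block (a minimal sketch; the hidden job name .skip-docs-only and the script lines are placeholders, not from the question) is to define a hidden job and reuse it with extends:

.skip-docs-only:
  except:
    changes:
      - README.md

build_job:
  extends: .skip-docs-only
  stage: build
  script:
    - make build   # placeholder for your real build steps

test_job:
  extends: .skip-docs-only
  stage: test
  script:
    - make test    # placeholder for your real test steps

extends merges the hidden job's except block into each job, so the rule only has to be maintained in one place.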

Related

How can I prevent Gitlab CI multiple yml includes from overriding a stage's jobs?

In my Gitlab project, I'm including multiple .yml files. One of them is remote and the other is a template provided by Gitlab for Code Quality.
The .yml configuration is written like so:
include:
  - template: Code-Quality.gitlab-ci.yml
  - remote: 'https://raw.githubusercontent.com/checkmarx-ltd/cx-flow/develop/templates/gitlab/v3/Checkmarx.gitlab-ci.yml'
Both of these templates are accessible. The first is located here, and the second Checkmarx one is here.
Both of these .yml configs define jobs that run in the test pipeline stage.
I'm having an issue where only the second include's jobs are running in the test stage, and the Gitlab Code Quality job is completely ignored. If I remove the external Checkmarx include, the Code Quality job runs just fine.
Normally I would just define separate stages, but since these .yml files do not belong to me, I cannot change the stage in which they run.
Is there a way to ensure the jobs all run in the test stage? If not, is there a way to override the stage a job from an external .yml runs in?
Oddly, there seems to be some sort of rules conflict between the two templates, possibly due to the variables that the checkmarx template sets. Even though the CI Lint shows that all 4 jobs should run successfully, I can reproduce your issue with the above code.
Given that it's likely a rules issue, I overrode the rules for running the code_quality job and was able to get both running within the same pipeline:
include:
  - template: Code-Quality.gitlab-ci.yml
  - remote: 'https://raw.githubusercontent.com/checkmarx-ltd/cx-flow/develop/templates/gitlab/v3/Checkmarx.gitlab-ci.yml'

code_quality:
  rules:
    - when: on_success
You can lint the above changes to confirm they're successful (though GitLab will warn you that without any workflow:rules, you'll wind up with duplicate pipelines inside MRs, which is true).
You can also see the pipeline running with both jobs here, though the Checkmarx job fails because I don't have a subscription to test it with.
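If you want to address the duplicate-pipeline warning, one commonly used pattern (a sketch only, not part of either template; adjust it to your branching model) is a top-level workflow block that prefers merge request pipelines over branch pipelines:

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    - if: $CI_COMMIT_BRANCH

With this in place, a push to a branch that already has an open merge request only creates the merge request pipeline instead of both.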

Azure DevOps how to skip PublishBuildArtifacts step for PR build runs

I am using Azure DevOps and I have a single build pipeline with a number of steps including PublishBuildArtifacts defined in the azure-pipelines.yml file.
I have pointed the same pipeline at the Build Validation check (Validate code by pre-merging and building pull request changes) in the master branch's build policies. However, for these PR build runs, I don't want to run certain tasks like PublishBuildArtifacts.
How can I achieve this? I can think of one way which is to create a separate pipeline for PR and also a separate azure-pipelines-pr.yml file and not adding those tasks in that file. But this feels like an approach with redundancy to me. Is there any better way to achieve this?
You can add a custom condition for the publish artifacts step:
and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
Now the step will run only when the build reason is not pull request.
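As a sketch of where that condition goes in azure-pipelines.yml (the display name, path, and artifact name are placeholders, not from the question):

- task: PublishBuildArtifacts@1
  displayName: Publish build artifacts
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'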

In Gitlab CI, can you "pull" artifacts up from triggered jobs?

I have a job in my gitlab-ci.yml that triggers an external Pipeline that generates artifacts (in this case, badges).
I want to be able to get those artifacts and add them as artifacts to the bridge job (or some other job) on my project so that I can reference them.
My triggered job looks like this:
myjob:
  stage: test
  trigger:
    project: other-group/other-repo
    strategy: wait
I'd like something like this:
myjob:
  stage: test
  trigger:
    project: other-group/other-repo
    strategy: wait
  artifacts:
    paths:
      # how do I get artifacts from the job(s) on other-repo?
      - badge.svg
GitLab has an endpoint that can be used as a badge URL to download the artifact from the latest pipeline/job for a project:
https://gitlabserver/namespace/project/-/jobs/artifacts/master/raw/badge.svg?job=myjob
Is there a way to get the artifacts from the triggered job and add them to my project?
The artifacts block is for handling archiving artifacts from the current job. In order to get artifacts from a different job, you would handle that in the script section. Once you have that artifact, you can archive it normally within the artifacts block as usual.
You can use wget to download artifacts from a different project as described here.
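As a rough sketch (the job name, stage, and token handling are assumptions, and the URL reuses the endpoint shown above), the script approach could look like:

collect_badge:
  stage: deploy            # any stage after the trigger job has finished
  script:
    # download the artifact via the jobs/artifacts endpoint; the job= parameter names
    # the job in other-repo that produced the badge, and a private downstream project
    # would also need an authentication token here
    - wget -O badge.svg "https://gitlabserver/namespace/project/-/jobs/artifacts/master/raw/badge.svg?job=myjob"
  artifacts:
    paths:
      - badge.svg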
I know it a bit late but maybe this could help.
Add this to your job. It tells the job it needs the artifacts from a specific project.
(You need to be owner of the project)
needs:
  - project: <FULL_PATH_TO_PROJECT>   # without the hosting website
    job: <JOB_NAME>
    ref: <BRANCH>
    artifacts: true
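For context, inside a full job it might look like this (the job names, branch, and file name are illustrative):

publish_badge:
  stage: test
  needs:
    - project: other-group/other-repo
      job: build-badge            # the downstream job that produced the artifact
      ref: master
      artifacts: true
  script:
    - ls badge.svg                # the downstream artifact is extracted into the workspace
  artifacts:
    paths:
      - badge.svg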

How does Gitlab's "pages" job work internally?

I have a Gitlab project like this (.gitlab-ci.yml):
# Sub-jobs listed as comments
stages:
  - check-in-tests
    # shellcheck
    # pylint
    # unit-tests
  - several-other-things
    # foo
    # bar
    # baz
  - release
    # release

# Run some shell code static tests and generate logs/badges
shellcheck:
  stage: check-in-tests
  script:
    - bash run_shellcheck.sh
  artifacts:
    paths:
      - logs/shellcheck.log
      - logs/shellcheck.svg

# Run some python code static tests and generate logs/badges
pylint:
  stage: check-in-tests
  script:
    - bash run_pylint.sh
  artifacts:
    paths:
      - logs/pylint.log
      - logs/pylint.svg

# <snip>
On my project page I'd like to render the .svg files produced during check-in-tests as badges.
The Gitlab badges tool requires a URL to an image file. It is incapable of loading images from URLs with query strings. Unfortunately, the syntax for accessing specific job artifacts ends in a query string. This effectively means that we can't link to job artifacts as badges.
The most popular workaround is to abuse Gitlab's pages feature to store artifacts as static content. From there we can get clean URLs to our artifacts that don't contain query strings.
My confusion involves the underlying mechanism behind the "pages" job defined in .gitlab-ci.yml. The official documentation here is very sparse. There are a million examples for deploying an actual webpage with various frameworks, but I'm not interested in any of them since I'm just using my project's "page" for file hosting.
The assumption seems to be that I want to deploy my page at the end of the pipeline. However, I want to upload the shellcheck and pylint artifacts near the beginning of the pipeline. Furthermore, I want those artifacts to be uploaded even if the pipeline stages fail.
Syntactically the pages job looks identical to any other job. There's nothing there to describe how it's magically picked up by Gitlab's internals. This leaves me with the following questions:
Can I change the stage from "deploy" to "check-in-tests", or is the deploy stage specifically part of the hidden magic that Gitlab looks for when parsing for a pages job?
If I'm tied to the deploy stage, can I re-arrange the stages to make it come earlier in the pipeline without breaking the magic?
Does the pages job deploy artifacts from the local machine (default behavior for a job), or are the listed paths coming from artifacts which have already been uploaded to the GitLab pipeline by earlier jobs?
If the pages job is only looking for artifacts locally how can I ensure that it runs on the same machine as the earlier jobs so that it finds the artifacts which they produced? Let's assume that the Gitlab executors all come from a pool with the same tag and aren't tagged individually.
Is there any chance of getting the pages job to run within the same Docker container that originally produced the artifacts?
The magic around GitLab pages is in the name of the job. It has to be named "pages", and nothing else. It is possible to move the job to different stages. As soon as the job "pages" has finished successfully, there's a special type of job that is called "pages:deploy". This job is shown in the deploy stage even if you change the stage that the "pages" job is run in.
If you have the pages job in an early stage, jobs in the later stages can fail but the "pages:deploy" job will still run and update GitLab pages.
Other than that, the "pages" job is just like a normal job in GitLab. If you need artifacts from other jobs, you can get these by using artifacts and dependencies:
https://docs.gitlab.com/ee/ci/yaml/#dependencies
The "pages" job should create a folder named "public" and give that folder as an artifact.
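Put together for the badge use case above, a sketch might look like this (the stage placement is one option; whether artifacts from failed jobs are available still depends on how those earlier jobs are configured):

pages:
  stage: several-other-things    # any stage after check-in-tests
  dependencies:
    - shellcheck
    - pylint
  script:
    - mkdir -p public
    - cp logs/*.svg public/      # copy the badge artifacts into the published site root
  artifacts:
    paths:
      - public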

How do I establish manual stages in Gitlab CI?

I can't seem to find any documentation of manual staging in GitLab CI in version 8.9. How do I do a manual stage such as "Deploy to Test"?
I'd like Gitlab CI to deploy a successful RPM to dev, and then once I've reviewed it, push to Test, and from there generate a release. Is this possible with Gitlab CI currently?
You can set tasks to be manual by using when: manual in the job (documentation).
So for example, if you want the deployment to happen on every push but give the option to manually tear down the infrastructure, this is how you would do it:
stages:
  - deploy
  - destroy

deploy:
  stage: deploy
  script:
    - [STEPS TO DEPLOY]

destroy:
  stage: destroy
  script:
    - [STEPS TO DESTROY]
  when: manual
With the above config, if you go to the GitLab project > Pipelines, you should see a play button next to the last commit. When you click the play button you can see the destroy option.
Update: Manual actions were introduced in GitLab 8.10. From the manual: "Manual actions are a special type of job that are not executed automatically; they need to be explicitly started by a user. Manual actions can be started from pipeline, build, environment, and deployment views. You can execute the same manual action multiple times." An example usage of manual actions is deployment to production. The rest of this answer applies to GitLab 8.9 and older only.
Historical Answer:
It does not appear as though manual deploy/release was available in Gitlab in 8.9.
One possibility is to have a protected branch which triggers a release. See info about protected branches here: http://doc.gitlab.com/ce/workflow/protected_branches.html
Essentially, a protected branch would allow you to create a branch (testdeploybranch) which only you would be allowed to merge code into. Whenever a commit to dev passed the GitLab CI tests and deploy jobs, as well as your manual review, you could merge that commit into the protected branch to trigger the release. For this branch you can then set up a special release job in GitLab CI using the only option in the .gitlab-ci.yml job definition. Read more here: http://doc.gitlab.com/ci/yaml/README.html
So something like this:
release:
  only:
    - testdeploybranch
  type: release
  script: some command or script invocation to deploy to Test
This might not be exactly what you are after, but it does allow you to do manual releases from GitLab. It does not provide an easy way to perform the same release procedure manually for different servers. Perhaps someone else might be able to expand on this strategy.
Finally, we have GitLab CI manual actions, which were introduced in GitLab 8.10.
