How can I run a job that doesn't block subsequent stages in GitLab?

In a project I'm running two stages with these jobs:
- build
  - compile & test
  - generate sonar report
- deploy
  - deploy to staging environment [manual]
  - deploy to production [manual]
The jobs in the deploy stage depend on the outputs of the compile & test job. However, the generate sonar report job is not required to finish before I can start any job in the deploy stage. Nevertheless, GitLab insists that all jobs in the build stage have finished before I can launch any job in the deploy stage.
Is there a way I can tell GitLab that the generate sonar report job should not block subsequent pipeline stages? I already tried allow_failure: true on this job, but this does not seem to have the desired effect. This job takes a long time to finish, and I really don't want to wait for it every time before being able to deploy.

We have a similar situation, and while we do use allow_failure: true, this does not help when the Sonar job simply takes a long time to run, whether it fails or succeeds.
Since you don't want your deploy stage to actually be gated by the outcome of the generate sonar report job, I suggest moving the generate sonar report job to the deploy stage, so your pipeline would become:
- build
  - compile & test
- deploy
  - deploy to staging environment [manual]
  - deploy to production [manual]
  - generate sonar report [allow_failure: true]
This way, the generate sonar report job does not delay your deploy stage jobs.
The other benefit of running generate sonar report after compile & test is that you can save coverage reports from the compile & test job as GitLab job artifacts, and then have the generate sonar report job consume them as dependencies, so Sonar can monitor your coverage, too.
Finally, we find it useful to split compile & test into a build job and a test job, so we can tell build failures apart from test failures - and we can then also run multiple test jobs in parallel, all in the same test stage. Note that you will need to pass the artifacts from the build job to the test job(s) via GitLab job artifacts & dependencies if you choose to do this.
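A minimal sketch of the suggested layout; the job names, script paths, and coverage directory below are placeholders standing in for the original pipeline's jobs, not taken from it:

stages:
  - build
  - deploy

compile_and_test:
  stage: build
  script:
    - ./ci/build_and_test.sh          # placeholder build + test command
  artifacts:
    paths:
      - coverage/                     # placeholder coverage report location

deploy_staging:
  stage: deploy
  when: manual
  script:
    - ./ci/deploy.sh staging          # placeholder deploy command

deploy_production:
  stage: deploy
  when: manual
  script:
    - ./ci/deploy.sh production

generate_sonar_report:
  stage: deploy
  allow_failure: true                 # a Sonar failure won't mark the pipeline as failed
  dependencies:
    - compile_and_test                # consume the coverage artifacts
  script:
    - sonar-scanner                   # placeholder scanner invocation

Because generate_sonar_report now sits in the same stage as the deploy jobs, it runs in parallel with them, so the manual deploys can be started without waiting for it.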

From my point of view, it depends on your stage semantics. You should decide what is most important in your pipeline: clarity of stages, or just getting the job done.
GitLab has many handy features, like the needs keyword, which you can use to specify direct edges on the dependency graph:
stages:
  - build
  - deploy

build:package:
  stage: build
  script:
    - echo "compile and test"
    - mkdir -p target && echo "hello" > target/file.txt
  artifacts:
    paths:
      - ./**/target

build:report:
  stage: build
  script:
    - echo "consume the target artifacts"
    - echo "waiting for 120 seconds to continue"
    - sleep 120
    - mkdir -p target/reports && echo "reporting" > target/reports/report.txt
  artifacts:
    paths:
      - ./**/target/reports

deploy:
  stage: deploy
  needs: ["build:package"]
  script:
    - echo "deploy your package on remote site"
    - cat target/file.txt

Unless I'm mistaken, this is currently not possible; there is an open feature proposal, and another similar one, to add what you are suggesting.

Related

Download artifacts between pipelines in the same project (GitLab CI)

I am currently running two jobs: one runs some unit tests which generate a coverage.xml file for a PHP-based project, and another launches a SonarQube analysis based on that coverage file.
Here is the .gitlab-ci.yml:
stages:
  - tests
  - sonar

tests:
  stage: "tests"
  image: some-image
  only:
    - merge_requests
  script:
    - script.sh
  artifacts:
    paths:
      - var/php/xml/coverage.xml

sonarqube-scanner:
  stage: "sonar"
  only:
    - specific_branch
  image:
    name: sonarsource/sonar-scanner-cli:latest
  cache:
    key: ${CI_JOB_NAME}
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.php.coverage.reportPaths=#with_some_parameters
  allow_failure: false
  dependencies:
    - tests
When I run those two jobs with the same only condition (both set to a specific branch), the sonar job can retrieve the artifact without any problem.
As soon as I put different only conditions on those two jobs (only on merge requests for my unit tests, and only on a specific branch for my sonar scan), and the merge request is not on the branch specified in the sonar only condition, the sonar job is not able to retrieve the artifact.
Is there any way to pass an artifact from one job to another when they have different only conditions?
Thanks in advance.
As long as the dependency's only clause ALWAYS includes the only clause of the dependent job, it should work. In other words, something like: tests runs on all merge requests, and sonar only runs on some of them.
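A rough sketch of that rule, keeping the job names from the question; the extra specific_branch entry on the tests job is an assumption about where tests would also need to run:

tests:
  stage: "tests"
  only:
    - merge_requests
    - specific_branch            # run tests everywhere the sonar job can run
  script:
    - script.sh
  artifacts:
    paths:
      - var/php/xml/coverage.xml

sonarqube-scanner:
  stage: "sonar"
  only:
    - specific_branch            # a strict subset of the tests job's only conditions
  dependencies:
    - tests                      # the coverage.xml artifact is then always present
  script:
    - sonar-scanner -Dsonar.php.coverage.reportPaths=#with_some_parameters

Whenever the sonarqube-scanner job runs, a tests job and its coverage artifact exist in the same pipeline, so the dependencies entry can resolve it.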
Is there any way to pass an artifact from one job to another when they have different only conditions?
Actually, the conditions themselves do not matter, as long as they both evaluate to letting the jobs run.
In the case where the merge request is not on the branch specified in the sonar only conditions, the sonar job is not able to retrieve the artifact.
If it's not the branch specified in the only condition, the sonarqube-scanner job won't run at all… Are you sure the sonarqube-scanner job is really triggered?

How to make a stage depend on another stage?

I have a YAML file as below. Let's say a *.md file is committed: the build does not work, but the test does. How can I make the test depend on the build, so that if the build doesn't work, the test doesn't work either?
Thanks in advance.
build:
  stage: build
  script:
    - echo "Build is running"
  only:
    changes:
      - Dockerfile
      - requirements.txt
      - ./configs/*

test:
  stage: test
  script:
    - echo "Test is running"
    - echo "$CI_JOB_STAGE"
  dependencies:
    - build
That should be what stages defines
Use stages to define stages that contain groups of jobs.
stages is defined globally for the pipeline.
Use stage in a job to define which stage the job is part of.
The order of the stages items defines the execution order for jobs:
Jobs in the same stage run in parallel.
Jobs in the next stage run after the jobs from the previous stage complete successfully.
For example:
stages:
  - build
  - test
  - deploy
All jobs in build execute in parallel.
If all jobs in build succeed, the test jobs execute in parallel.
If all jobs in test succeed, the deploy jobs execute in parallel.
If all jobs in deploy succeed, the pipeline is marked as passed.
If any job fails, the pipeline is marked as failed and jobs in later stages do not start.
Jobs in the current stage are not stopped and continue to run.
So, in your case:
stages:
  - build
  - test
test won't run if build fails.

GitLab CI issue with passing artifacts to downstream pipeline with trigger and needs keywords

I am working on a multi-pipeline project, using the trigger keyword to trigger a downstream pipeline, but I'm not able to pass artifacts created in the upstream project. I am using needs to get the artifact, like so:
Downstream Pipeline block to get artifacts:
needs:
  - project: workspace/build
    job: build
    ref: master
    artifacts: true
Upstream Pipeline block to trigger:
build:
  stage: build
  artifacts:
    paths:
      - ./policies
    expire_in: 2h
  only:
    - master
  script:
    - echo 'Test'
  allow_failure: false

triggerUpstream:
  stage: deploy
  only:
    - master
  trigger:
    project: workspace/deploy
But I am getting the following error:
This job depends on other jobs with expired/erased artifacts:
I'm not sure what's wrong.
Looks like there is a problem sharing artifacts between pipelines as well as between projects. It is a known bug and has been reported here:
https://gitlab.com/gitlab-org/gitlab/-/issues/228586
You can find a workaround there, but since it requires adding an access token to the project, it is not the best solution.
Your upstream pipeline job "build" is set to only store its artifacts for 2 hours (from the expire_in: 2h line). Your downstream pipeline must have run at least 2 hours after the artifacts were created, so the artifacts expired and were erased, generating that error.
To solve it, you can either update the expire_in field to however long you need the artifacts to stay around (for example, if you know the downstream pipeline will run up to 5 days later, set it to 5d for 5 days), or rerun the build job to recreate the artifacts.
You can read more about the expire_in keyword and artifacts in general in the docs.
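For example, if the downstream pipeline may run up to five days later, the upstream job could keep its artifacts that long; the 5d value here is only an illustration:

build:
  stage: build
  only:
    - master
  script:
    - echo 'Test'
  artifacts:
    paths:
      - ./policies
    expire_in: 5d    # keep the artifacts long enough for the downstream pipeline to fetch them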
It isn't a problem with expired artifacts; the error message is incorrect. In my case I am able to download the artifacts as a zip directly from the UI on the executed job. My expire_in is set to 1 week, yet I am still getting this message.

How can I create manually-run GitLab pipeline jobs?

I would like to know how to manually trigger specific jobs in a project's CI pipeline.
Since there is only one .gitlab-ci.yml file, I can define many jobs to be executed one after the other sequentially. But what if I want to start a manual CI pipeline that only carries out one job?
As I understand it, every time the pipeline runs, it will run all jobs, unless I use many only and similar parameters. For instance, when I have this simple pipeline config:
stages:
  - build

build:
  stage: build
  script:
    - npm i
    - npm run build
    - echo "successful build"
What do I do if I want an echo job that runs a simple echo "hello" script, but does only that, and only when I manually run it? There are no 'triggers' for a job like that, as far as I know.
Is this even a possibility?
Thanks for the clarification!
Apparently, the solution is pretty simple; I just needed to add a when: manual parameter to the job:
echo:
  stage: echo
  script:
    - echo 'this is a manual job'
  when: manual
Once that's done, the job can be triggered independently from the pipeline view in the GitLab UI.

How to control whether a stage runs based on the previous stage's result, without using artifacts?

We have a project hosted on an internal GitLab installation.
The pipeline of the project has 3 stages:
- Build
- Tests
- Deploy
The objective is to hide or disable the Deploy stage when the Tests stage fails.
The problem is that we can't use artifacts because they are lost each time our machines reboot.
My question: Is there an alternative solution to artifacts to achieve this task?
The .gitlab-ci.yml used looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing fully automated on each commit, or at least on every push. Having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo. Your only advantage over running the targets locally right now is having visibility in a central place.
That said, there is no way to conditionally show or hide CI tasks, because it's against the basic idea. If you insist on your idea, you could look up the artifacts of the previous stages and abort the manual execution in case something is wrong.
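A rough sketch of that idea, assuming the all_tests job leaves a hypothetical marker file (test-results/passed) as an artifact; the deploy command is a placeholder:

prod:
  stage: deploy
  when: manual
  dependencies:
    - all_tests                  # pull the artifacts produced by the tests stage
  script:
    # abort the manual deploy if the marker left by the tests stage is missing
    - test -f test-results/passed || { echo "tests did not pass, aborting deploy"; exit 1; }
    - ./deploy.sh                # placeholder deploy command
  environment: prod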
The problem is that we can't use artifacts because they are lost each time our machines reboot.
AFAIK artifacts are uploaded to the GitLab server and not saved on the runners, so you should be fine passing your artifacts from stage to stage.
By the way, the default for when is on_success, which means a job executes only when all jobs from prior stages succeed.
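A minimal sketch of that default behaviour, assuming the when: manual lines are dropped from the build and test jobs so the stages gate each other automatically (the script names are placeholders):

stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  script:
    - ./build.sh          # placeholder build command

all_tests:
  stage: tests
  script:
    - ./run_tests.sh      # placeholder test command

prod:
  stage: deploy
  when: manual            # deploy stays manual, but still requires the
  environment: prod       # build and tests stages to succeed first
  script:
    - ./deploy.sh         # placeholder deploy command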
