Download artifacts between pipelines in the same project (GitLab CI)

I am currently running two jobs: one runs unit tests, which generate a coverage.xml file for a PHP-based project, and another launches a SonarQube analysis based on that coverage file.
Here is the .gitlab-ci.yml:
stages:
  - tests
  - sonar

tests:
  stage: "tests"
  image: some-image
  only:
    - merge_requests
  script:
    - script.sh
  artifacts:
    paths:
      - var/php/xml/coverage.xml

sonarqube-scanner:
  stage: "sonar"
  only:
    - specific_branch
  image:
    name: sonarsource/sonar-scanner-cli:latest
  cache:
    key: ${CI_JOB_NAME}
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.php.coverage.reportPaths=#with_some_parameters
  allow_failure: false
  dependencies:
    - tests
When I run those two jobs with the same only condition (both restricted to a specific branch), the sonar job can retrieve the artifact without any problem.
As soon as I give the two jobs different only conditions (only merge requests for my unit tests, only a specific branch for my sonar scan), and the merge request is not on the branch specified in the sonar only condition, the sonar job is unable to retrieve the artifact.
Is there any way to pass an artifact from one job to another when they have different only conditions?
Thanks in advance

As long as the dependency's only clause ALWAYS covers the only clause of the dependent job, it should work. In other words: something like tests running on all merge requests, and sonar running on only some merge requests.
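For example, a minimal sketch reusing the job names from the question (the target-branch check is an assumption about which merge requests should get a sonar run):

tests:
  stage: tests
  script:
    - script.sh
  artifacts:
    paths:
      - var/php/xml/coverage.xml
  only:
    - merge_requests                 # runs in every merge request pipeline

sonarqube-scanner:
  stage: sonar
  dependencies:
    - tests
  script:
    - sonar-scanner
  only:
    refs:
      - merge_requests               # still a merge request pipeline...
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"   # ...but only some of them

Both jobs now run in the same merge request pipeline, so the artifact is always there when sonarqube-scanner runs; the scanner simply skips merge requests that don't match.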

Is there any way to pass an artifact from one job to another when they have different only conditions?
Actually, the conditions themselves do not matter, as long as they both evaluate to letting the job run.
In the case where the merge request is not on the branch specified in the sonar only condition, the sonar job is not able to retrieve the artifact.
If it's not the branch specified in the only condition, the sonarqube-scanner job won't run at all… Are you sure the sonarqube-scanner job is really triggered?

Related

GitLab CI: how to make sure a job executes only if the previous job did?

I have 2 stages with multiple jobs. The jobs in the first stage have rules that tell them whether they need to run, so what I am trying to do is tell some of the jobs in the second stage to execute only if the relevant job in the first stage ran.
I don't want to reuse the rules from the first-stage jobs, to prevent conflicts.
Is there a way to do that?
stages:
  - build
  - deploy

Build0:
  stage: build
  extends:
    - .Build0Rules
    - .Build0Make

Build1:
  stage: build
  extends:
    - .Build1Rules
    - .Build1Make

Deploy0:
  stage: deploy
  dependencies:
    - Build0
  script:
    - bash gitlab-ci/deploy0.sh

Deploy1:
  stage: deploy
  dependencies:
    - Build1
  script:
    - bash gitlab-ci/deploy1.sh
Thank you in advance :)
No, you cannot specify that a job should be added to the pipeline only if another job was added. Each job can specify whether it is added to the pipeline using only/except conditions or rules, but these conditions cannot reference other jobs.
It is possible to generate a pipeline YAML file and then trigger it, but I think this would not be ideal because of the amount of work involved.
stages:
  - Build
  - Deploy

build:
  stage: Build
  script:
    - do something...
  artifacts:
    paths:
      - deploy-pipeline-gitlab-ci.yml

deploy:
  stage: Deploy
  trigger:
    include:
      - artifact: deploy-pipeline-gitlab-ci.yml
        job: build
    strategy: depend
I would recommend using similar only/except conditions or rules on each job to build the pipeline that you want.
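One way to do that is to put the shared condition in a single hidden job and extend it from both the build job and its deploy job, so they always appear in the pipeline together. A sketch using the question's job names (the rule contents are illustrative):

.Build0Rules:
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'   # illustrative condition

Build0:
  stage: build
  extends:
    - .Build0Rules
    - .Build0Make

Deploy0:
  stage: deploy
  extends:
    - .Build0Rules   # same condition, so Deploy0 is in the pipeline exactly when Build0 is
  dependencies:
    - Build0
  script:
    - bash gitlab-ci/deploy0.sh

This avoids duplicating the rule text while keeping the two jobs' conditions in lockstep.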
Yes, you can. Check out the needs keyword, which allows you to do exactly what you want: execute a job based on the execution of other jobs, ignoring stage order.
The documentation: https://docs.gitlab.com/ee/ci/yaml/#needs
Here is also an example of how to build a DAG (directed acyclic graph) using needs: https://about.gitlab.com/blog/2020/12/10/basics-of-gitlab-ci-updated/#directed-acyclic-graphs-get-faster-and-more-flexible-pipelines
In your case:
Deploy0:
  stage: deploy
  needs: ["Build0"]
  script:
    - bash gitlab-ci/deploy0.sh

Deploy1:
  stage: deploy
  needs: ["Build1"]
  script:
    - bash gitlab-ci/deploy1.sh
Note that you can also specify multiple jobs in needs:
needs: ["build0", "test0", "test1"]

Run GitLab jobs sequentially

I have two simple stages (build and test), and I want the jobs in the pipeline to run sequentially.
Specifically, I want the test job not to run until the build job has passed completely.
My .gitlab-ci.yml file:
stages:
  - build
  - test

build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests

test:
  stage: test
  services:
  script:
    - mvn verify
    - mvn jacoco:report
  artifacts:
    reports:
      junit:
        - access/target/surefire-reports/TEST-*.xml
    paths:
      - access/target/site/jacoco
    expire_in: 1 week
  only:
    - merge_requests
Can I add
needs:
  - build
in the test stage?
Based on the simplicity of your build file, I do not think you actually need needs. Per the documentation, all stages are executed sequentially.
The pitfall you are in right now is the only reference: the build stage will run for any branch and therefore ignore merge requests. If you adjust the only directive on your build job, you might get the result you are looking for, like:
build:
  stage: build
  script:
    - mvn clean package
  only:
    - merge_requests
    - master # might be main, develop, etc., whatever your long-living branches are
This way it will not be triggered for each branch, but only for merge requests and the long-living branches (see the only documentation). Now the execution is assigned not to the branch but to the merge request, and you will have your expected outcome (at least, that is what I assume).
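For completeness, the asker's idea works too: adding needs makes the build → test dependency explicit and is valid YAML here (a minimal sketch):

test:
  stage: test
  needs:
    - build   # test waits for, and requires, a successful build job
  script:
    - mvn verify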

GitLab CI issue with passing artifacts to a downstream pipeline with the trigger and needs keywords

I am working on a multi-pipeline project and using the trigger keyword to trigger a downstream pipeline, but I'm not able to pass artifacts created in the upstream project. I am using needs to get the artifact, like so:
Downstream pipeline block to get the artifacts:
needs:
  - project: workspace/build
    job: build
    ref: master
    artifacts: true
Upstream pipeline block with the trigger:
build:
  stage: build
  artifacts:
    paths:
      - ./policies
    expire_in: 2h
  only:
    - master
  script:
    - echo 'Test'
  allow_failure: false

triggerUpstream:
  stage: deploy
  only:
    - master
  trigger:
    project: workspace/deploy
But I am getting the following error:
This job depends on other jobs with expired/erased artifacts:
I'm not sure what's wrong.
It looks like there is a problem sharing artifacts between pipelines as well as between projects. It is a known bug and has been reported here:
https://gitlab.com/gitlab-org/gitlab/-/issues/228586
You can find a workaround there, but since it requires adding an access token to the project, it is not the best solution.
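The workaround boils down to downloading the upstream job's artifacts through the API instead of relying on needs. A hedged sketch, assuming a project access token is stored in a PRIVATE_TOKEN CI/CD variable (the variable name is an assumption):

deploy:
  stage: deploy
  script:
    # Fetch the latest successful `build` artifacts from the workspace/build project
    - 'curl --location --output artifacts.zip --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "$CI_API_V4_URL/projects/workspace%2Fbuild/jobs/artifacts/master/download?job=build"'
    - unzip artifacts.zip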
Your upstream pipeline job build is set to store its artifacts for only 2 hours (from the expire_in: 2h line). Your downstream pipeline must have run at least 2 hours after the artifacts were created, so the artifact expired and was erased, generating that error.
To solve it, you can either update the expire_in field to however long you need the artifacts to stay active (for example, if you know the downstream pipeline will run up to 5 days later, set it to 5d), or rerun the build job to recreate the artifacts.
You can read more about the expire_in keyword and artifacts in general in the docs.
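In other words, the fix is a one-line change in the upstream job (5 days is an assumed retention window; pick whatever covers your downstream delay):

build:
  stage: build
  script:
    - echo 'Test'
  artifacts:
    paths:
      - ./policies
    expire_in: 5 days   # was 2h; keep the artifacts long enough for the downstream pipeline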
It isn't a problem with expired artifacts; the error message is incorrect. In my case I am able to download the artifacts as a zip directly from the UI on the executed job. My expire_in is set to 1 week, yet I am still getting this message.

Can GitLab CI trigger a single stage of another project's pipeline?

I have a project A and an E2E project. I want deploying project A to trigger the E2E pipeline to run tests, but I only want to trigger the test stage; we don't need the trigger to run E2E's build, deploy, etc.
e2e_tests:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend
    stage: test
I tried using stage inside the trigger config, but got the error unknown keys: stage.
Any suggestions?
In your E2E project, the one that receives the trigger, you can tell a job to only run when the pipeline source is a trigger using the rules syntax:
build-from-trigger:
  stage: build
  when: never
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'master' && $CI_PIPELINE_SOURCE == 'trigger'"
      when: always
  script:
    - ./build.sh # this is just an example, you'd run whatever you normally would here
The first when statement, when: never, sets the default for the job: by default, this job will never run. Then, using the rules syntax, we set a condition that allows the job to run. If the CI_COMMIT_REF_NAME variable (the branch or tag name) is master AND the CI_PIPELINE_SOURCE variable (whatever kicked off this pipeline) is trigger, then we run this job.
You can read about the when keyword here: https://docs.gitlab.com/ee/ci/yaml/#when, and you can read the rules documentation here: https://docs.gitlab.com/ee/ci/yaml/#rules
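On the upstream side (project A), the trigger job then stays as in the question, minus the invalid stage key under trigger; the rules in the E2E project decide which of its jobs actually run:

e2e_tests:
  stage: test
  trigger:
    project: project/E2E
    branch: master
    strategy: depend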

How to control a stage based on the previous stage's result without using artifacts?

We have a project hosted on an internal GitLab installation.
The project's pipeline has 3 stages:
Build
Tests
Deploy
The objective is to hide or disable the Deploy stage when the Tests stage fails.
The problem is that we can't use artifacts, because they are lost each time our machines reboot.
My question: is there an alternative to artifacts for achieving this?
The .gitlab-ci.yml in use looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing automated on each commit, or at least on every push. Having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo. Your only advantage over local execution of the targets right now is having visibility in a central place.
That said, there is no way to conditionally show or hide CI jobs, because it goes against that basic idea. If you insist on your approach, you could look up the artifacts of the previous stages and abort the manual execution if something is wrong.
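A minimal sketch of that idea (run-tests.sh and deploy.sh are hypothetical commands): the tests job writes a marker file as an artifact, and the manual deploy job aborts when the marker is missing:

all_tests:
  stage: tests
  script:
    - ./run-tests.sh       # hypothetical test command
    - touch tests-passed   # only created if the tests succeeded
  artifacts:
    paths:
      - tests-passed

prod:
  stage: deploy
  when: manual
  environment: prod
  script:
    - test -f tests-passed || { echo "tests did not pass"; exit 1; }
    - ./deploy.sh          # hypothetical deploy command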
The problem is that we can't use artifacts because they are lost each time our machines reboot
AFAIK artifacts are uploaded to the master and not saved on the runners, so you should be fine having your artifacts passed from stage to stage.
By the way, the default for when is on_success, which means a job executes only when all jobs from prior stages succeed.
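So the simplest fix may be to drop when: manual from prod and let that default gate the deploy (a sketch):

prod:
  stage: deploy
  environment: prod
  script:
    - ./deploy.sh   # hypothetical deploy command
  # no `when:` key -- the default on_success skips this job
  # automatically if anything in the tests stage failed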
