GitLab CI issue with passing artifacts to a downstream pipeline with the trigger and needs keywords

I am working on a multi-project pipeline and using the trigger keyword to trigger a downstream pipeline, but I'm not able to pass artifacts created in the upstream project. I am using needs to fetch the artifacts like so:
Downstream Pipeline block to get artifacts:
needs:
  - project: workspace/build
    job: build
    ref: master
    artifacts: true
Upstream Pipeline block to trigger:
build:
  stage: build
  artifacts:
    paths:
      - ./policies
    expire_in: 2h
  only:
    - master
  script:
    - echo 'Test'
  allow_failure: false

triggerUpstream:
  stage: deploy
  only:
    - master
  trigger:
    project: workspace/deploy
But I am getting the following error:
This job depends on other jobs with expired/erased artifacts:
I'm not sure what's wrong.

It looks like there is a problem sharing artifacts between pipelines as well as between projects. It is a known bug and has been reported here:
https://gitlab.com/gitlab-org/gitlab/-/issues/228586
You can find a workaround there, but since it requires adding an access token to the project it is not the best solution.
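For reference, the workaround discussed in that issue amounts to downloading the upstream artifacts through the jobs artifacts API instead of relying on cross-project needs. A minimal sketch of a downstream job doing that, assuming a CI/CD variable UPSTREAM_ACCESS_TOKEN holding a token with read access to the upstream project and UPSTREAM_PROJECT_ID holding its numeric ID (both variable names and the job name are illustrative, not from the original config):

fetch-artifacts:
  stage: build
  script:
    # download the artifacts of the latest successful 'build' job on master from the upstream project
    - 'curl --location --output artifacts.zip --header "PRIVATE-TOKEN: $UPSTREAM_ACCESS_TOKEN" "$CI_API_V4_URL/projects/$UPSTREAM_PROJECT_ID/jobs/artifacts/master/download?job=build"'
    - unzip artifacts.zip   # the upstream 'policies' directory is now available locally
  artifacts:
    paths:
      - policies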

Your upstream pipeline job build is set to store its artifacts for only 2 hours (from the expire_in: 2h line). Your downstream pipeline must have run at least 2 hours after the artifacts were created, so they had expired and been erased, generating that error.
To solve it you can either update the expire_in field to however long you need the artifacts to be available (for example, if you know the downstream pipeline will run up to 5 days later, set it to 5d), or rerun the build job to recreate the artifacts.
You can read more about the expire_in keyword and artifacts in general in the docs.
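For example, a longer retention on the upstream job would look like this (5d is only an illustrative value; choose whatever window covers the delay before the downstream pipeline runs):

build:
  stage: build
  script:
    - echo 'Test'
  artifacts:
    paths:
      - ./policies
    expire_in: 5d   # keep the artifacts long enough for the downstream pipeline to consume them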

It isn't a problem with expired artifacts; the error message is simply wrong. In my case I am able to download the artifacts as a zip directly from the UI of the executed job. My expire_in is set to 1 week, yet I am still getting this message.

Related

How to access artifacts in next stage in GitLab CI/CD

I am setting up GitLab CI/CD for the first time. I have two stages, build and deploy. The job in the build stage produces artifacts, and the job in the deploy stage then uploads those artifacts to AWS S3. Both jobs use the same runner but different Docker images.
default:
  tags:
    - dev-runner

stages:
  - build
  - deploy

build-job:
  image: node:14
  stage: build
  script:
    - npm install
    - npm run build:prod
  artifacts:
    paths:
      - deploy/build.zip

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  script:
    - aws s3 cp deploy/build.zip s3://mys3bucket
The build-job successfully creates the artifacts. The GitLab documentation says artifacts will be automatically downloaded and available in the next stage; however, it does not specify where and how these artifacts are available to consume in the next stage.
Question
In the deploy-job, will the artifacts be available at the same location, i.e. deploy/build.zip?
The artifacts should be available to the second job in the same location where the first job saved them using the artifacts directive.
I think this question already has an answer on the gitlab forum:
https://forum.gitlab.com/t/access-artifact-in-next-task-to-deploy/9295
Maybe you need to make sure the jobs run in the correct order using the dependencies directive, which is also mentioned in the forum discussion accessible via the link above.
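If the artifacts are not picked up automatically, declaring the dependency explicitly is a reasonable first step. A minimal sketch based on the config above (only the dependencies entry is new; everything else is from the original job):

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  dependencies:
    - build-job   # download build-job's artifacts into this job's workspace
  script:
    # the artifact is restored at the same relative path it was saved from
    - aws s3 cp deploy/build.zip s3://mys3bucket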

Download artifacts between pipelines in the same project in GitLab CI

I am currently running 2 jobs: one runs unit tests that generate a coverage.xml file from a PHP-based project, and the other launches a SonarQube analysis based on that coverage file.
Here is the gitlab-ci.yml:
stages:
  - tests
  - sonar

tests:
  stage: "tests"
  image: some-image
  only:
    - merge_requests
  script:
    - script.sh
  artifacts:
    paths:
      - var/php/xml/coverage.xml

sonarqube-scanner:
  stage: "sonar"
  only:
    - specific_branch
  image:
    name: sonarsource/sonar-scanner-cli:latest
  cache:
    key: ${CI_JOB_NAME}
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.php.coverage.reportPaths=#with_some_parameters
  allow_failure: false
  dependencies:
    - tests
When I run those 2 jobs with the same only condition (both set to a specific branch), the sonar job can retrieve the artifact without any problem.
As soon as I put different only conditions on those 2 jobs, which is only merge_requests for my unit tests and only a specific branch for my sonar scan, and the merge request is not on the branch specified in the sonar only condition, the sonar job is not able to retrieve the artifact.
Is there any way to pass an artifact from one job to another when they have different only conditions?
Thanks in advance
As long as the dependency's only clause ALWAYS includes the only clause of the dependent job, it should work. In other words, something like: tests runs on all merge requests and sonar runs only on some merge requests.
Is there any way to pass an artifact from one job to another when they have different only conditions?
Actually the conditions themselves do not matter, as long as they both evaluate to letting the jobs run.
In the case where the merge request is not on the branch specified in the sonar only condition, the sonar job is not able to retrieve the artifact.
If it's not the branch specified in the only condition, the sonarqube-scanner job won't actually run… Are you sure that the sonarqube-scanner job is really triggered?
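A minimal sketch of the idea from the first answer: keep both jobs in merge request pipelines so they share one pipeline and the artifact can flow through dependencies, while restricting the scan to merge requests targeting a particular branch (the target branch name "main" is only an assumption, not from the original config):

tests:
  stage: "tests"
  image: some-image
  only:
    - merge_requests            # runs in every merge request pipeline
  script:
    - script.sh
  artifacts:
    paths:
      - var/php/xml/coverage.xml

sonarqube-scanner:
  stage: "sonar"
  only:
    refs:
      - merge_requests          # same pipeline type as the tests job...
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"   # ...but only for MRs into main
  dependencies:
    - tests                     # pulls coverage.xml produced earlier in the same pipeline
  script:
    - sonar-scanner -Dsonar.php.coverage.reportPaths=var/php/xml/coverage.xml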

Artifact not available in script

My pipeline has 4 stages:
build - Should only happen on merge requests
test - Should only happen on merge requests
report - Should only happen on merge into master
release - Should only happen on merge into master
BUILD: During the build phase I build my test container and upload it to the container registry.
TEST: During the test phase I run the tests within the container, copy out the coverage report from the container and artifact the entire report directory.
REPORT: During the reporting stage I want to copy the artifact from the test stage into a GitLab Pages directory so we can view the report.
RELEASE: Runs terraform plan/apply and builds the production container.
Since my report and release stages are detached, I'm unable to upload the artifact that was created in a different stage. My workaround is to upload the current coverage report to /public/<commit-sha> and then move it to /public when it successfully merges into master. Might not be the best solution, but I have limited knowledge of GitLab's pipelines.
The issue I'm having is pretty weird.
pages:
  stage: report
  dependencies:
    - unittest
  script:
    - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then mv public/$CI_COMMIT_SHA public/; else mv coverage/ public/$CI_COMMIT_SHA; fi
  artifacts:
    paths:
      - public
    expire_in: 30 days
This complains that mv: can't rename 'coverage/': No such file or directory
However, this works perfectly fine:
pages:
  stage: report
  dependencies:
    - unittest
  script:
    - mv coverage/ public
  artifacts:
    paths:
      - public
    expire_in: 30 days
If there's an easier solution to pass artifacts between jobs that would be great, but I'm not sure if I'm missing something really obvious in my script.

Make a stage happen in gitlab-ci if one of two other stages completed

I have a pipeline that runs automatically when code is pushed to GitLab. There's a terraform apply step that I want to be able to run manually in one case (resources destroyed/recreated) and automatically in another (resources simply added or destroyed). I almost got this with a manual step, but I can't see how to get the pipeline to be automatic in the safe case. The manual terraform apply step would not be the last in the pipeline.
Is it possible to say 'do step C if step A completed or step B completed'? Kind of branch the pipeline? Or could I do it with two pipelines, and failure in one triggers the other?
Current partial test code (GitLab CI YAML) here:
# stop with a warning if resources will be created and destroyed
check:
  stage: check
  script:
    - ./terraformCheck.sh
  allow_failure: true

# Apply changes manually, whether there is a warning or not
override:
  stage: deploy
  environment:
    name: production
  script:
    - ./terraformApply.sh
  dependencies:
    - plan
  when: manual
  allow_failure: false
  only:
    - master

log:
  stage: log
  environment:
    name: production
  script:
    - ./terraformLog.sh
  when: always
  only:
    - master

How to control whether a stage runs based on the previous stage's result, without using artifacts?

We have a project hosted on an internal GitLab installation.
The pipeline of the project has 3 stages:
Build
Tests
Deploy
The objective is to hide or disable the Deploy stage when the Tests stage fails.
The problem is that we can't use artifacts because they are lost each time our machines reboot.
My question: Is there an alternative solution to artifacts to achieve this task?
The .gitlab-ci.yml in use looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing fully automated on each commit, or at least on every push. Having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo. Your only advantage over local execution of the targets right now is having visibility in a central place.
That said, there is no way to conditionally show or hide CI jobs, because that goes against the basic idea. If you insist on your approach, you could look at the artifacts of the previous stages and abort the manual execution if something is wrong.
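A minimal sketch of that fallback, assuming the test job writes a small status file as an artifact (test_status.txt, run_tests.sh and deploy.sh are illustrative placeholders, not from the original config):

all_tests:
  stage: tests
  script:
    - ./run_tests.sh
    - echo "passed" > test_status.txt
  artifacts:
    paths:
      - test_status.txt

prod:
  stage: deploy
  when: manual
  environment: prod
  script:
    # refuse the manual deploy unless the tests stage recorded a pass
    - 'grep -q "passed" test_status.txt || { echo "Tests did not pass, aborting deploy"; exit 1; }'
    - ./deploy.sh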
The problem is that we can't use artifacts because they are lost each time our machines reboot
AFAIK artifacts are uploaded to the GitLab server and not kept on the runners. You should be fine having your artifacts passed from stage to stage.
By the way, the default for when is on_success, which means a job executes only when all jobs from prior stages succeed.
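So if the real goal is only that prod never runs after failed tests, a sketch along these lines (dropping when: manual from the deploy job and relying on the on_success default; the deploy script line is a placeholder) may already be enough:

prod:
  stage: deploy
  environment: prod
  # no "when: manual" here: the default is when: on_success, so this job
  # only runs if every job in the build and tests stages succeeded
  script:
    - ./deploy.sh   # illustrative placeholder for the real deploy commands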
