I would like to know how to generate artifacts for failed builds in GitLab CI, so that I can view the HTML report generated by the build.
I tried this:
artifacts:
  when: on_failure
  paths:
    - SmokeTestResults/
    - package.json
but unfortunately it does not work. I am using GitLab 8.11.4 Community Edition.
Using when: on_failure will upload the artifacts only when the job fails.
To upload the artifacts regardless of success or failure, use when: always.
https://docs.gitlab.com/ce/ci/yaml/index.html#artifactswhen
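For example, a minimal sketch of a job that uploads the HTML report whether the job passes or fails (the job name and test command here are assumptions, not from the original config):

smoke_tests:
  stage: test
  script:
    - npm run smoke-tests   # hypothetical command that writes SmokeTestResults/
  artifacts:
    when: always
    paths:
      - SmokeTestResults/
      - package.json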
when and paths (with the list of files under it) should all be nested at the same level under artifacts:
artifacts:
  when: on_failure
  paths:
    - SmokeTestResults/
    - package.json
I can run a script to build my project in the gitlab-ci.yaml config. Is there a way to let the pipeline offer the file produced by the build script for download from the browser, so that I can save it on my computer?
You could create a job artifact from the output of commands from your pipeline (like your build script), assuming you have redirected said output in a file.
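A minimal sketch of that redirection, assuming a hypothetical build.sh as the build script:

build:
  stage: build
  script:
    - ./build.sh > build-output.log 2>&1   # redirect stdout and stderr to a file
  artifacts:
    paths:
      - build-output.log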
You can then download job artifacts by using the GitLab UI or the API.
Using the GitLab CLI glab:
glab ci artifact <refName> <jobName> [flags]
# example
glab ci artifact main build
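Or with the REST API, which has an endpoint to download the artifacts archive of the latest successful job on a ref (the project id, ref, and job name below are placeholders):

curl --header "PRIVATE-TOKEN: <your_access_token>" \
  --output artifacts.zip \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/artifacts/main/download?job=build"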
This is my CI config, which makes the build files downloadable from the GitLab UI via the artifacts field.
stages:
  - build

.build-base:
  stage: build
  script:
    - cd dev/${APP_PATH}
    - yarn ${BUILD_SCRIPT}
  when: manual
  after_script:
    - mv dev/${APP_PATH}/dist ./${APP_OUTPUT_NAME}
  artifacts:
    paths:
      - ./${APP_OUTPUT_NAME}
    expire_in: 2 weeks
    name: ${APP_OUTPUT_NAME}_${CI_JOB_ID}
  tags:
    - kube-runner

order_workbench_sit_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build:sit'
    APP_OUTPUT_NAME: 'order_workbench_sit'

order_workbench_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build'
    APP_OUTPUT_NAME: 'order_workbench'
I have a .gitlab-ci.yml file which I want to use to run a script for merge request validation. The same script should be used in CI, but only there should the result be published to GitLab Pages. Also, the result should be cached only for the CI run.
This is a simplified version of the current .gitlab-ci.yml:
pages:
  stage: deploy
  script:
    - mkdir public/
    - touch public/file.txt
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid
(The real-world code is in the fdroid-firefox gitlab repo.)
There are two ways the pipeline can be triggered. Depending on which one, I do or do not want to publish to Pages:
by merge request validation. In this case, I want to execute the script part, but I don't want to publish or cache the result (otherwise, anyone with permission to create a merge request could overwrite the GitLab Pages content).
by CI (triggered both after a check-in to the master branch and on a schedule). In this case, I want the result to be cached and the GitLab Pages content to be updated.
I already tried splitting up the stages:
stages:
  - build
  - deploy

build_repo:
  stage: build
  script:
    - mkdir public/
    - touch public/file.txt

pages:
  stage: deploy
  script: echo "publish to Gitlab pages"
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid
(Original .gitlab-ci.yml file)
But by doing this, the pages:deploy stage failed because it does not have access to the result of the build stage. The pages:deploy stage shows an error symbol, and the tooltip says missing pages artifacts. (real-world log)
The log says:
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
What am I doing wrong that I don't have access to the result of the build stage?
How can I run the script section in both cases but still deploy to pages only from master branch?
You don't save your public path as an artifact in your build job, and that's why it is missing in the pages job of the next deploy stage.
You have this:
build_repo:
  stage: build
  script:
    - your script
Try to save artifacts in your build job like this:
build_repo:
  stage: build
  script:
    - your script
  artifacts:
    when: always
    paths:
      - public
This way the artifacts are passed on to the next stage, deploy, and the pages job can see them.
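If other jobs in earlier stages also produce artifacts, you can optionally make the pages job fetch only what it needs by listing its dependencies. A sketch reusing the job names above:

pages:
  stage: deploy
  dependencies:
    - build_repo   # fetch artifacts only from the build_repo job
  script: echo "publish to Gitlab pages"
  artifacts:
    paths:
      - public
  only:
    - master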
I’d like to use the artifacts created by the Security/SAST.gitlab-ci.yml template in my final pipeline stage (reporting).
How can I modify the Security/SAST.gitlab-ci.yml template to store the artifacts somewhere in my project dir? I tried to define the following for this template, but this is not working:
artifacts:
  paths:
    - binaries/
I’d be grateful for any kind of support.
Thank you!
Solution
Your parameters need to be updated. Since SAST.gitlab-ci.yml cannot be edited directly, you need to either override one of its blocks from the .gitlab-ci.yml that includes it, or define and include your own custom SAST.gitlab-ci.yml. It seems you can get away with simply overriding the sast job; specifically, override the artifacts -> reports -> sast parameter.
Example
sast:
  stage: test
  artifacts:
    reports:
      sast: gl-sast-report.json
You also need to ensure the stages and the build step look something like this:
stages:
  - build
  - test

include:
  - template: Security/SAST.gitlab-ci.yml

build:
  stage: build
  script:
    - ...
  artifacts:
    paths:
      - binaries/
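A reporting job in a later stage can then consume the report. A sketch, assuming a final reporting stage; depending on your GitLab version you may also need to list gl-sast-report.json under artifacts -> paths in the sast override so that later jobs can fetch it:

stages:
  - build
  - test
  - reporting

report:
  stage: reporting
  dependencies:
    - sast
  script:
    - cat gl-sast-report.json   # placeholder: process the report here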
References
Gitlab SAST: https://docs.gitlab.com/ee/user/application_security/sast/
In my repo only source files are checked in; the code is tested and the dist files are generated in a pipeline. I then want to be able to tag a specific version and attach the artifacts generated by this pipeline to it. Ideally this should all happen with as little manual intervention as possible.
What is the best way to reference pipeline artifacts from a release?
You can use the release-cli in a stage after the job that built your app. To attach a file from a previous job to a release, you need that build job's id, which you can store in a file in the artifacts:
build:
  stage: build
  script:
    - echo "Build your app"
    - echo "${CI_JOB_ID}" > CI_JOB_ID.txt # This way you know the job id in the next stage
  artifacts:
    paths:
      - your_app.exe
      - CI_JOB_ID.txt
    expire_in: never
  rules:
    - if: $CI_COMMIT_TAG

release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - |
      release-cli create --name "Release $CI_COMMIT_TAG" --tag-name $CI_COMMIT_TAG \
        --assets-link "{\"name\":\"Executable file\",\"url\":\"https://gitlab.com/some/repo/-/jobs/`cat CI_JOB_ID.txt`/artifacts/file/your_app.exe\"}"
  rules:
    - if: $CI_COMMIT_TAG
This way, every time you tag your repo, it will create a release if the pipeline succeeds.
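An alternative sketch, assuming a GitLab version that supports the release: keyword and artifacts:reports:dotenv, passes the job id to the release job as a variable instead of parsing a file (the asset URL reuses the same assumed repo path as above):

build:
  stage: build
  script:
    - echo "Build your app"
    - echo "JOB_ID=${CI_JOB_ID}" >> build.env
  artifacts:
    paths:
      - your_app.exe
    reports:
      dotenv: build.env   # exposes JOB_ID as a variable to later jobs
  rules:
    - if: $CI_COMMIT_TAG

release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    name: 'Release $CI_COMMIT_TAG'
    tag_name: '$CI_COMMIT_TAG'
    assets:
      links:
        - name: 'Executable file'
          url: 'https://gitlab.com/some/repo/-/jobs/${JOB_ID}/artifacts/file/your_app.exe'
  rules:
    - if: $CI_COMMIT_TAG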
My CI has two main steps, build and deploy. The result of the build step is an artifact uploaded to a Maven Nexus repository, and currently the manual deploy step just takes the latest artifact from Nexus and deploys it.
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "Build and upload to nexus here"

deploy:
  stage: deploy
  script:
    - echo "Take latest artifact from nexus and deploy"
  when: manual
But to me it doesn't seem to make much sense to always deploy the latest build from every pipeline. Ideally, the deploy step of each pipeline should deploy the artifact that was built by that same pipeline's build task. Otherwise, the deploy step of every pipeline does exactly the same thing, regardless of when it is started.
So I have two questions.
1) How can I make my deploy step deploy the version that was built by this run?
2) If I still want to keep the "deploy latest" functionality, does GitLab support adding a task separate from each pipeline? As I explained, this step doesn't make a lot of sense inside the pipeline; I imagine it living in a separate, specific place.
Not too familiar with Maven and Nexus, but assuming you can name the artifact before you push it, you can include one of the built-in environment variables that indicates which pipeline it came from,
i.e.:
...
Build:
  stage: build
  script:
    - ./buildAsNormal.sh > build$CI_PIPELINE_ID.extension
    - ./pushAsNormal.sh

Deploy:
  stage: deploy
  script:
    - ./deployAsNormal # (but specify the build$CI_PIPELINE_ID.extension file)
There are a lot of CI environment variables you can use that are extremely useful. The full list of them is here. The difference between $CI_PIPELINE_ID and $CI_JOB_ID is that the pipeline id is constant for all jobs in the pipeline, no matter when they execute. That means the pipeline id will be the same even if you run a manual step a week after the automated steps, while the job id is specific to each job.
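A tiny sketch that makes the difference visible in the job logs (both jobs print the same pipeline id but different job ids):

print_ids_build:
  stage: build
  script:
    - echo "pipeline=$CI_PIPELINE_ID job=$CI_JOB_ID"

print_ids_deploy:
  stage: deploy
  script:
    - echo "pipeline=$CI_PIPELINE_ID job=$CI_JOB_ID"   # same pipeline id, new job id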
Regarding your comment, using artifacts: can solve your problem.
You can put the version number in a file and read that file in the next stage:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "1.0.0" > version
    - echo "Build and upload to nexus here"
  artifacts:
    paths:
      - version
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)
    - echo "Take the artifact from nexus using version $VERSION and deploy"
  when: manual
An alternative is to build, push to Nexus, and use artifacts: to pass the result of the build to the deploy job:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "Build and put the result in the out/ directory"
    - echo "Upload the result from out/ to nexus"
  artifacts:
    paths:
      - out/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - echo "Take the artifact from the out/ directory and deploy it"
  when: manual