How to access artifacts in the next stage in GitLab CI/CD

I am trying to build a GitLab CI/CD pipeline for the first time. I have two stages, build and deploy. The job in the build stage produces artifacts, and the job in the deploy stage then uploads those artifacts to AWS S3. Both jobs use the same runner but different Docker images.
default:
  tags:
    - dev-runner

stages:
  - build
  - deploy

build-job:
  image: node:14
  stage: build
  script:
    - npm install
    - npm run build:prod
  artifacts:
    paths:
      - deploy/build.zip

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  script:
    - aws s3 cp deploy/build.zip s3://mys3bucket
The build-job successfully creates the artifacts. The GitLab documentation says artifacts will be automatically downloaded and available in the next stage; however, it does not specify where and how these artifacts can be consumed in the next stage.
Question
In the deploy-job, will the artifacts be available at the same location, i.e. deploy/build.zip?

The artifacts should be available to the second job in the same location where the first job saved them using the artifacts directive: they are extracted into the job's working directory, so deploy/build.zip in this case.
I think this question already has an answer on the GitLab forum:
https://forum.gitlab.com/t/access-artifact-in-next-task-to-deploy/9295
Maybe you need to make sure the jobs run in the correct order using the dependencies directive, which is also mentioned in the forum discussion accessible via the link above.
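For illustration, a minimal sketch of the deploy job declaring that dependency explicitly (this reuses the job names from the question; on newer GitLab versions, needs: also downloads artifacts and additionally controls job ordering):

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  dependencies:
    - build-job   # download only build-job's artifacts into this job
  script:
    - aws s3 cp deploy/build.zip s3://mys3bucket   # same relative path build-job saved it under

With dependencies: (or needs:), the runner restores deploy/build.zip into the working directory before script: runs.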

Related

Is there a way to download a file (output by a CI job) from the browser in GitLab CI?

I can run a script to build my project in my gitlab-ci.yaml config. Is there a way to let the pipeline expose the file output by the build script so that I can download it from the browser and find it on my computer?
You could create a job artifact from the output of commands in your pipeline (like your build script), assuming you have redirected said output to a file.
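For example, a minimal sketch (the script name and output file are placeholders):

build:
  stage: build
  script:
    - ./build.sh > build-output.log   # redirect the build output to a file
  artifacts:
    paths:
      - build-output.log              # expose that file as a downloadable artifact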
You can then download job artifacts by using the GitLab UI or the API.
Using the GitLab CLI glab:
glab ci artifact <refName> <jobName> [flags]
# example
glab ci artifact main build
This is my CI config for downloading build files from the GitLab UI via the artifacts field.
stages:
  - build

.build-base:
  stage: build
  script:
    - cd dev/${APP_PATH}
    - yarn ${BUILD_SCRIPT}
  when: manual
  after_script:
    - mv dev/${APP_PATH}/dist ./${APP_OUTPUT_NAME}
  artifacts:
    paths:
      - ./${APP_OUTPUT_NAME}
    expire_in: 2 week
    name: ${APP_OUTPUT_NAME}_${CI_JOB_ID}
  tags:
    - kube-runner

order_workbench_sit_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build:sit'
    APP_OUTPUT_NAME: 'order_workbench_sit'

order_workbench_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build'
    APP_OUTPUT_NAME: 'order_workbench'

Changing GitLab SAST JSON report names

Issue
Note: My CI contains a code complexity checker which can be ignored. This question is mainly focused on SAST.
I have recently set up a SAST pipeline for one of my GitLab projects. The GitLab CE and GitLab Runner instances are self-hosted. When the SAST scan completes, the downloaded artifacts / JSON reports all have the same name, gl-sast-report.json. In this example, the artifacts bandit-sast and semgrep-sast both produce gl-sast-report.json when downloaded.
SAST configuration
stages:
  - CodeScan
  - CodeComplexity

sast:
  stage: CodeScan
  tags:
    - sast

code_quality:
  stage: CodeComplexity
  artifacts:
    paths: [gl-code-quality-report.json]
  services:
  tags:
    - cq-sans-dind

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
Completed SAST results
End Goal
If possible, how could I change the names of the artifacts for bandit-sast and semgrep-sast?
If question one is possible, does this mean I have to manually specify each analyser for various projects? Currently, based on my .gitlab-ci.yml, the SAST analysers are automatically detected based on the project language.
If you're using the pre-built SAST images, this isn't possible, even if you run the docker command manually like so:
docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
When using these SAST (and DAST) images, the report file will always have the name given in the docs. However, if you run the docker command manually as above, you can rename the file before it's uploaded as an artifact; it will still have the same JSON structure/content.
Run License Scanning Analyzer:
  stage: sast
  script:
    - docker run --volume "$PWD":/code --env=LM_REPORT_VERSION="2.1" --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/license-finder:latest
    - mv gl-license-scanning-report.json license-scanning-report.json
  artifacts:
    reports:
      license_scanning: license-scanning-report.json
The only way to change the JSON structure/content is to implement the SAST tests manually without using the provided images at all. You can see all the available SAST analyzers in this GitLab repo.
For the License Finder analyzer as an example, the Dockerfile says the entrypoint for the image is the run.sh script.
Line 20 of run.sh sets the name of the file to 'gl-license-scanning-report.json'; since we can already change the name by running the docker image manually, that alone doesn't really help. However, the actual analysis comes from the scan_project function, which you could replicate.
So while it is possible to manually run these analyzers without the pre-built images, it will be much more difficult to get them to work.
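By analogy with the license-finder example above, a hedged sketch for renaming the bandit report (the bandit image path is an assumption following the same registry pattern; the report keeps its standard structure):

bandit-sast-custom:
  stage: CodeScan
  script:
    - docker run --volume "$PWD":/code --env=CI_PROJECT_DIR=/code registry.gitlab.com/gitlab-org/security-products/analyzers/bandit:latest
    - mv gl-sast-report.json bandit-sast-report.json   # rename before upload
  artifacts:
    reports:
      sast: bandit-sast-report.json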

GitLab CI: build for CI and merge request but publish only CI to Pages

I have a .gitlab-ci.yml file which I want to use to run a script for merge request validation. The same script should be used in CI, but only there should the result be published to GitLab Pages. Also, only for CI should the result be cached.
This is a simplified version of the current .gitlab-ci.yml:
pages:
  stage: deploy
  script:
    - mkdir public/
    - touch public/file.txt
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid
(The real-world code is in the fdroid-firefox gitlab repo.)
There are two ways the pipeline can be triggered, and depending on which, I do or do not want to publish to Pages:
1) By merge request validation. In this case, I want to execute the script part, but I don't want to publish or cache the result (otherwise, anyone with permission to create a merge request could overwrite the GitLab Pages content).
2) By CI (which is triggered both after check-in to the master branch and on a schedule). In this case, I want the result to be cached and the GitLab Pages content to be updated.
I already tried splitting up the stages:
stages:
  - build
  - deploy

build_repo:
  stage: build
  script:
    - mkdir public/
    - touch public/file.txt

pages:
  stage: deploy
  script: echo "publish to Gitlab pages"
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid
(Original .gitlab-ci.yml file)
But by doing this, the pages:deploy job failed because it does not have access to the result of the build stage. The pages:deploy job shows an error symbol, and the tooltip says missing pages artifacts. (real world log)
The log says:
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
What am I doing wrong that I don't have access to the result of the build stage?
How can I run the script section in both cases but still deploy to pages only from master branch?
You don't save your public path as artifacts in your build job, and that's why they are missing in the pages job at the next deploy stage.
You have this:
build_repo:
  stage: build
  script:
    - your script
Try to save artifacts in your build job like this:
build_repo:
  stage: build
  script:
    - your script
  artifacts:
    when: always
    paths:
      - public
This way they will be passed to the next stage, deploy, and the pages job will be able to see them.
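Putting the pieces together, the full pipeline from the question would then look like this (assembled purely from the snippets above):

stages:
  - build
  - deploy

build_repo:
  stage: build
  script:
    - mkdir public/
    - touch public/file.txt
  artifacts:
    when: always
    paths:
      - public

pages:
  stage: deploy
  script: echo "publish to Gitlab pages"
  artifacts:
    paths:
      - public
  only:
    - master
  cache:
    paths:
      - fdroid

The build_repo job runs for every pipeline, while the pages job (and with it the cache and the Pages deployment) is restricted to master via only:.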

Deploying a certain build with GitLab

My CI has two main steps, build and deploy. The result of the build step is an artifact uploaded to Maven Nexus, and currently the manual deploy step just takes the latest artifact from Nexus and deploys it.
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and upload to nexus here

deploy:
  stage: deploy
  script:
    - // Take latest artifact from nexus and deploy
  when: manual
But to me it doesn't seem to make much sense to always deploy the latest build from every pipeline. Ideally, the deploy step of each pipeline should deploy the artifact that was built by that same pipeline's build task. Otherwise, the deploy step of each pipeline will do exactly the same thing regardless of when it is started.
So I have two questions.
1) How can I make my deploy step deploy the version that was built by the same pipeline run?
2) If I still want to keep the "deploy latest" functionality, does GitLab support adding a task separate from each pipeline? As I explained, this step doesn't make a lot of sense inside the pipeline; I imagine it living in a separate, specific place.
I'm not too familiar with Maven and Nexus, but assuming you can name the artifact before you push it, you can embed one of the built-in environment variables that indicates which pipeline it's from.
i.e.:
...
Build:
  stage: build
  script:
    - ./buildAsNormal.sh > build$CI_PIPELINE_ID.extension
    - ./pushAsNormal.sh

Deploy:
  stage: deploy
  script:
    - ./deployAsNormal # (but specify the build$CI_PIPELINE_ID.extension file)
There are a lot of CI environment variables you can use that are extremely useful; the full list is here. The difference between $CI_PIPELINE_ID and $CI_JOB_ID is that the pipeline ID is constant for all jobs in the pipeline, no matter when they execute, so the pipeline ID will be the same even if you run a manual step a week after the automated steps. The job ID is specific to each job.
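A trivial sketch to see the difference (a hypothetical job that just echoes both variables):

show-ids:
  script:
    - echo "pipeline=$CI_PIPELINE_ID job=$CI_JOB_ID"   # pipeline ID is shared across the pipeline; job ID is unique per job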
Regarding your comment, the usage of artifacts: can solve your problem.
You can put the version number in a file and read the file in the next stage:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "1.0.0" > version
    - // Build and upload to nexus here
  artifacts:
    paths:
      - version
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)
    - // Take the artifact from nexus using the VERSION variable and deploy
  when: manual
An alternative is to build, push to Nexus, and use artifacts: to pass the result of the build to the deploy job:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and put the result in the out/ directory
    - // Upload the result from out/ to nexus
  artifacts:
    paths:
      - out/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - // Take the artifact from the out/ directory and deploy it
  when: manual

Is there a way to upload GitLab CI artifacts to an OpenShift container?

I have a GitLab CI pipeline which builds a few artifacts. For example:
train:job:
  stage: train
  script: python script.py
  artifacts:
    paths:
      - artifact.csv
    expire_in: 1 week
Now I deploy the repository to OpenShift using the following step in my GitLab pipeline. This will pull my GitLab repo into OpenShift, but it does not include the artifacts from the 'testing' stage.
deploy:app:
  stage: deploy
  image: ayufan/openshift-cli
  before_script:
    - oc login $OPENSHIFT_DOMAIN --token=$OPENSHIFT_TOKEN
  script:
    - oc start-build my_app
How can I let OpenShift use this repository, plus the artifacts created in my pipeline?
In general OpenShift build pipelines rely on the s2i build process to build applications.
The best practice for reusing artifacts between s2i builds would either be through using incremental builds or chaining multiple BuildConfig definitions (the output image of one BuildConfig being fed as source image into another BuildConfig) together via the spec.source.images or spec.source.git configuration in the BuildConfig definition.
In your case, since you are using a GitLab CI pipeline to generate your artifacts instead of the OpenShift build process, you really only need to combine your artifacts with your source code and the runtime container image.
To do this you might create a builder container image that pulls those artifacts down from an external source during the assemble phase (via curl, wget, etc) of the s2i workflow. You could then configure your BuildConfig to point at your source repository. At build time the BuildConfig will pull down your source code and the assemble script will pull down your artifacts.
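As a hedged sketch of that assemble-phase download, assuming a custom s2i assemble script and GitLab's job-artifacts API (the host, project ID, ref, and token variable are placeholders):

#!/bin/bash
# .s2i/bin/assemble — custom assemble script (sketch)
# Fetch the artifact produced by the GitLab pipeline's train:job
curl --fail --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --output artifact.csv \
  "https://gitlab.example.com/api/v4/projects/<project-id>/jobs/artifacts/master/raw/artifact.csv?job=train:job"
# Hand off to the builder image's default assemble logic
# (path assumes the common s2i convention for builder images)
/usr/libexec/s2i/assemble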
