GitLab CI build not uploading artifacts of Codeception

I have problems with GitLab not uploading the artifacts generated by codeception when a test fails. It only uploads the .gitignore in the _output folder.
This is the relevant part from my .gitlab-ci.yml:
  script:
    - ./src/Vendor/codeception/codeception/codecept run acceptance || true
    - ls -a tests/_output
  artifacts:
    paths:
      - "tests/_output"
    expire_in: 20 days
    when: always
Interestingly, I can browse the artifacts (in this case only the .gitignore file) before the job has even finished. The logs of my runner prove that the artifacts do indeed exist in the directory tests/_output (shortened):
$ ls -a tests/_output
.
..
.gitignore
commentsCest.answerCommentTest.fail.html
commentsCest.answerCommentTest.fail.png
commentsCest.normalCommentTest.fail.html
commentsCest.normalCommentTest.fail.png
failed
Uploading artifacts...
tests/_output: found 2 matching files
Uploading artifacts to coordinator... ok id=123456789 responseStatus=201 Created token=abcdefghij
Job succeeded
What am I doing wrong?

I figured out a workaround:
The gitlab-runner only properly uploads files that are inside the project directory.
To get the artifacts, copy all files to ${CI_PROJECT_DIR}:
codeception_tests:
  stage: <your stage-name>
  image: <your image>
  script:
    - ...
  after_script:
    - mkdir ${CI_PROJECT_DIR}/artifacts
    - mkdir ${CI_PROJECT_DIR}/artifacts/codecept
    - cp tests/_output ${CI_PROJECT_DIR}/artifacts/codecept -R
  artifacts:
    paths:
      - ${CI_PROJECT_DIR}/artifacts/
    expire_in: 5 days
    when: always

Related

How to move files from one to second directories on branch during Pipeline run? Gitlab

I have created a Docs folder and it should contain report files.
I'm trying to move the content from the temporarily created Allure folder to the Docs folder, and then copy everything from Docs to the public folder, so that I get access to Pages, where the Allure report will be located. I'm doing this instead of simply copying the files from the allure folder to the public folder in order to keep the history of previous runs. Maybe there is a better way to do it? I'd like to store old reports for some time X (for example 2 days, to be able to see in Allure what was wrong if there are problems) and then delete the old ones, without deleting the latest ones that have not reached the "deleting point". So, here is my yml file:
stages:
  - testing
  - deploy

docker_job:
  stage: testing
  tags:
    - docker
  image: atools/chrome-headless:java11-node14-latest
  before_script:
    - npm ci
    - npx playwright install
    - npm install allure-commandline --save-dev
  script: #||true
    - npx playwright test
  after_script:
    - npx allure generate allure-results
  rules:
    - when: always
  allow_failure: true
  artifacts:
    when: always
    paths:
      - ./allure-report
    expire_in: 1 day

pages:
  stage: deploy
  script:
    - mkdir public
    - mv ./allure-report/* Docs
    - cp -R ./Docs/* public
  artifacts:
    paths:
      - public
  rules:
    - when: always
Everything runs, but it doesn't work: mv ./allure-report/* Docs and cp -R ./Docs/* public do nothing, or I just can't see any effect. Please help me solve this problem correctly.
Maybe there are obvious holes in the logic; I have tried a lot of variants, but none of them work.
Can it be done my way at all?
Okay, so I have done it by "logging in" with my git config email/name and then, using an auth token, pushing these artifacts to the Docs folder. It looks like this:
stages:
  - testing
  - deploy

docker_job:
  stage: testing
  tags:
    - docker
  image: atools/chrome-headless:java11-node14-latest
  before_script:
    - npm ci
    - npx playwright install
    - npm install allure-commandline --save-dev
  script: #||true
    - npx playwright test
  after_script:
    - npx allure generate allure-results
  rules:
    - when: always
  allow_failure: true
  artifacts:
    when: always
    paths:
      - ./allure-report
    expire_in: 15 mins

pages:
  stage: deploy
  script:
    - cp -r -u ./allure-report/* Docs
    - cp -R ./Docs/* public
    - git config --global user.email "mail"
    - git config --global user.name "name"
    - git remote set-url origin https://gitlab-ci-token:${token}@gitlab.com/proj_link
    - git checkout main
    - git add Docs
    - git commit -m "assets"
    - git push
  artifacts:
    paths:
      - public
  rules:
    - when: always

Download artifacts: No artifacts found

Uploading artifacts for successful job
00:00
Uploading artifacts...
WARNING: sitespeed-results/: no matching files
ERROR: No files to upload
Uploading artifacts...
WARNING: browser-performance.json: no matching files
ERROR: No files to upload
Cleaning up project directory and file based variables
00:01
Job succeeded
gitlab-ci.yaml example:
include:
  template: Verify/Browser-Performance.gitlab-ci.yml

browser_performance:
  variables:
    URL: https://example.com
    DEGRADATION_THRESHOLD: 5
This error usually happens when no matching files are found. Either move the files to the main directory, or specify the correct path to the files and folders. No matter where your script ends up, the GitLab runner always searches for these files relative to the main/root directory of the project.
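For example, a minimal sketch of a job whose artifact paths resolve correctly (the job name and the test command are placeholders, not part of the Browser-Performance template):

performance-job:
  script:
    # write results somewhere under the project directory,
    # not to /tmp or another absolute path outside it
    - mkdir -p sitespeed-results
    - ./run-performance-tests.sh --output sitespeed-results  # hypothetical command
  artifacts:
    when: always
    paths:
      # resolved relative to $CI_PROJECT_DIR, regardless of the
      # working directory the script ended in
      - sitespeed-results/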
Create a new folder inside $BITBUCKET_CLONE_DIR, then copy your desired artifacts into that new folder.
After that, use the new folder for your artifacts.
See the sample step below:
- step:
    name: XXXX
    image:
      name: YYYY
      username: $xxxy
      password: $yyyy
    script:
      - mvn clean install --file $BITBUCKET_CLONE_DIR/xxxx/pom.xml
      - mkdir -p $BITBUCKET_CLONE_DIR/dist
      - cp -R xxx/target/* $BITBUCKET_CLONE_DIR/dist
      - ls -lah $BITBUCKET_CLONE_DIR/dist
    artifacts:
      - dist/**
I hope this works
Check the path you have provided in the artifacts section:
artifacts:
  paths:
    - $CI_PROJECT_DIR
  expire_in: 1 week
If you are selecting only files with a specific extension, as below, then the GitLab runner will not be able to find files with any other extension:
artifacts:
  paths:
    - $CI_PROJECT_DIR/*.xlsx
  expire_in: 1 week
So be careful with what you specify in "paths".
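If you need several extensions, you can list one pattern per extension, and a recursive glob also catches files in subdirectories; a sketch (the reports/ directory is a made-up example):

artifacts:
  paths:
    # recursive glob: also matches .xlsx files in subdirectories
    - "**/*.xlsx"
    # further patterns can simply be listed alongside
    - reports/*.json
  expire_in: 1 week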

Gitlab CI SAST access to gl-sast-report.json artifact in subsequent stage

I want to use the gl-sast-report.json file created during the SAST process in a subsequent stage of my CI, but it is not found.
ci.yml
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - .post

sast:
  rules:
    - if: $CI_COMMIT_TAG

send-reports:
  stage: .post
  dependencies:
    - sast
  script:
    - ls
    - echo "in post stage"
    - cat gl-sast-report.json
output:
Running with gitlab-runner 13.2.1 (efa30e33)
on blah blah blah
Preparing the "docker" executor
00:01
.
.
.
Preparing environment
00:01
Running on runner-zqk9bcef-project-4296-concurrent-0 via ff93ba7b6ee2...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in blah blah
Checking out 9c2edf67 as 39-test-dso...
Removing gl-sast-report.json
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ ls
<stuff in the repo>
$ echo "in .post stage"
in post stage
$ cat gl-sast-report.json
cat: can't open 'gl-sast-report.json': No such file or directory
ERROR: Job failed: exit code 1
You can see the line Removing gl-sast-report.json which I assume is the issue.
I don't see that anywhere in the SAST.gitlab-ci.yml at https://gitlab.com/gitlab-org/gitlab/-/blob/v11.11.0-rc2-ee/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml#L33-45
Any ideas on how to use this artifact in the next stage of my CI pipeline?
UPDATE:
I tried out k33g_org's suggestion below, but to no avail. It seems this is due to limitations in the sast template specifically. I ran the following test:
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test
  - upload

something:
  stage: test
  script:
    - echo "in something"
    - echo "this is something" > something.txt
  artifacts:
    paths: [something.txt]

sast:
  before_script:
    - echo "hello from before sast"
    - echo "this is in the file" > test.txt
  artifacts:
    reports:
      sast: gl-sast-report.json
    paths: [gl-sast-report.json, test.txt]

send-reports:
  stage: upload
  dependencies:
    - sast
    - something
  before_script:
    - echo "This is the send-reports before_script"
  script:
    - echo "in send-reports job"
    - ls
  artifacts:
    reports:
      sast: gl-sast-report.json
Three changes:
Updated code with k33g_org's suggestion
Created another artifact in the sast job (to see if it would pass through to send-reports job)
Created a new job (something) where I created a new something.txt artifact (to see if it would pass through to send-reports job)
Output:
Preparing environment
00:01
Running on runner-zqx7qoq-project-4296-concurrent-0 via e3fe672984b4...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /<repo>
Checking out 26501c44 as <branch_name>...
Removing something.txt
Skipping Git submodules setup
Downloading artifacts
00:00
Downloading artifacts for something (64950)...
Downloading artifacts from coordinator... ok id=64950
responseStatus=200 OK token=zoJwysdq
Executing "step_script" stage of the job script
00:01
$ echo "This is the send-reports before_script"
This is the send-reports before_script
$ echo "in send-reports job"
in send-reports job
$ ls
...<other stuff in repo>
something.txt
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: gl-sast-report.json: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Notes:
something.txt made it to this job
none of the artifacts from the sast job make it to subsequent jobs
I can only conclude that there is something internal to the sast template that does not allow artifacts to propagate to subsequent jobs.
In the first job (sast), add this:
artifacts:
  paths: [gl-sast-report.json]
  reports:
    sast: gl-sast-report.json
and in the next job (send-reports), add this:
artifacts:
  reports:
    sast: gl-sast-report.json
Then you should be able to access the report in the next job (send-reports)
Instead of referencing the gl-sast-report.json artifact as a SAST report, reference it as a regular artifact.
So what you should do is declare the artifact this way:
artifacts:
  paths:
    - 'gl-sast-report.json'
instead of:
artifacts:
  reports:
    sast: gl-sast-report.json
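If you also want to keep the security dashboard integration, a job can in general declare the same file both ways, though as the update in the question shows, the sast template's own jobs may still refuse to propagate it. A sketch:

sast:
  artifacts:
    # downloadable by later jobs via dependencies/needs
    paths:
      - gl-sast-report.json
    # still registered as a security report for the GitLab UI
    reports:
      sast: gl-sast-report.json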
I spent a full day banging my head against this, trying to access the gl-sast-report.json file generated by the built-in IaC scanner. Here's what ultimately worked for me:
First and foremost, DO NOT use this code suggested by GitLab's documentation:
include:
- template: Security/SAST-IaC.latest.gitlab-ci.yml
The above code works fine if all you want to do is scan for IaC vulnerabilities and download the report from the GitLab UI later. But who wants to do that?! I want to access the report in my next job and fail the pipeline if there are medium+ vulnerabilities in the report!
If that's what you want to do, you'll need to add all of the code from the official GitLab IaC scanner template to your pipeline, and then make some modifications. You can find the latest template code here, or use my example below.
Modified template:
# Read more about this feature here: https://docs.gitlab.com/ee/user/application_security/iac_scanning/
#
# Configure SAST with CI/CD variables (https://docs.gitlab.com/ee/ci/variables/index.html).
# List of available variables: https://docs.gitlab.com/ee/user/application_security/iac_scanning/index.html

variables:
  # Setting this variable will affect all Security templates
  # (SAST, Dependency Scanning, ...)
  TEMPLATE_REGISTRY_HOST: 'registry.gitlab.com'
  SECURE_ANALYZERS_PREFIX: "$TEMPLATE_REGISTRY_HOST/security-products"
  SAST_IMAGE_SUFFIX: ""
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"

iac-sast:
  stage: test
  artifacts:
    name: sast
    paths:
      - gl-sast-report.json
    #reports:
    #  sast: gl-sast-report.json
    when: always
  rules:
    - when: never
  # `rules` must be overridden explicitly by each child job
  # see https://gitlab.com/gitlab-org/gitlab/-/issues/218444
  variables:
    SEARCH_MAX_DEPTH: 4
  allow_failure: true
  script:
    - /analyzer run

kics-iac-sast:
  extends: iac-sast
  image:
    name: "$SAST_ANALYZER_IMAGE"
  variables:
    SAST_ANALYZER_IMAGE_TAG: 3
    SAST_ANALYZER_IMAGE: "$SECURE_ANALYZERS_PREFIX/kics:$SAST_ANALYZER_IMAGE_TAG$SAST_IMAGE_SUFFIX"
  rules:
    - if: $SAST_DISABLED
      when: never
    - if: $SAST_EXCLUDED_ANALYZERS =~ /kics/
      when: never
    - if: $CI_COMMIT_BRANCH

Enforce Compliance:
  stage: Compliance
  before_script:
    - apk add jq
  script:
    - jq -r '.vulnerabilities[] | select(.severity == "Critical") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json > results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "High") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - jq -r '.vulnerabilities[] | select(.severity == "Medium") | (.severity, .message, .location, .identifiers[].url)' gl-sast-report.json >> results.txt
    - chmod u+x check-sast-results.sh
    - ./check-sast-results.sh
You'll also need to make sure to add two stages to your pipeline (if you don't have them already):
stages:
  # add these to whatever other stages you already have
  - test
  - Compliance
Note: it's extremely important that the job trying to access gl-sast-report.json ("Compliance" in this case) is not in the same stage as the SAST scans themselves ("test" in this case). If they are, your job will try to access the report before it exists and fail.
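If you prefer an explicit dependency over stage ordering, GitLab's needs keyword should express the same constraint; a sketch using the job names from the template above:

Enforce Compliance:
  stage: Compliance
  # wait for the scan job and fetch its artifacts explicitly
  needs:
    - job: kics-iac-sast
      artifacts: true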
I'll include my shell script referenced in the pipeline in case you want to use that too:
#!/bin/sh
if [ -s results.txt ]; then
  echo ""
  echo ""
  cat results.txt
  echo ""
  echo "ERROR: SAST SCAN FOUND VULNERABILITIES - FIX ALL VULNERABILITIES TO CONTINUE"
  echo ""
  exit 1
fi
This is a basic script that checks whether the results.txt file has any contents. If it does, it prints the vulnerabilities and exits with code 1 to break the pipeline. If the file is empty, the script exits with code 0 and the pipeline continues (allowing you to deploy your infra). Save the file above as "check-sast-results.sh" in the root directory of your GitLab repository (the same level where ".gitlab-ci.yml" resides).
Hope this helps someone out there.
I've found this issue too; it also impacts some of the other scanners. I raised an issue with GitLab to fix it:
https://gitlab.com/gitlab-org/gitlab/-/issues/345696

Gitlab Pages throw 404 when accessed

I have a group project with the following name (hosted in Gitlab): gitlab.com/my-group/my-project.
I have generated coverage reports during testing and saved them as artifacts using Gitlab CI. Here is Gitlab CI config:
test:
  stage: test
  image: node:11
  before_script:
    - npm install -g yarn
    - yarn
  cache:
    paths:
      - node_modules/
  script:
    - yarn lint
    - yarn test --all --coverage src/
  except:
    - tags
  artifacts:
    paths:
      - coverage/
  coverage: '/Statements\s+\:\s+(\d+\.\d+)%/'

deploy-pages:
  stage: deploy
  dependencies:
    - test
  script:
    - mv coverage/ public/
  artifacts:
    paths:
      - public/
    expire_in: 30 days
  except:
    - tags
When I open the deploy stage job, I can see the artifact being created; all the files are under the /public directory in the artifact.
Now, when I go to https://my-group.gitlab.io/my-project, I keep getting a 404.
I am not sure what step I am missing here. Can someone shed some light on this issue for me?
Thanks!
There are three basic requirements for the project itself:
project must be named group.gitlab.io (if you want it to be the base domain)
job must create artifact in public directory
job must be called pages
Most likely it's the last one that needs fixing since your job is currently called deploy-pages. Simply rename that to pages.
You'll know when you got everything working because under Settings > Pages, it will tell you the link where it's published to.
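Applied to the config in the question, the renamed job would look something like this (a sketch that keeps the original script and artifacts):

pages:
  stage: deploy
  dependencies:
    - test
  script:
    - mv coverage/ public/
  artifacts:
    paths:
      - public/
    expire_in: 30 days
  except:
    - tags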

How to delete artifacts directory on gitlab runner after uploading them to gitlab?

I'm trying to create a gitlab job that shows a metric for test code coverage. To do that, I'm creating a .coverage file and placing it in a directory that uploads artifacts. In a subsequent stage the artifacts are downloaded and consumed by a coverage tool to produce a coverage report. I noticed that the artifacts are not deleted when the gitlab runner finishes the job and are bloating my filesystem. How can I remove the artifacts directory after the artifacts are uploaded?
Here's what we currently have
stages:
  - test
  - build

before_script:
  - export GITLAB_ARTIFACT_DIR="$(pwd)"/artifacts

[...]

some-test:
  stage: test
  script:
    - [some script that puts something in ${GITLAB_ARTIFACT_DIR}]
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/

some-other-test:
  stage: test
  script:
    - [some script that puts something in ${GITLAB_ARTIFACT_DIR}]
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/

[...]

coverage:
  stage: build
  before_script:
  script:
    - [our coverage script]
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/
    when: always

[...]

after_script:
  - sudo rm -rf "${GITLAB_ARTIFACT_DIR}"
According to https://gitlab.com/gitlab-org/gitlab-runner/issues/4146, after_script does not have access to environment variables set in before_script or script.
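One workaround for that visibility problem is to define the path under variables: instead of exporting it in a script, since job-level variables are visible in after_script as well. A sketch (the run-coverage.sh script is a placeholder; note the caveat in the comments):

variables:
  # defined here rather than exported in before_script,
  # so after_script can see it too
  GITLAB_ARTIFACT_DIR: "$CI_PROJECT_DIR/artifacts"

coverage:
  stage: build
  script:
    - ./run-coverage.sh   # hypothetical placeholder
  after_script:
    # caution: the runner uploads artifacts after after_script runs,
    # so anything deleted here will not be uploaded
    - rm -rf "${GITLAB_ARTIFACT_DIR}"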
Alternatively, a solution could be to use cache and artifacts simultaneously.
This config will create a new directory, depending on the job id ($CI_JOB_ID), for each job execution:
stages:
  - test

remote:
  stage: test
  script:
    - mkdir cache-$CI_JOB_ID
    - echo hello > cache-$CI_JOB_ID/foo.txt
  cache:
    key: build-cache
    paths:
      - cache-$CI_JOB_ID/
  artifacts:
    paths:
      - cache-$CI_JOB_ID/foo.txt
    expire_in: 1 week
At the next run, the previous cache-$CI_JOB_ID directory will be removed and replaced by a new one (as $CI_JOB_ID will be different). This keeps only one instance of your cached files until the next job execution.
Note: you need to prefix the directory name with cache-, otherwise the .gitlab-ci.yml is invalid.
