I am trying to set up a coverage badge for a Python project on GitLab.
I was following this question, but it is still not working.
Currently I can see the coverage output on the "CI/CD" > Jobs page, but when I go to Settings > "CI/CD" > General pipelines, the coverage report is still unknown.
This is how I defined the coverage run in my .gitlab-ci.yml file:
    tests:
      stage: test
      only:
        - merge_requests
      script:
        - pip install poetry
        - poetry install
        - poetry run coverage run -m pytest
        - poetry run coverage report
        - poetry run coverage xml
      artifacts:
        paths: [coverage.xml]
Any ideas what might need to be set differently?
Okay, it looks like I also need to add main to the only: section of my .gitlab-ci.yml, and then it works. I was just hoping I could avoid running the tests twice (before the MR to main and after).
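For reference, this is the shape of the job after that change. The coverage: regex and the Cobertura report entry are my additions based on GitLab's coverage settings (the artifacts:reports:coverage_report form needs a reasonably recent GitLab); adjust the regex to whatever your coverage report actually prints:

```yaml
tests:
  stage: test
  only:
    - merge_requests
    - main
  script:
    - pip install poetry
    - poetry install
    - poetry run coverage run -m pytest
    - poetry run coverage report
    - poetry run coverage xml
  # Regex GitLab uses to pull the percentage out of the
  # "TOTAL ... 85%" line printed by `coverage report`.
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```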
I am currently implementing a CI script for a Node project. One of the steps in this process is to read the project's version from the package.json file and set it as an environment variable (used subsequently for various operations).
What I have tried so far is creating the following job:
    version:get-version:
      stage: version
      script: |
        npm pkg get version
        version=$(npm pkg get version)
        echo "Current version: $version"
        echo "PROJECT_VERSION=$version" >> .env
      artifacts:
        reports:
          dotenv: .env
      rules:
        - if: $CI_COMMIT_BRANCH == "master"
        - if: '$CI_COMMIT_BRANCH =~ /^feat.*/'
        - if: $CI_COMMIT_TAG
The problem is that when I run the individual commands using the same Docker image my CI uses (node:lts-alpine3.16), everything works fine. However, when this job runs, it fails with the following error:
    Created fresh repository.
    Checking out a3ac42fd as feat/SIS-540-More-CI-Changes...
    Skipping Git submodules setup
    Executing "step_script" stage of the job script
    $ npm pkg get version # collapsed multi-line command
    Uploading artifacts for failed job
    Uploading artifacts...
    WARNING: .env: no matching files. Ensure that the artifact path is relative to the working directory
    ERROR: No files to upload
    Cleaning up project directory and file based variables
    ERROR: Job failed: command terminated with exit code 243
What is even more interesting is that sometimes the same job succeeds with no problems (at least printing out the version via npm pkg get version). I am honestly stuck and have no idea how to troubleshoot or resolve this.
Any hints or ideas are more than welcome.
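One thing worth trying (just a sketch, not a confirmed fix for the exit code 243): read the version field with node instead of invoking npm a second time, which takes npm's startup out of the equation entirely:

```yaml
version:get-version:
  stage: version
  script: |
    # Sketch: pull the version straight out of package.json with node,
    # instead of a second `npm pkg get version` call.
    version=$(node -p "require('./package.json').version")
    echo "Current version: $version"
    echo "PROJECT_VERSION=$version" >> .env
  artifacts:
    reports:
      dotenv: .env
```

As a bonus, node -p prints the bare version, without the surrounding quotes that npm pkg get version adds.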
I have seen many instances of this question, but nothing that helps me. Apologies if this question is getting boring.
I am just starting out with node.js, Cypress, and GitLab pipelines.
I've cobbled together something that has a simple web app and a few simple tests.
It ran fine the first time, but on subsequent commits it fails at the 'Cypress Tests' step with: "The cypress npm package is installed, but the Cypress binary is missing."
There's a lot more to the log, but I don't know what is relevant.
Here is my .gitlab-ci.yml file:
    cypress tests:
      stage: test
      image: cypress/browsers:node14.17.0-chrome91-ff89
      cache:
        key: package-lock.json
        paths:
          - node_modules
      before_script:
        - npm install
        - npm run dev &
        - ./node_modules/.bin/wait-on http://localhost:3000
      script:
        - npm run cypress
      only:
        - merge_requests
        - master
Could you please help with anything that looks like it might be the culprit?
Or at least help me understand how to read the situation better?
I tried reading the docs as much as I could, but I just can't see the right way.
I'm also a beginner, but I'll try to answer. First, you can add "cypress:open": "cypress open" to the scripts section of package.json. For more details you can watch this video.
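One more thing worth checking (an assumption on my part, since the full log isn't shown): Cypress downloads its binary into ~/.cache/Cypress during npm install, and that folder is not covered by a cache of node_modules. A later job can therefore restore node_modules, skip the download step, and then fail to find the binary. Pointing CYPRESS_CACHE_FOLDER inside the project and caching it too avoids that:

```yaml
cypress tests:
  stage: test
  image: cypress/browsers:node14.17.0-chrome91-ff89
  variables:
    # Keep the Cypress binary inside the project dir so it can be cached.
    CYPRESS_CACHE_FOLDER: '$CI_PROJECT_DIR/.cache/Cypress'
  cache:
    key: package-lock.json
    paths:
      - node_modules
      - .cache/Cypress
```

Alternatively, running npx cypress install in before_script re-downloads the binary whenever it is missing.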
I am trying to run Cypress test scripts on the GitLab CI/CD pipeline, but this error occurred:
Here is my code in the gitlab-ci.yml file:
    image: docker:18.09
    stages:
      - test
    test:
      stage: test
      script:
        - npm install
        - npm run test
Cypress provides some custom Docker images you can use to avoid dependency issues. You can check for them here: cypress docker images.
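For example (the image tag here is just an illustration; pick one matching your Node version from the Cypress docker images page):

```yaml
image: cypress/base:16   # ships Node plus the system libraries Cypress needs
stages:
  - test
test:
  stage: test
  script:
    - npm ci
    - npm run test
```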
I also faced many weird issues getting a Cypress job to run in CI. In order not to reinvent the wheel, you can use the cypress run job I shared in this open-source community hub for CI/CD jobs. It's customizable: you just need to include the job URL in your pipeline and override a few variables, as mentioned in the related documentation of the job.
You should have something like this:
    include:
      - remote: 'https://api.r2devops.io/job/r/r2devops-bot/cypress_run/latest.yaml'

    cypress_run:
      variables:
        BASE_URL: '<your_server_url>'
A subset of pytest tests cannot run on GitLab because they depend on locally running services.
How can I exclude them from GitLab CI pipelines while keeping them for local testing? I am not sure whether the filtering should be done in pytest, tox, or the GitLab config.
Current configuration:
tox.ini
    [testenv]
    commands = pytest {posargs}
gitlab-ci.yml
    build:
      stage: build
      script:
        - tox
The most convenient way of doing that is dynamically, through pytest itself:
    import pytest

    def test_function():
        if not valid_config():
            pytest.xfail("unsupported configuration")
https://docs.pytest.org/en/latest/skipping.html
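Because the tox command above already forwards {posargs} to pytest, you can also keep such tests runnable locally and deselect them only in CI with a marker. The marker name local_only below is my own invention; it would need to be registered (e.g. in pytest's markers config) and applied as @pytest.mark.local_only to the affected tests:

```yaml
build:
  stage: build
  script:
    # Deselect tests marked @pytest.mark.local_only in CI;
    # locally, a plain `tox` still runs everything.
    - tox -- -m "not local_only"
```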
You could also use two different tox.ini files.
While tox looks for a tox.ini by default, you can also pass in a separate ini file, like:
    tox -c tox-ci.ini
Is there some way to add a source code linting step to a GitLab CI enabled project, so that the stage fails if the linter detects critical issues?
Of course it's possible.
Simply add a job that runs the linter of your choice. As long as your linter returns a non-zero exit code when it finds an error, this will work.
Here's an example to add to your .gitlab-ci.yml file:
    lint:
      stage: test
      script:
        - linter src/
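As a concrete sketch (flake8 is my example choice; any linter that exits non-zero on findings will fail the stage the same way):

```yaml
lint:
  stage: test
  image: python:3.11-slim
  script:
    - pip install flake8
    # --select limits failures to syntax errors (E9) and pyflakes
    # findings (F), i.e. the "critical" issues.
    - flake8 --select=E9,F src/
```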