I'm new to GitLab and trying to set up coverage report -m for it. When I run coverage report -m manually, it gives me the report; I just can't figure out what needs to be done to get that displayed in GitLab.
This needs to run as Python 3.6 unit-test code coverage on Linux, for GitLab.
Here is my .yml file:
stages:
  - build
  - test
  - coverage
  - deploy

before_script:
  - python --version
  - pip install -r requirements.txt

unit-tests:
  image:
    name: "python:3.6"
    entrypoint: [""]
  stage: test
  script: python3 -m unittest discover

test:
  image:
    name: "python:3.6"
  stage: test
  script:
    - PYTHONPATH=$(pwd) python3 my_Project_Lib/my_test_scripts/runner.py

coverage:
  stage: test
  script:
    #- docker pull $CONTAINER_TEST_IMAGE
    - python3 -m unittest discover
    #- docker run $CONTAINER_TEST_IMAGE /bin/bash -c "python -m coverage run tests/tests.py && python -m coverage report -m"
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
This runs my unit tests and runner.py fine, but produces no test coverage data.
Below is a working solution for unit-test code coverage.
Here is my .yml file:
stages:
  - build
  - test
  - coverage
  - deploy

before_script:
  - pip install -r requirements.txt

test:
  image:
    name: "python:3.6"
  stage: test
  script:
    - python my_Project_Lib/my_test_scripts/runner.py

unit-tests:
  stage: test
  script:
    - coverage run -m unittest discover   # run the tests under coverage so there is data to report
    - coverage report -m
    - coverage-badge -o coverage.svg      # write the badge SVG to a file instead of stdout
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
This runs my unit tests and runner.py fine, and also runs coverage. You will need the following in requirements.txt:
coverage
coverage-badge
Also add this line to README.md:
[![coverage report](https://gitlab.your_link.com/your_user_name/your_directory/badges/master/coverage.svg)](https://gitlab.your_link.com/your_user_name/your_directory/commits/master)
Your user name and project path can be copied from the web address.
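If you also want GitLab to keep the badge file and show per-line coverage in merge requests, a hedged extension of the unit-tests job might look like this. It assumes coverage.py's xml subcommand and a GitLab release new enough (14.10+) for artifacts:reports:coverage_report; older releases used artifacts:reports:cobertura instead:
unit-tests:
  stage: test
  script:
    - coverage run -m unittest discover
    - coverage report -m
    - coverage xml -o coverage.xml        # Cobertura-format report for GitLab's MR visualization
    - coverage-badge -o coverage.svg
  coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
  artifacts:
    paths:
      - coverage.svg                      # keep the generated badge as a downloadable artifact
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml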
Related
I am trying to use the extends keyword in the .gitlab-ci.yml of a GitLab Python project. It's not working, and I can't figure out why not.
I am using GitLab's CI/CD framework to test my Python project. The project has unit tests written with pytest and the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3.9
WORKDIR /install
COPY . .
RUN pip install --no-cache-dir --upgrade .
EXPOSE 8000
CMD ["uvicorn", "sample_api.api:app"]
When I have the following .gitlab-ci.yml, GitLab's CI/CD system starts the python:3.9.16-slim-buster image and successfully runs the test job.
include:
  - template: Auto-DevOps.gitlab-ci.yml

test:
  stage: test
  image: python:3.9.16-slim-buster
  before_script:
    - pip install .
  script:
    - pytest tests/unit
  services: []
However, the test job fails when I change it to use the extends keyword like so.
include:
  - template: Auto-DevOps.gitlab-ci.yml

.tests:
  stage: test
  image: python:3.9.16-slim-buster
  before_script:
    - pip install .
  services: []

test:
  extends: .tests
  script:
    - pytest tests/unit
The log of the failed test job looks like this.
...
Preparing the "docker" executor
00:11
Using Docker executor with image gliderlabs/herokuish:latest ...
...
Executing "step_script" stage of the job script
00:03
Using docker image sha256:686c154e24a2373406bdf9c8f44904b5dbe5cd36060c61d3da137086389d18d3 for gliderlabs/herokuish:latest with digest gliderlabs/herokuish#sha256:5d5914135908a234c20eec80daaa6a386bfa74293310bc0c79148fe7a7e4a926 ...
$ pip install .
/bin/bash: line 154: pip: command not found
Cleaning up project directory and file based variables
00:02
ERROR: Job failed: exit code 1
It is failing because the default herokuish:latest image is being used instead of python:3.9.16-slim-buster. It appears that the .tests section is never used.
I assume there's something wrong with my syntax, but it seems so simple and I can't figure out what it is.
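One possibility, offered as an assumption rather than a verified diagnosis: the Auto-DevOps template defines its own job named test (the one that runs herokuish), and a same-named job from an include is deep-merged with the local one, while extends only fills in keys the merged job does not already have, so the template's image survives. Renaming the local job so it no longer collides would sidestep that merge; unit-test below is a hypothetical name:
include:
  - template: Auto-DevOps.gitlab-ci.yml

.tests:
  stage: test
  image: python:3.9.16-slim-buster
  before_script:
    - pip install .
  services: []

unit-test:          # hypothetical rename to avoid merging with the template's own `test` job
  extends: .tests
  script:
    - pytest tests/unit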
I'm trying to set up a simple CI/CD environment on GitLab. My code is a Python app that needs an external service for testing. The service is a container that does not require any script to be run. My gitlab-ci.yml file is:
stages:
  - dynamodb
  - testing

build_dynamo:
  stage: dynamodb
  image: amazon/dynamodb-local:latest

unit_tests:
  stage: testing
  image: python:3.10.3
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'
For this config I get an error:
Found errors in your .gitlab-ci.yml: jobs build_dynamo config should implement a script: or a trigger: keyword
How can I solve this or otherwise implement the setup I need?
Using a service solved this:
unit_tests:
  image: python:3.10.3-slim-buster
  services:
    - name: amazon/dynamodb-local:latest
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'
The endpoint for the service is amazon-dynamodb-local:8000, since slashes ("/") in the image name are replaced with dashes ("-").
Reference: https://docs.gitlab.com/ee/ci/services/#accessing-the-services
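A related note from that same docs page: instead of relying on the name mangling, you can pin the service hostname with an alias, e.g.:
unit_tests:
  image: python:3.10.3-slim-buster
  services:
    - name: amazon/dynamodb-local:latest
      alias: dynamodb            # service is then reachable at dynamodb:8000
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'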
I created a specific runner for my GitLab project, and the pipeline is taking too long to run.
It mainly gets stuck in the Cypress test: after "All Specs passed" it will not move forward.
stages:
  - build
  - test

build:
  stage: build
  image: gradle:jdk11
  script:
    - gradle --no-daemon build
  artifacts:
    paths:
      - build/distributions
    expire_in: 1 day
    when: always

junit-test:
  stage: test
  image: gradle:jdk11
  dependencies: []
  script:
    - gradle test
  timeout: 5m

cypress-test:
  stage: test
  image: registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2
  dependencies:
    - build
  script:
    - unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
    - sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan &
    - npm install --save-dev cypress-file-upload
    - npx cypress run --browser chrome
A recommended approach is to try to replicate your script (from your pipeline) locally, from your computer.
That will allow you to check:
- how long those commands take
- whether there is any interactive step, where a command might expect a user entry and wait on stdin
The second point would explain why, in an unattended environment like a pipeline, "it will not move forward".
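If the culprit turns out to be the backgrounded server holding the job's stdio open, a hedged variant of the cypress-test job (assuming nothing else needs to read the server's output) is to detach it fully and cap the job with a timeout:
cypress-test:
  stage: test
  image: registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2
  timeout: 15m    # assumed cap so a hang fails fast instead of blocking the pipeline
  dependencies:
    - build
  script:
    - unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
    # redirect all stdio so the background process cannot keep the shell (and job) alive
    - sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan > server.log 2>&1 < /dev/null &
    - npm install --save-dev cypress-file-upload
    - npx cypress run --browser chrome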
I am using a multi-project pipeline to separate my end-to-end tests from my main application code. My end-to-end tests, if I run the full suite, can take a significant amount of time. I've nicely broken them into various groupings using pytest and its mark feature.
I'd like to be able to run specific scenarios from within the pipeline now by setting each of these different scenarios to when: manual. Unfortunately, as soon as I add this, the child pipeline reports that it has failed to the parent and progress stops. I can manually run each section, as expected, but even then success is not reported back to the parent pipeline.
This is an example of the pipeline. The Integration Tests step has reported a failure, and I’ve manually run the Fast Tests from the downstream pipeline. It has passed, and as the only job in the pipeline the entire downstream pipeline passes. Yet the parent still reports failure so Deploy doesn’t get run.
If I remove when: manual from the downstream pipeline, Integration Tests will run the full test suite, pass, and Deploy will move on as expected.
This is the parent project pipeline.
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Static Analysis
- Local Tests
- Integration Tests
- Deploy
mypy:
stage: Static Analysis
script:
- mypy .
flake8:
stage: Static Analysis
script:
- flake8 --max-line-length=88
pytest-smoke:
stage: Local Tests
script:
- pytest -m smoke
pytest-unit:
stage: Local Tests
script:
- pytest -m unittest
pytest-slow:
stage: Local Tests
script:
- pytest -m slow
pytest-fast:
stage: Local Tests
script:
- pytest -m fast
int-tests:
stage: Integration Tests
trigger:
project: andy/gitlab-integration-testing-integration-tests
strategy: depend
deploy:
stage: Deploy
when: manual
script:
- echo "Deployed!"
The end to end tests pipeline looks like this:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
- export PYTHONPATH=${PYTHONPATH}:./src
- python -c "import sys;print(sys.path)"
stages:
- Fast Tests
pytest-smoke:
stage: Fast Tests
when: manual
script:
- pytest -m smoke
How can I selectively (manually) run downstream jobs and report success back to the parent pipeline? Without when: manual on that last job in the end-to-end (downstream) pipeline, it performs exactly as I want. But in the real-world pipeline I have, I don't want to run the end-to-end tests on everything; usually it is selected scenarios.
I am currently running: GitLab Enterprise Edition 13.2.2-ee
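One workaround sketch, an assumption rather than a confirmed fix: instead of when: manual in the downstream pipeline, pass a variable from the trigger job that the downstream jobs use to select pytest markers, so the child pipeline always runs to completion and reports a real status back through strategy: depend. TEST_MARKERS is a hypothetical variable name here:
# parent pipeline
int-tests:
  stage: Integration Tests
  variables:
    TEST_MARKERS: smoke          # hypothetical knob; set per-branch or when triggering manually
  trigger:
    project: andy/gitlab-integration-testing-integration-tests
    strategy: depend

# downstream pipeline
pytest-smoke:
  stage: Fast Tests
  script:
    - pytest -m "$TEST_MARKERS"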
I am attempting to create a CI/CD pipeline with Travis CI that tests the front-end, tests the back-end, and deploys. The front-end is using Node, the back-end is using Go.
My repository is structured as follows:
- client
- DockerFile
- ...(front-end code)
- server
- DockerFile
- ...(back-end code)
- .travis.yml
Would I be able to utilize the DockerFiles in some fashion to execute tests for both sides of the application and have Travis report their results properly?
I'm not well versed with either tool, so I was hoping to get some input before I dig myself into a hole. I plan on using a combination of Travis stages and docker build/docker run commands. Something like this:
jobs:
  include:
    - stage: test client side
      before_script:
        - cd client
        - docker build ...
      script:
        - docker run image /bin/sh -c "run node tests"
      after_script:
        - cd ..
    - stage: test server side
      before_script:
        - cd server
      script:
        - docker run image /bin/sh -c "run go tests"
      after_script:
        - cd ..
    - stage: deploy
      script: skip
      deploy:
        - provider: s3
          skip_cleanup: true
          on:
            branch: master
This doc page makes it look promising, but the inclusion of language: ruby and script: - bundle exec rake test throws me off. I am not sure why Ruby is required if the tests are run through Docker (at least that's what it looks like).
Update 1
I believe I got it to work correctly with the client side of the application.
Here is what I got:
services:
  - docker

jobs:
  include:
    - stage: test
      before_script:
        - docker pull node:12
      script:
        - docker run --rm -e CI=true -v $(pwd)/client:/src node:12 /bin/sh -c "cd src; npm install; npm test"
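The server side could presumably follow the same pattern with the official Go image. A hedged sketch, assuming the Go module lives in server/; golang:1.16 is picked only for illustration:
services:
  - docker

jobs:
  include:
    - stage: test
      before_script:
        - docker pull node:12
        - docker pull golang:1.16
      script:
        - docker run --rm -e CI=true -v $(pwd)/client:/src node:12 /bin/sh -c "cd src; npm install; npm test"
        # mount the server code and run the Go test suite inside the container
        - docker run --rm -v $(pwd)/server:/src -w /src golang:1.16 go test ./...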