How can I run Cypress tests in parallel inside a GitLab runner? (Without using the Cypress Dashboard) - node.js

I've created around 180 end-to-end tests for a web application, and I can't afford to run them sequentially. I've tried running them in parallel via the Cypress Dashboard, but it provides only 500 test runs per month, after which the parallel runs stop working and my GitLab runner shows an error.
Can anyone suggest how I can run tests in parallel with Cypress and GitLab only?

You will need some hacks/workarounds, but it will work.
First, you need to have the file .gitlab-ci.yml in your root path. In your .gitlab-ci.yml, define how many parallel jobs you want; as an example I will use two parallel jobs that run the same tests in different browsers (Chrome and Firefox):
stages:
  - triggers

smoke-test-chrome:
  stage: triggers
  trigger:
    include: gitlab-ci/smoke-test-chrome/.gitlab-ci.yml

smoke-test-firefox:
  stage: triggers
  trigger:
    include: gitlab-ci/smoke-test-firefox/.gitlab-ci.yml
Now you need to create a .gitlab-ci.yml for each parallel job, each in its own directory, all inside a main directory called gitlab-ci/.
In my example I will create two files with the following path:
gitlab-ci/smoke-test-chrome/.gitlab-ci.yml
gitlab-ci/smoke-test-firefox/.gitlab-ci.yml
In the gitlab-ci/smoke-test-chrome/.gitlab-ci.yml file I will have:
stages:
  - triggers

smoke-test-chrome:
  image: cypress/browsers:node16.14.0-slim-chrome99-ff97
  stage: triggers
  script:
    - npm ci
    - npm run smoke:test-chrome
And in the gitlab-ci/smoke-test-firefox/.gitlab-ci.yml file I will have:
stages:
  - triggers

smoke-test-firefox:
  image: cypress/browsers:node16.14.0-slim-chrome99-ff97
  stage: triggers
  script:
    - npm ci
    - npm run smoke:test-firefox
Last, create the specific scripts in your package.json. In my example I created:
"smoke:test-chrome": "cypress run --browser chrome --spec 'cypress/integration/Signup/SmokeTests.test.ts'",
"smoke:test-firefox": "cypress run --browser firefox --spec 'cypress/integration/Signup/SmokeTests.test.ts'",
In your case, you can change which spec files are called in each job by editing the --spec globs in the package.json scripts and calling those scripts from the .gitlab-ci.yml files you created before, as in the sketch below.
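For example, a minimal sketch that splits the specs between the two jobs instead of splitting by browser (the Checkout folder is a hypothetical example; Signup comes from the scripts above):
"smoke:test-part-one": "cypress run --browser chrome --spec 'cypress/integration/Signup/**/*'",
"smoke:test-part-two": "cypress run --browser chrome --spec 'cypress/integration/Checkout/**/*'",
With each trigger job calling one of these scripts, each runner executes roughly half of the suite, so the wall-clock time drops as long as two runners pick the jobs up concurrently.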

Related

How to set up a scheduled Cypress test on GitHub Actions to run one spec file only?

I have set up a cron job for my Cypress tests; however, I want to run only one specific test.
This is my cron job:
name: Cypress Tests
on: [push]
jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # Install NPM dependencies, cache them correctly
      # and run all Cypress tests
      - name: Cypress run
        uses: cypress-io/github-action@v4.x.x # use the explicit version number
        with:
          build: npm run build
          start: npm start
I just want to run this one spec file. Is there a way to run a single spec file alone?
Try this out. Add a new command in the scripts section of package.json; you can point --spec at either a file or a folder.
"scripts": {
"cypress:spec-run": "cypress run --browser chrome --spec test.spec.js"
},
Run the command in the respective cron job:
npm run cypress:spec-run
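Alternatively, a sketch building on the workflow from the question: the cypress-io/github-action also accepts a spec input, and the workflow can be put on an actual cron schedule instead of push (the cron expression and spec path below are placeholders):
name: Scheduled Cypress spec
on:
  schedule:
    - cron: '0 6 * * *' # placeholder: run daily at 06:00 UTC
jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Cypress run (single spec)
        uses: cypress-io/github-action@v4.x.x # use the explicit version number
        with:
          build: npm run build
          start: npm start
          spec: cypress/integration/test.spec.js # placeholder: path to the one spec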

Use cache with multiple images in GitLab CI/CD

I am building a GitLab CI/CD pipeline that uses two different images.
One of them requires some packages to be installed with npm. In order to avoid installing them multiple times, I've added a cache.
Consider this example:
stages:
  - build
  - quality

cache:
  paths:
    - node_modules/

build-one:
  image: node:latest
  stage: build
  script:
    - npm install <some package>

build-two:
  image: foo_image:latest
  stage: build
  script:
    - some cmd

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>
Having two different Docker images forces me to specify the image inside each job definition. From my tests, the cache isn't actually used, and the command executed by the quality job fails because the package isn't installed.
Is there a solution to this problem?
Many thanks!
Kev'.
There can be two cases:
1. The same runner runs all the jobs. In this case the cache, as you have specified it, should work fine.
2. Different runners run different jobs. Suppose the build job runs on runner 1 and the quality job runs on runner 2; the cache will then only be present on runner 1.
To make use of caching in case 2 you will have to use distributed caching. Runner 1 will then push the cache to S3 when it runs the build job, and runner 2 will pull that cache during the quality job and can use it, as sketched below.
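For reference, a minimal sketch of what distributed caching looks like in each runner's config.toml, assuming an S3-compatible store (the bucket name, region, and credentials below are placeholders):
[runners.cache]
  Type = "s3"
  # Shared = true lets multiple runners read and write the same cache
  Shared = true
  [runners.cache.s3]
    ServerAddress = "s3.amazonaws.com"
    AccessKey = "PLACEHOLDER_ACCESS_KEY"
    SecretKey = "PLACEHOLDER_SECRET_KEY"
    BucketName = "runner-cache"
    BucketLocation = "eu-west-1"
With this in place on both runners, the cache: section in the .gitlab-ci.yml from the question can stay as it is.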

GitLab Runner taking too long to run the pipeline

I created a specific runner for my GitLab project, and it's taking too long to run the pipeline.
It mainly gets stuck on the Cypress test:
after "All Specs passed" it will not move forward.
stages:
  - build
  - test

build:
  stage: build
  image: gradle:jdk11
  script:
    - gradle --no-daemon build
  artifacts:
    paths:
      - build/distributions
    expire_in: 1 day
    when: always

junit-test:
  stage: test
  image: gradle:jdk11
  dependencies: []
  script:
    - gradle test
  timeout: 5m

cypress-test:
  stage: test
  image: registry.gitlab.com/sahajsoft/gurukul2022/csv-parser-srijan:latestSrigin2
  dependencies:
    - build
  script:
    - unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
    - sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan &
    - npm install --save-dev cypress-file-upload
    - npx cypress run --browser chrome
A recommended approach is to replicate your pipeline script locally, from your computer.
That lets you check:
- how long each command takes
- whether there is any interactive step, where a command might expect user input and wait on stdin
The second point would explain why, in an unattended environment like a pipeline, "it will not move forward".
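For example, a rough local replication of the cypress-test job from the question, with the commands copied from its script and time added to measure each step:
time gradle --no-daemon build
time unzip -q build/distributions/csv-parser-srijan-1.0-SNAPSHOT.zip -d build/distributions
sh build/distributions/csv-parser-srijan-1.0-SNAPSHOT/bin/csv-parser-srijan &   # app under test, left running in the background
time npx cypress run --browser chrome
One thing worth checking while replicating: the application server is started with & and never stopped, and a job whose backgrounded process keeps the shell's output streams open may keep waiting even after Cypress prints "All Specs passed".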

Append pytest coverage to a file in GitLab CI artifacts

I am trying to split my pytest runs in a GitLab stage to reduce the time they take. However, I am having difficulty getting the full coverage report. I am unable to use pytest-xdist or pytest-parallel due to the way our database is set up.
Build:
  stage: Build
  script:
    - *setup-build
    - *build
    - touch banana.xml # where I write the code coverage collected by pytest-cov
    - *push
  artifacts:
    paths:
      - banana.xml
    reports:
      cobertura: banana.xml
Unit Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m unit --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build

Validity Test:
  stage: Test
  script:
    - *setup-build
    - *pull
    - docker-compose run $DOCKER_IMG_NAME pytest -m validity --cov-report xml:banana.xml --cov=app --cov-append;
  needs:
    - Build
After these two stages run (Build - 1 job, Test - 2 jobs), I go to download the banana.xml file from GitLab, but there's nothing in it, even though the jobs say Coverage XML written to file banana.xml.
Am I missing something about how to get the total coverage written to an artifact file when splitting up marked tests in a GitLab pipeline stage?
If you want to combine the coverage reports of several different jobs, you will have to add another stage that runs after your tests. Here is a working example for me:
# You need to define the Test stage before the Coverage stage
stages:
  - Test
  - Coverage

# Your first test job
unit_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.unit coverage run --rcfile=.coveragerc -m pytest ./unit
  artifacts:
    paths:
      - .coverage.unit

# Your second test job, which will run in parallel
validity_test:
  stage: Test
  script:
    - COVERAGE_FILE=.coverage.validity coverage run --rcfile=.coveragerc -m pytest ./validity
  artifacts:
    paths:
      - .coverage.validity

# Your coverage job, which combines the coverage data from the two test jobs and generates a report
coverage:
  stage: Coverage
  script:
    - coverage combine --rcfile=.coveragerc
    - coverage report
    - coverage xml -o coverage.xml
  coverage: '/\d+\%\s*$/'
  artifacts:
    reports:
      cobertura: coverage.xml
You also need to create a .coveragerc file in your repository with the following content, to tell coverage.py to use relative file paths, because your tests were run on different GitLab runners, so their full paths don't match:
[run]
relative_files = True
source =
    ./
Note: In your case it's better to use the coverage command directly (so coverage run -m pytest instead of pytest), because it provides more options, and it's what pytest-cov uses under the hood anyway.
The issue in your file is that you start by creating an empty file, try to generate a report from it (which won't produce anything, since the file is empty), then pass it over to both test jobs, each of which overwrites it with its own local coverage report, and then never use it.
You need to do it the other way around, as shown in my example: run the tests first and, in a later stage, collect the coverage data from both test jobs and generate a report from it.

CircleCI: Trigger test post-hook only on certain branches

I have a circle.yml file that looks something like this:
general:
  branches:
    only:
      - master
      - develop
      - /release-[0-9]+(\.[0-9]+)*/

test:
  pre:
    - docker-compose run $SERVICE npm install
  override:
    - docker-compose run $SERVICE npm test
  post:
    - docker-compose run SPECIFIC_COMMAND # this should only run for branches matching /release-[0-9]+(\.[0-9]+)*/
    - docker-compose stop
Unit tests run when merging to master, develop, or /release-[0-9]+(\.[0-9]+)*/.
However, there's a certain command in the test post-hook that I would like to trigger only when merging into /release-[0-9]+(\.[0-9]+)*/. This command must run before the final one, docker-compose stop, which is why I haven't used a deployment block.
It turns out this isn't possible in the test block (unlike in the branches or deployment blocks).
The best way around it is to place the conditional logic in a shell script that checks the $CIRCLE_BRANCH environment variable; the script itself is then triggered unconditionally, as sketched below.
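A minimal sketch of such a script (the file name release-post-hook.sh is a placeholder; SPECIFIC_COMMAND comes from the circle.yml above):
#!/bin/sh
# Approximate the /release-[0-9]+(\.[0-9]+)*/ branch pattern with a shell glob;
# on all other branches the script exits without doing anything.
case "$CIRCLE_BRANCH" in
  release-[0-9]*)
    docker-compose run SPECIFIC_COMMAND
    ;;
  *)
    echo "Skipping release-only step on branch $CIRCLE_BRANCH"
    ;;
esac
The post hook then becomes `- sh release-post-hook.sh` followed by `- docker-compose stop`, so the stop still runs last on every branch.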
