I'm having trouble running Cypress tests in our GitLab CI pipeline. As soon as I start the Quasar development server with yarn quasar dev, it looks as if it is actually starting, but then the job locks up.
It stayed in that state for about an hour before GitLab killed the job.
Here is the pipeline definition. Please note that this is a simplified version for the sake of this question.
.gitlab-ci.yml
---
variables:
  FF_USE_FASTZIP: "true"
  YARN_CACHE_FOLDER: "$CI_PROJECT_DIR/.cache/yarn"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/.cache/Cypress"

.cache_configuration:
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - package.json
      - yarn.lock
      - .cache/
      - node_modules/
      - dist/spa/

stages:
  - test

# Install dependencies and start Quasar dev server
ui-chrome:
  stage: test
  image: cypress/browsers:node16.14.0-chrome99-ff97
  extends: .cache_configuration
  cache:
    policy: pull
  script:
    - yarn install --frozen-lockfile # this works
    - yarn quasar dev # this seems to work but causes the lock up
    - yarn cypress run # this does not work
  rules:
    - if: '$CI_COMMIT_TAG =~ /^\d+.\d+.\d+$/'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    - if: $CI_MERGE_REQUEST_IID && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "develop"
quasar dev is a non-ending process; under normal conditions you wouldn't want the development server to close itself. In this case, though, you want the development server to run while the Cypress tests are running and then shut down after the tests are finished.
The Quasar Cypress App Extension (AE) adds a few package scripts you can use. It uses start-server-and-test under the hood. So, you can either use the scripts the AE provides directly, adapt them, or read up on start-server-and-test and create your own.
Here is the command behind the test:e2e:ci package script:
cross-env NODE_ENV=test start-test "quasar dev" http-get://localhost:8080 "cypress run"
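In package.json terms, that is roughly the following (a sketch based on the command above; the port and script name may differ in your project):
"scripts": {
  "test:e2e:ci": "cross-env NODE_ENV=test start-test \"quasar dev\" http-get://localhost:8080 \"cypress run\""
}
start-test waits until http-get://localhost:8080 responds, then runs cypress run, and finally shuts the dev server down once the tests finish.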
So, you can replace the following in your code:
script:
  - yarn install --frozen-lockfile # this works
  - yarn quasar dev # this seems to work but causes the lock up
  - yarn cypress run # this does not work
with this:
script:
  - yarn install --frozen-lockfile
  - yarn test:e2e:ci
I am using yarn workspaces (yarn version 1.22.19) and I would like to run tests for all workspaces, without stopping even if tests fail for one of the workspaces.
This is so I can collect all failing tests across all workspaces in one run. I'm running the tests on a github action.
I am running the following command:
yarn workspaces run test --passWithNoTests
All workspaces have a test script in the package.json that runs the tests with Jest.
Jest returns an exit code of 1 when tests fail. This causes the yarn workspaces run command to fail and stop. I would like it to continue and fail only after running tests for all workspaces.
How can I make the yarn workspaces run command continue even if tests fail for one of the workspaces, yet still have it fail at the end?
Edit:
I am running bash.
Using workarounds like set -e or || true might help swallow the error, but I do want the command to fail ultimately; I just want it to fail after running all the tests.
For example:
Say I have 3 workspaces - workspace a, workspace b and workspace c. All of them have the following script in their package.json:
"test": "jest"
Say tests pass for workspace a and workspace c, but fail for workspace b. My desired result is that running yarn workspaces run test will run tests for all workspaces (and not stop after tests fail for workspace b) but for it to fail after running all tests.
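In plain bash terms, the behaviour I'm after is roughly this (a hypothetical sketch; the workspace paths are placeholders):
status=0
for ws in packages/a packages/b packages/c; do
  # keep going even if this workspace's tests fail, but remember the failure
  (cd "$ws" && yarn test --passWithNoTests) || status=1
done
exit "$status"  # only fail once every workspace has run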
Here is my GitHub workflow. It just installs dependencies and runs the test script, which runs the command yarn workspaces run test --passWithNoTests.
name: Run All Tests
on:
  pull_request:
    branches: ['develop']
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: yarn install --frozen-lockfile
      - name: Run tests
        run: yarn test
For future reference, this is what I ended up doing:
Add a test:ci script to every workspace's package.json with the following definition:
"test:ci": "jest --ci --reporters=jest-junit --reporters=default --passWithNoTests || true"
This makes it so the command passes even if tests fail.
Use the jest-junit test reporter to output an XML file with the test results (a config sketch follows after this list).
In the action, run yarn workspaces run test which runs tests for all workspaces (packages).
Use dorny/test-reporter@v1 to collect all test result XML files into a nice view.
Set fail-on-error: 'true' for the dorny/test-reporter@v1 action, which will make the step fail if any test failed.
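For the jest-junit part, each workspace needs to write its results file somewhere the '**/jest-junit.xml' glob below will find it. A minimal sketch of that configuration in a workspace's package.json (the outputName value here is an assumption chosen to match the glob; jest-junit's default is junit.xml):
"jest-junit": {
  "outputDirectory": ".",
  "outputName": "jest-junit.xml"
}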
Here's the full GitHub workflow:
# This workflow will do a clean installation of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-nodejs
name: Build & Test
on:
  pull_request:
    branches: ['develop']
# cancel any previous runs that are still in progress if a new commit is pushed
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16.x]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: yarn install --frozen-lockfile
      - name: Run tests
        run: yarn test:ci
      - name: Unlink all symbolic links # so we don't go over the same file twice
        if: success() || failure() # run this step even if previous step failed
        run: find node_modules -type l -exec unlink {} \;
      - name: Test Report
        uses: dorny/test-reporter@v1
        if: success() || failure() # run this step even if previous step failed
        with:
          name: Jest Test Results # Name of the check run which will be created
          path: '**/jest-junit.xml'
          reporter: jest-junit # Format of test results
          list-suites: 'failed'
          list-tests: 'failed'
          fail-on-error: 'true'
This achieves everything I was looking for - running all tests for all workspaces, even if some fail, while still failing the workflow.
I want to run Playwright tests on GitLab.
For this I need to run the frontend and the api, because the tests need them, and I can't figure out how to do this.
job in .gitlab-ci:
all_test_in_one:
  stage: all_test
  image: mcr.microsoft.com/playwright:v1.28.0-focal
  services:
    - name: postgres:latest
      alias: postgres
  interruptible: true
  variables:
    NX_DATABASE_HOST: postgres
    NX_DATABASE_USERNAME: $POSTGRES_USER
    NX_DATABASE_PASSWORD: $POSTGRES_PASSWORD
    NX_TEST_DATABASE_NAME: $POSTGRES_DB
    NX_DATABASE_PORT: 5432
  script:
    - yarn install --cache-folder .yarn-cache
    - yarn nx run-many --all --target=lint
    - yarn nx run-many --all --target=test
    - npx nx serve frontend
    - npx nx serve api
    - npx nx e2e frontend-e2e
  only:
    - main
    - pushes
In GitLab, after the frontend starts, nothing else happens.
Just:
webpack compiled successfully (b083f640d01806f6)
No issues found.
and then:
ERROR: Job failed: execution took longer than 1h0m0s seconds
Maybe I should use parallel: 2, but maybe that's not what I need...
I've been trying to set up a CI/CD pipeline on my repo which runs common tasks like linting, tests, etc. I've successfully set up a GitLab Runner which is working fine. The only part I'm stuck on is the "deploy" part.
When I run my build, how do I actually get the files into my /var/www/xyz folder?
I get that everything is running in a Docker container and I can't just magically copy-paste my files there, but I don't get how I get the files into my actual server directory. I've been searching for days for good docs / explanations, so as always, StackOverflow is my last resort for help.
I'm running on an Ubuntu 20.04 LTS VPS and a SaaS GitLab repository, if that info is needed. This is my .gitlab-ci.yml:
image: timbru31/node-alpine-git
before_script:
  - git fetch origin
stages:
  - setup
  - test
  - build
  - deploy
# All Setup Jobs
Install Dependencies:
  stage: setup
  interruptible: true
  script:
    - npm install
    - npm i -g @nrwl/cli
  artifacts:
    paths:
      - node_modules/
# All Test Jobs
Lint:
  stage: test
  script: npx nx run nx-fun:lint
Tests:
  stage: test
  script: npx nx run nx-fun:test
Deploy:
  stage: build
  script:
    - ls /var/www/
    - npx nx build --prod --output-path=dist/
    - cp -r dist/* /var/www/html/neostax/
  only:
    refs:
      - master
Normally I would ssh into my server, run the build, and then copy the build to the corresponding web-directory.
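In other words, the manual process is roughly this (host name is a placeholder; the build command and target folder are the same as in the Deploy job above):
ssh deploy-user@my-vps
npx nx build --prod --output-path=dist/
cp -r dist/* /var/www/html/neostax/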
TL;DR - How do I get files from a GitLab-Runner to an actual directory on the server?
This is my Actions workflow:
# This is a basic workflow to help you get started with Actions
name: CI/CD Akper Bina Insan - Live
on:
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install Yarn
        run: npm install -g yarn
      - name: Build UI - Akper Bina Insan
        working-directory: ./ui
        run: yarn && yarn build && yarn start
      - name: Build Backend Service - Akper Bina Insan
        working-directory: ./be
        run: yarn && yarn build && yarn prod
After it finishes the first run step, it doesn't continue with the second one, even though the server is ready. I waited for 5 minutes and then stopped it, since I didn't want to waste time.
How can I make it run the second one?
GitHub Actions is a tool for CI/CD, not for hosting (running) your application.
In the given workflow, you build and then run your UI application. The run command is a blocking process, i.e. your workflow will remain blocked because you have started your UI application. You should not do that in workflows.
Use GitHub Actions for build and test, but not for hosting.
Adding a trailing & to the command solved this for my use case:
run: yarn && yarn build && yarn start &
or
run: |
  yarn
  yarn build
  yarn start &
With & at the end of a command, the shell executes it in the background in a subshell, so the first yarn start won't block the next step in the workflow.
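One caveat: a backgrounded server may not be ready yet when the next step starts. A rough sketch of guarding against that (the wait-on package and port 3000 are assumptions here, adjust to whatever your app listens on):
- name: Build UI - Akper Bina Insan
  working-directory: ./ui
  run: |
    yarn
    yarn build
    yarn start &
    # poll the dev server until it responds before letting the workflow move on
    npx wait-on http://localhost:3000 --timeout 120000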
I'm trying to deploy my web app to Firebase Hosting through a Bitbucket pipeline. It's not deploying correctly in the pipeline, but from the console it works with no problem. This is what I do in the console:
npm run build
firebase login:ci
firebase deploy --project $PROJECT_NAME
In the pipeline I'm running this YAML script:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: Install and Build App
        caches:
          - node
        script:
          - npm install
          - CI=false npm run build
        artifacts:
          - build/
    - step:
        name: Deploy App to Firebase
        deployment: production
        script:
          - pipe: atlassian/firebase-deploy:0.6.0
            variables:
              KEY_FILE: $KEY_FILE
              PROJECT_ID: $PROJECT_ID
I think it might have to do with the .firebaserc, but I'm not sure. This is the .firebaserc:
firebase target:apply hosting $PROJECT_ID $DOMAIN
Maybe someone can shed some light on why this isn't working. I'm new to pipeline scripts and I don't really see the issue; it succeeds in deploying to Firebase Hosting, but it's not working at all on the actual domain.
When you run the command firebase login:ci, it should generate a token. You add that token in Bitbucket under Repository Settings > Repository Variables. Whatever name you choose should match the one used in your pipeline; in my example I use FIREBASE_TOKEN_CI. When I commit my changes to Bitbucket, it runs the pipeline, builds, and deploys.
You can always add a script to your package.json so that in your CLI you can run npm run build:prod the same way you would run npm run start, etc., and then use build:prod in the yml.
Here is an example:
"scripts": {
"ng": "ng",
"start": "ng serve",
"build:prod": "ng build --prod=true"
}
The code below is a pipeline yml I use for Ionic/Angular.
NOTE: artifacts is the folder where your build files are generated after running the build. Angular's output folder is called dist, so you might use dist/. My example uses www/**, which is Ionic's build output. You have CI=false in your example; I have not seen or used that, and my project builds and deploys. My second script step is for Cloud Functions:
- cd functions
- npm install
- cd ..
You can omit that part if you don't have functions. I recently had an error about OAuth, and I had to generate a new token with login:ci and replace my token; then deploying worked again. Hope this helps anyone. I had problems at first too and eventually found a working format that I can adapt to other frameworks.
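Going by the note above, if your build output is Angular's dist folder rather than Ionic's www, the artifacts section of the first step would presumably look like this instead:
artifacts:
  - dist/**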
image: node:10.15.3
pipelines:
  default:
    - step:
        name: Install, Build
        caches:
          - node
        deployment: test
        script:
          - npm install
          - npm run build:prod
        artifacts:
          - www/**
    - step:
        name: Deploy to Firebase
        deployment: production
        script:
          - cd functions
          - npm install
          - cd ..
          - pipe: atlassian/firebase-deploy:0.3.4
            variables:
              FIREBASE_TOKEN: '$FIREBASE_TOKEN_CI'