I have a script for a pipeline like this. It has two stages, one for installing dependencies and one for tests.
In the test stage I get an error when the script below is launched:
- node_modules/.bin/ng test --code-coverage --watch=false --browsers=GitlabHeadlessChrome
image: ubuntu:latest

variables:
  OUTPUT_PATH: "$CI_PROJECT_DIR/artifacts"

stages:
  - install
  - test

cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules
  policy: pull

install_dependencies:
  stage: install
  image: node:15.13.0-alpine3.10
  script:
    - npm install
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules

test_verydashboard:
  stage: test
  needs: [ "install_dependencies" ]
  image: node:12-alpine
  before_script:
    - apk add chromium
    - export CHROME_BIN=/usr/bin/chromium-browser
  script:
    - node_modules/.bin/ng test --code-coverage --watch=false --browsers=GitlabHeadlessChrome
  coverage: '/Statements\s+:\s\d+.\d+%/'
  artifacts:
    name: "tests-and-coverage"
    reports:
      coverage_report:
        coverage_format: cobertura
        path: $OUTPUT_PATH/coverage/cobertura-coverage.xml
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
    policy: pull
and on the test stage I get this error.
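Whatever the exact error text, one mismatch stands out in the config above: install_dependencies caches node_modules under a key derived from package-lock.json, while test_verydashboard pulls a cache keyed on yarn.lock, so node_modules may never be restored before ng test runs. A hedged sketch of the test job with its cache key aligned to the install job (it assumes GitlabHeadlessChrome is a custom launcher defined in the project's karma.conf.js):

test_verydashboard:
  stage: test
  needs: [ "install_dependencies" ]
  image: node:12-alpine
  before_script:
    - apk add chromium
    - export CHROME_BIN=/usr/bin/chromium-browser
  script:
    - node_modules/.bin/ng test --code-coverage --watch=false --browsers=GitlabHeadlessChrome
  cache:
    key:
      files:
        - package-lock.json   # match the cache key used by install_dependencies
    paths:
      - node_modules
    policy: pull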
I am trying to use a common configuration for two jobs in my GitLab CI pipeline.
The purpose is to trigger end-to-end testing with Firebase both locally and remotely.
.gitlab-ci.yml:
before_script:
  - npm i
  - npm i -g firebase-tools

cache:
  key:
    files:
      - package-lock.json
  paths:
    - ~/node_modules

.e2e:
  stage: 'test'
  image: cypress/base:16.13.0
  script:
    - npm i -g firebase-tools
    - apt update
    - apt -y install default-jre # For firebase emulators
    - apt -y install default-jdk # For firebase emulators
    - firebase use test --token "$FIREBASE_TOKEN"
    - npm ci

e2e:local:
  extends: .e2e
  script:
    - npm run e2e:local
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm
      - cache/Cypress
  artifacts:
    paths:
      - cypress/videos
      - cypress/screenshots
    expire_in: 5 day
The problem here is that when the e2e:local job runs, it throws an error because it does not find the firebase command, which means the "npm i -g firebase-tools" command from the .e2e job is not run.
Where am I wrong?
Thanks for the help.
When you use extends in a job and then define the script step, the script you write overwrites the one from the job you are extending.
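In other words, with the config above, the merged e2e:local job effectively ends up as something like this (illustrative; keys not overridden are inherited from .e2e):

e2e:local:
  stage: 'test'
  image: cypress/base:16.13.0
  script:
    - npm run e2e:local   # the script list from .e2e is replaced, not appended to
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm
      - cache/Cypress
  artifacts:
    paths:
      - cypress/videos
      - cypress/screenshots
    expire_in: 5 day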
What you can do is use the after_script step in e2e:local:
e2e:local:
  extends: .e2e
  after_script:
    - npm run e2e:local
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm
      - cache/Cypress
  artifacts:
    paths:
      - cypress/videos
      - cypress/screenshots
    expire_in: 5 day
Another solution could be to make use of !reference
https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#reference-tags
e2e:local:
  extends: .e2e
  script:
    - !reference [.e2e, script]
    - npm run e2e:local
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm
      - cache/Cypress
  artifacts:
    paths:
      - cypress/videos
      - cypress/screenshots
    expire_in: 5 day
Currently I have this script in my .gitlab-ci.yml file:
image: node:16

cache:
  paths:
    - .npm
    - cache/Cypress
    - node_modules

stages:
  - build
  - deploy
  - test

install:dependencies:
  stage: build
  script:
    - yarn install
  artifacts:
    paths:
      - node_modules/
  only:
    - merge_requests

test:unit:
  stage: test
  script: yarn test --ci --coverage
  needs: ["install:dependencies"]
  artifacts:
    when: always
    paths:
      - coverage
    expire_in: 30 days
  only:
    - merge_requests

deploy-to-vercel:
  stage: deploy
  image: node:16
  script:
    - npm i -g vercel
    - DEPLOYMENT_URL=$(vercel -t $VERCEL_TOKEN --confirm)
    - echo $DEPLOYMENT_URL > vercel_deployment_url.txt
    - cat vercel_deployment_url.txt
  artifacts:
    when: on_success
    paths:
      - vercel_deployment_url.txt
  only:
    - merge_requests
I need to trigger a pipeline for an environment called playground, but only when a pipeline for the test environment has finished; when a pipeline for master happens, I don't want to mirror it to the playground environment.
Everything is deployed to Vercel, and the project is powered by Next.js.
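One way this could be expressed is with rules, assuming the test environment is driven by a branch named test (the branch name, job name and environment name here are illustrative, not taken from the project):

deploy-to-playground:
  stage: deploy
  image: node:16
  script:
    - npm i -g vercel
    - vercel -t $VERCEL_TOKEN --confirm   # deploy the current commit to the playground Vercel project
  environment:
    name: playground
  rules:
    # run only for the branch that feeds the test environment ...
    - if: '$CI_COMMIT_BRANCH == "test"'
      when: on_success
    # ... and never for master, so master pipelines are not mirrored to playground
    - when: never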
I need to set up a demo server, which is a copy of the production server but pointed at a different API. To accomplish this I want to run two separate build/deploy jobs whenever the main branch is updated, as the demo build (Vue) needs to use different env variables pointing at the demo API (which will also need a dual deploy). Is this possible, and how would I go about it? Here's the existing config:
stages:
  - build
  - deploy
  - test

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

build-main:
  image: node:12
  stage: build
  only:
    - main
  script:
    - yarn global add @quasar/cli
    - rm package-lock.json
    - yarn
    - npm run build:prod
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-main:
  stage: deploy
  only:
    - main
  script:
    - echo $CI_PROJECT_DIR
    - whoami
    - sudo rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console
  tags:
    - deploy

build-beta:
  image: node:12
  stage: build
  only:
    - beta
  script:
    - yarn global add @quasar/cli
    - rm package-lock.json
    - yarn
    - npm run build:beta
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-beta:
  stage: deploy
  only:
    - beta
  script:
    - echo $CI_PROJECT_DIR
    - whoami
    # - sudo /usr/local/bin/rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console.beta
    - sudo rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console.beta
  tags:
    - deploy

build-dev:
  image: node:12
  stage: build
  only:
    - dev
  script:
    - yarn global add @quasar/cli
    - rm package-lock.json
    - yarn
    - npm run build:dev
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-dev:
  stage: deploy
  only:
    - dev
  script:
    - echo $CI_PROJECT_DIR
    - whoami
    - sudo rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console.dev
  tags:
    - deploy

sast:
  stage: test
  artifacts:
    reports:
      sast: gl-sast-report.json
    paths:
      - 'gl-sast-report.json'
You can do something like the following, which will create two build jobs and two deploy jobs that are linked together using needs:
stages:
  - build
  - deploy

build-main:
  stage: build
  script: echo
  only:
    - main
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-main:
  stage: deploy
  script: echo
  only:
    - main
  needs:
    - job: build-main
      artifacts: true

build-demo:
  stage: build
  script: echo
  only:
    - main
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-demo:
  stage: deploy
  script: echo
  only:
    - main
  needs:
    - job: build-demo
      artifacts: true
You might also want to extract common configuration into hidden jobs to simplify your pipeline, for instance:
stages:
  - build
  - deploy

.build:
  stage: build
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

build-main:
  extends: .build
  only:
    - main
  # other job-specific configuration
Also, you might want to improve readability and management of the workflow rules like the following, which centralizes the rules logic:
workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:
        DEPLOY_PROD: "true"
        DEPLOY_DEMO: "true"
        # Some other conditional variables
    - if: $CI_COMMIT_REF_NAME == "dev"
      variables:
        DEPLOY_DEV: "true"
    - when: always
build-main:
  rules:
    - if: $DEPLOY_PROD

# other jobs
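For instance, the demo build could then be gated on its own variable in the same way:

build-demo:
  rules:
    - if: $DEPLOY_DEMO
  # demo-specific build script and variables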
version: 2
jobs:
  test:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "Running tests"
      - run: npm install
      - run: npm test
  build:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "build project"
      - npm install
      - npm run build
workflows:
  version: 2
  test_build:
    jobs:
      - test
      - build:
        requires:
          - test
The above YAML is my config.yml for CircleCI, but I get this error:
Config does not conform to schema: {:workflows {:test_and_build {:jobs [nil {:build (not (map? nil)), :requires (not (map? a-clojure.lang.LazySeq))}]}}}
Another observation: if I run the jobs in parallel, they run without any errors,
that is, if I remove the requires: - test section, as shown below:
workflows:
  version: 2
  test_build:
    jobs:
      - test
      - build
build is a job, just like test, and should be indented the same way test is:
version: 2
jobs:
  test:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "Running tests"
      - run: npm install
      - run: npm test
  build:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "build project"
      - npm install
      - npm run build
workflows:
  version: 2
  test_build:
    jobs:
      - test
      - build:
          requires:
            - test
I tried this one and it worked. The problem with the previous one seemed to be related to versioning: CircleCI cloud is on 2.1 while CircleCI server is on 2. Also, I decided to use the node orb this time.
version: 2.1
orbs:
  node: circleci/node@3.0.1
jobs:
  build:
    working_directory: ~/backend_api
    executor: node/default
    steps:
      - checkout
      - node/install-npm
      - node/install-packages:
          app-dir: ~/backend_api
          cache-path: node_modules
          override-ci-command: npm i
      - persist_to_workspace:
          root: .
          paths:
            - .
  test:
    docker:
      - image: cimg/node:current
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Test
          command: npm test
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build
I have a problem with the scss-lint test in my Node.js project.
When the tests reach scss-lint, they give an error.
How can I make sure the tests do not fail when the lint itself should succeed?
My gitlab-ci.yml:
image: node:wheezy

cache:
  paths:
    - node_modules/

stages:
  - build
  - test

gem_lint:
  image: ruby:latest
  stage: build
  script:
    - gem install scss_lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master

install_dependencies:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master

scss-lint:
  stage: test
  script:
    - npm run lint:scss-lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master
You are doing it wrong.
Each job you define (gem_lint, install_dependencies, and scss-lint) runs in its own context.
So your problem here is that during the last job, it doesn't find the scss_lint gem you installed, because the context has changed.
You should execute all the scripts in the same job, in the same context:
script:
  - gem install scss_lint
  - npm install
  - npm run lint:scss-lint
Of course, for this you need a Docker image that has both npm and gem installed (maybe you can find one on Docker Hub), or you can choose one (for example ruby:latest) and add, as the first script line, one that installs npm:
- curl -sL https://deb.nodesource.com/setup_6.x | bash -
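Put together, a single job along those lines might look like the sketch below, using ruby:latest as suggested and installing Node.js via the NodeSource script quoted above (setup_6.x is simply the version mentioned here):

scss-lint:
  image: ruby:latest
  stage: test
  before_script:
    # install Node.js and npm inside the Ruby image
    - curl -sL https://deb.nodesource.com/setup_6.x | bash -
    - apt-get install -y nodejs
  script:
    - gem install scss_lint
    - npm install
    - npm run lint:scss-lint
  only:
    - dev
  except:
    - master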