GitLab CI node_modules issue

I have two pipelines with two jobs: one job installs my npm packages, and the other bundles and deploys. One pipeline runs on master when I do merge requests, and the other is triggered via webhooks. My merge request pipeline works fine but the webhook pipeline does not, and I have noticed the following difference:
In my merge request pipeline, npm install finds and installs all of the necessary packages/files, but in the webhook-triggered pipeline, even though it uses the same commit and branch, it doesn't seem to install all of the packages.
[Screenshot: job log with all packages installed]
[Screenshot: job log where it seems not all packages are installed]
Is there a reason why this is happening even though both use the same branch and the same commit, but one is a merge request while the other is a pipeline trigger? Is there something I am missing? Thanks.
Below is the job that is failing
production_publish:
  stage: publish
  before_script:
    - npm config set registry https://npm.ef.com/
    - npm config set //npm.ef.com/:_authToken ${EF_NPM_TOKEN}
  script:
    - npm install
    - npm run bundle
    - node ./devops/deployStatic
  only:
    refs:
      - pipelines
      - master
    variables:
      - $NODE_ENV == "production"
  except:
    refs:
      - staging
      - pushes
      - merge_requests
  tags:
    - storyblok
    - prod
Below is the job that is working fine
install:
  stage: install
  script:
    - npm config set registry https://npm.ef.com/
    - npm config set //npm.ef.com/:_authToken ${EF_NPM_TOKEN}
    - npm install
  cache:
    key: ${CI_COMMIT_REF_NAME}-${CI_JOB_NAME}
    paths:
      - node_modules/
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 mos
  only:
    refs:
      - master
  except:
    refs:
      - triggers
      - staging
  tags:
    - storyblok
    - prod
e1_id_production_deploy_next_server:
  stage: deploy
  before_script:
    - export COMMIT_TIME=$(git show -s --format=%ct $CI_COMMIT_SHA)
    - export COMMIT_TAG=$(git show -s --format=%H $CI_COMMIT_TAG)
    - export PRODUCT=$(echo $CI_JOB_NAME | cut -d '_' -f 1)
    - export REGION=$(echo $CI_JOB_NAME | cut -d '_' -f 2)
    - export NODE_ENV=$(echo $CI_JOB_NAME | cut -d '_' -f 3)
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip ./builds/server_build_$COMMIT_TAG.zip -rq * .[^.]* .next/\* -x out/\* -x .git/\*
    - node ./devops/deployServer
  only:
    refs:
      - master
  except:
    - triggers
  tags:
    - storyblok
    - prod
  dependencies:
    - install
Again, the main problem is the npm install step: both jobs run it, but the first one doesn't seem to install all of my packages. Thanks ahead of time for your help.

I figured it out: the main issue was that certain packages were listed in devDependencies.
Our staging environment was working fine because NODE_ENV is not set to production there.
Our master setup, which was split across two jobs, was working because NODE_ENV was only set after npm install (in the second job). In the failing job, NODE_ENV was set to production before npm install, and when NODE_ENV=production, npm install does not install devDependencies.
The fix was to move the packages required at build time from devDependencies to dependencies.
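An alternative (just a sketch of the idea, not what we ended up doing) is to force npm to install devDependencies even when NODE_ENV=production, for example with the --production=false flag (newer npm versions also accept --include=dev):

production_publish:
  stage: publish
  script:
    # NODE_ENV=production normally makes npm skip devDependencies;
    # --production=false installs them anyway
    - npm install --production=false
    - npm run bundle
    - node ./devops/deployStatic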

Related

How to deploy Angular App with GitLab CI/CD

I've been trying to set up a CI/CD pipeline on my repo which runs common tasks like linting, tests, etc. I've successfully set up a GitLab Runner which is working fine. The only part I'm stuck on is the "deploy" part.
When I run my build, how do I actually get the files into my /var/www/xyz folder?
I get that everything is running in a Docker container and I can't just magically copy-paste my files there, but I don't get how I get the files into my actual server directory. I've been searching for days for good docs/explanations, so as always, Stack Overflow is my last resort for help.
I'm running on an Ubuntu 20.04 LTS VPS and a SaaS GitLab repository, if that info is needed. This is my .gitlab-ci.yml:
image: timbru31/node-alpine-git
before_script:
  - git fetch origin
stages:
  - setup
  - test
  - build
  - deploy
# All Setup Jobs
Install Dependencies:
  stage: setup
  interruptible: true
  script:
    - npm install
    - npm i -g @nrwl/cli
  artifacts:
    paths:
      - node_modules/
# All Test Jobs
Lint:
  stage: test
  script: npx nx run nx-fun:lint
Tests:
  stage: test
  script: npx nx run nx-fun:test
Deploy:
  stage: build
  script:
    - ls /var/www/
    - npx nx build --prod --output-path=dist/
    - cp -r dist/* /var/www/html/neostax/
  only:
    refs:
      - master
Normally I would SSH into my server, run the build, and then copy the build to the corresponding web directory.
TL;DR - How do I get files from a GitLab Runner to an actual directory on the server?
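One common approach (only a sketch, assuming the runner can reach the VPS over SSH and that a private key is stored in a CI/CD variable, here called SSH_PRIVATE_KEY, together with a DEPLOY_HOST variable and a deploy user) is to build in the job and copy the output over with rsync:

Deploy:
  stage: deploy
  script:
    # the image above is Alpine-based, so install the SSH client and rsync with apk
    - apk add --no-cache openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -   # hypothetical CI/CD variable holding the deploy key
    - mkdir -p ~/.ssh && ssh-keyscan "$DEPLOY_HOST" >> ~/.ssh/known_hosts
    - npx nx build --prod --output-path=dist/
    # copy the build output into the web root on the VPS ("deploy" is a placeholder user)
    - rsync -az dist/ "deploy@$DEPLOY_HOST:/var/www/html/neostax/"
  only:
    refs:
      - master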

How to write .gitlab-ci.yml to build/deploy with conditions

I am new to CI/CD and GitLab. I have a CI/CD script to test, build and deploy, and I use 2 branches and 2 EC2 instances. My goal is to have a light, non-redundant script that builds and deploys my changes depending on the branch.
Currently my script looks like the one below, but after reading the GitLab docs I saw several conditional keywords like rules, and I'm really lost about how to use conditions in my script to optimise it.
Is there a way to use a condition to run some script when there is a merge from one branch or from another? Thanks in advance!
# image: alpine
image: "python:3.7"
before_script:
  - python --version
stages:
  - test
  - build_staging
  - build_prod
  - deploy_staging
  - deploy_prod
test:
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest Flask_server/test_app.py
  only:
    refs:
      - develop
build_staging:
  stage: build_staging
  image: node
  before_script:
    - npm install -g npm
    - hash -d npm
    - nodejs -v
    - npm -v
  script:
    - cd client
    - npm install
    - npm update
    - npm run build:staging
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
  only:
    refs:
      - develop
build_prod:
  stage: build_prod
  image: node
  before_script:
    - npm install -g npm
    - hash -d npm
    - nodejs -v
    - npm -v
  script:
    - cd client
    - npm install
    - npm update
    - npm run build
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
  only:
    refs:
      - master
deploy_staging:
  stage: deploy_staging
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # GitLab image for AWS CLI commands
  before_script:
    - apt-get update
    # - apt-get -y install python3-pip
    # - apt-get --assume-yes install awscli
    - apt-get --assume-yes install -y shellcheck
  script:
    - shellcheck .ci/deploy_aws_STAGING.sh
    - chmod +x .ci/deploy_aws_STAGING.sh
    - .ci/deploy_aws_STAGING.sh
    - aws s3 cp client/dist/ s3://......./ --recursive
  only:
    refs:
      - develop
deploy_prod:
  stage: deploy_prod
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # GitLab image for AWS CLI commands
  before_script:
    - apt-get update
    # - apt-get -y install python3-pip
    # - apt-get --assume-yes install awscli
    - apt-get --assume-yes install -y shellcheck
  script:
    - shellcheck .ci/deploy_aws_PROD.sh
    - chmod +x .ci/deploy_aws_PROD.sh
    - .ci/deploy_aws_PROD.sh
    - aws s3 cp client/dist/ s3://........../ --recursive
  only:
    refs:
      - master
GitLab introduced rules for include with version 14.2:
include:
  - local: builds.yml
    rules:
      - if: '$INCLUDE_BUILDS == "true"'
A good pattern as your CI/CD grows in complexity is to use the include and extends keywords. For example, you could implement the following in your root-level .gitlab-ci.yml file:
# best practice is to pin to a specific version of node or build your own image to avoid surprises
image: node:12
# stages don't need an environment appended to them; you'll see why in the included file
stages:
  - build
  - test
  - deploy
# cache node modules in between jobs on a per-branch basis like this
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/
# include other definitions
include:
  - './ci-templates/.foo-app-ci.yml'
Then in another folder (or even another repository) you can include other templates. I didn't fully refactor this out for you, but I hope it gives you the idea of not only how to use a rule to trigger your job but also how you can start to make reusable snippets and build on them to reduce the overall complexity. See the YAML comments for guidance on why I did things a certain way. Example .foo-app-ci.yml file:
# this script was repeated so define it once and reference it via anchor
.npm:install: &npm:install
  - npm ci --cache .npm --prefer-offline # to use the cache you'll need to do this before installing dependencies
  - cd client
  - npm install
  - npm update
# you probably want the same rules for each stage. define once and reuse them via anchor
.staging:rules: &staging:rules
  - if: $CI_COMMIT_TAG
    when: never # Do not run this job when a tag is created manually
  - if: $CI_COMMIT_BRANCH == 'develop' # Run this job when commits are pushed or merged to the develop branch
.prod:rules: &prod:rules
  - if: $CI_COMMIT_TAG
    when: never # Do not run this job when a tag is created manually
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch
# many parts of the build stage were repeated; define it once and let's extend from it
.build:template: &build:template
  stage: build
  before_script:
    - *npm:install
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
# many parts of the deploy stage were repeated; define it once and let's extend from it
.deploy:template: &deploy:template
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # GitLab image for AWS CLI commands
  before_script:
    - apt-get update
    - apt-get --assume-yes install -y shellcheck
# here we extend from the build template to run the staging-specific build
build:staging:
  extends: .build:template
  environment: staging
  script:
    - npm run build:staging
  rules: *staging:rules
# this is kind of an oddball... not used to seeing python to test a node app. we're not able to reuse as much here
test:staging:
  image: "python:3.7"
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest Flask_server/test_app.py
  rules: *staging:rules # apply staging rules to trigger the test stage
  needs:
    - job: build:staging # normally we want to build before test; this will trigger test after the build
# here we extend from the build template to run the prod-specific build
build:prod:
  extends: .build:template
  environment: prod
  script:
    - npm run build
  rules: *prod:rules
# same thing for the deploy phases... extend from the deploy template for env-specific requirements
deploy:staging:
  extends: .deploy:template
  script:
    - shellcheck .ci/deploy_aws_STAGING.sh
    - chmod +x .ci/deploy_aws_STAGING.sh
    - .ci/deploy_aws_STAGING.sh
    - aws s3 cp client/dist/ s3://......./ --recursive
  rules: *staging:rules
  needs:
    - job: build:staging
      artifacts: true
deploy:prod:
  extends: .deploy:template
  script:
    - shellcheck .ci/deploy_aws_PROD.sh
    - chmod +x .ci/deploy_aws_PROD.sh
    - .ci/deploy_aws_PROD.sh
    - aws s3 cp client/dist/ s3://........../ --recursive
  rules: *prod:rules
  needs:
    - job: build:prod
      artifacts: true
I would start basic, and as you get comfortable with a working pipeline you can experiment with further enhancements and break things out into more fragments. Hope this helps!

"npm ci" command causing longer build time on gitlab than the "npm i"

Based on suggestions from several articles, we decided to use "npm ci" instead of "npm install" to install the Node dependencies from the package-lock.json file and avoid breaking changes.
But after making this change in the .gitlab-ci.yml file, the builds are taking much longer to install the dependencies: the time has increased from 7 minutes to around 23 minutes.
According to the attached screenshot, most of the extra time is spent removing the existing node_modules folder before installation.
Below are some details from the script file:
image: docker:latest
# When using dind, it's wise to use the overlayfs driver for
# improved performance.
variables:
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
stages:
  - test and build
  # - documentation-server
  - deploy
variables:
  GIT_STRATEGY: clone
  # ELECTRON_SKIP_BINARY_DOWNLOAD: 1
build:library:
  image: trion/ng-cli-karma
  stage: test and build
  only:
    - master
    - /^.*/#library_name
  script:
    - echo _auth=${NPM_TOKEN} >> .npmrc
    - mkdir -p dist/core
    - cd dist/core
    - npm init -y
    - cd ../..
    - ls -al /hugo
    - npm ci
Any help or suggestions to fix this issue would be really appreciated.
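One common remedy (a sketch, not from the original thread): npm ci always deletes node_modules before installing, so caching node_modules mostly adds the time needed to restore and then remove it. Caching npm's download cache instead lets npm ci skip re-downloading packages:

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/    # npm's download cache survives between jobs
    # node_modules/ is intentionally not cached: npm ci removes it anyway
build:library:
  image: trion/ng-cli-karma
  stage: test and build
  script:
    - npm ci --cache .npm --prefer-offline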

Caching npm dependencies in circleci

I'm setting up the CI for an existing Express server project that lives in my repo's backend/core folder. Starting with just basic setup and linting. I was able to get npm install and linting to work but I wanted to cache the dependencies so that it wouldn't take 4 minutes to load for each push.
I used the caching scheme they describe here but it still seemed to run the full install each time. Or if it was using cached dependencies, it installed grpc each time which took a while. Any ideas what I can do?
My config.yml for reference:
# Use the latest 2.1 version of CircleCI pipeline process engine. See: https://circleci.com/docs/2.0/configuration-reference
# default executors
executors:
  core-executor:
    docker:
      - image: 'cimg/base:stable'
commands:
  init_env:
    description: initialize environment
    steps:
      - checkout
      - node/install
      - restore_cache:
          keys:
            # when lock file changes, use increasingly general patterns to restore cache
            - node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}
            - node-v1-{{ .Branch }}-
            - node-v1-
      - run: npm --prefix ./backend/core install
      - save_cache:
          paths:
            - ~/backend/core/usr/local/lib/node_modules # location depends on npm version
          key: node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}
jobs:
  install-node:
    executor: core-executor
    steps:
      - checkout
      - node/install
      - run: node --version
      - run: pwd
      - run: ls -A
      - run: npm --prefix ./backend/core install
  lint:
    executor: core-executor
    steps:
      - init_env
      - run: pwd
      - run: ls -A
      - run: ls backend
      - run: ls backend/core -A
      - run: npm --prefix ./backend/core run lint
orbs:
  node: circleci/node@4.1.0
version: 2.1
workflows:
  test_my_app:
    jobs:
      # - install-node
      - lint
      #   requires:
      #     - install-node
I think the best thing to do is to use npm ci, which is faster. The best explanation of this is here: https://stackoverflow.com/a/53325242/4410223. Even though it reinstalls every time, it is consistent, so it is better than caching. Although when using it I am unsure what the point of continuing to use a cache in your pipeline is, caching still seems to be recommended with npm ci.
However, the best way to do this is to just use the node orb you already have in your config. A single step of - node/install-packages will do all that work for you. You will be able to replace your restore_cache, npm install and save_cache steps with it. You can even see all the steps it does here: https://circleci.com/developer/orbs/orb/circleci/node#commands-install-packages. Just open the command source and look at the steps on line 71.
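A minimal sketch of what that could look like for this config (assuming the project keeps living in backend/core):

jobs:
  lint:
    executor: core-executor
    steps:
      - checkout
      - node/install
      # install-packages restores the cache, installs dependencies, and saves the cache for you
      - node/install-packages:
          app-dir: ./backend/core
      - run: npm --prefix ./backend/core run lint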

GitLab CI: configure yaml file on NodeJS

I have a problem with the scss-lint test in my Node.js project.
When the tests reach scss-lint, the job fails with an error.
How can I make the tests stop failing so that the lint itself runs successfully?
My gitlab-ci.yml:
image: node:wheezy
cache:
  paths:
    - node_modules/
stages:
  - build
  - test
gem_lint:
  image: ruby:latest
  stage: build
  script:
    - gem install scss_lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master
install_dependencies:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master
scss-lint:
  stage: test
  script:
    - npm run lint:scss-lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master
You are doing it wrong.
Each job you define (gem_lint, install_dependencies, and scss-lint) runs in its own context.
So your problem here is that during the last job, the scss-lint gem you installed isn't found, because the job runs in a different context.
You should execute all the scripts in the same job, in the same context:
script:
  - gem install scss_lint
  - npm install
  - npm run lint:scss-lint
Of course, for this you need a Docker image that has both npm and gem installed (maybe you can find one on Docker Hub), or you can pick one (for example ruby:latest) and add, as the first script line, a command that installs npm:
- curl -sL https://deb.nodesource.com/setup_6.x | bash -
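Put together, a sketch of the combined job could look like this (the NodeSource setup script only configures the apt repository, so nodejs still has to be installed afterwards; the exact versions here are just an example):

scss-lint:
  image: ruby:latest
  stage: test
  before_script:
    # configure the NodeSource apt repository, then install Node.js (which ships with npm)
    - curl -sL https://deb.nodesource.com/setup_6.x | bash -
    - apt-get install -y nodejs
  script:
    - gem install scss_lint
    - npm install
    - npm run lint:scss-lint
  only:
    - dev
  except:
    - master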
