Caching npm dependencies in circleci - node.js

I'm setting up CI for an existing Express server project that lives in my repo's backend/core folder, starting with just basic setup and linting. I was able to get npm install and linting to work, but I wanted to cache the dependencies so that each push doesn't spend about 4 minutes installing them.
I used the caching scheme they describe here, but it still seemed to run the full install each time. Or, if it was using cached dependencies, it reinstalled grpc each time, which took a while. Any ideas what I can do?
My config.yml for reference:
# Use the latest 2.1 version of CircleCI pipeline process engine. See: https://circleci.com/docs/2.0/configuration-reference
# default executors
executors:
  core-executor:
    docker:
      - image: 'cimg/base:stable'

commands:
  init_env:
    description: initialize environment
    steps:
      - checkout
      - node/install
      - restore_cache:
          keys:
            # when lock file changes, use increasingly general patterns to restore cache
            - node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}
            - node-v1-{{ .Branch }}-
            - node-v1-
      - run: npm --prefix ./backend/core install
      - save_cache:
          paths:
            - ~/backend/core/usr/local/lib/node_modules # location depends on npm version
          key: node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}

jobs:
  install-node:
    executor: core-executor
    steps:
      - checkout
      - node/install
      - run: node --version
      - run: pwd
      - run: ls -A
      - run: npm --prefix ./backend/core install
  lint:
    executor: core-executor
    steps:
      - init_env
      - run: pwd
      - run: ls -A
      - run: ls backend
      - run: ls backend/core -A
      - run: npm --prefix ./backend/core run lint

orbs:
  node: circleci/node@4.1.0

version: 2.1

workflows:
  test_my_app:
    jobs:
      #- install-node
      - lint
      #    requires:
      #      - install-node

I think the best thing to do is to use npm ci, which is faster. The best explanation of this is here: https://stackoverflow.com/a/53325242/4410223. Even though it reinstalls every time, it is consistent, which makes it better than relying on caching alone. That said, when using it I am unsure what the point of continuing to use a cache in your pipeline is, but caching still seems to be recommended with npm ci.
However, the best way to do this is to just use the node orb you already have in your config. A single step of - node/install-packages will do all that work for you. It replaces your restore_cache, npm install and save_cache steps. You can even see all the steps it performs here: https://circleci.com/developer/orbs/orb/circleci/node#commands-install-packages. Just open the command source and look at the steps starting at line 71.
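For reference, a minimal sketch of that swap in the config above (npm ci's handling of --prefix has varied between npm versions, so changing into the directory is the safer form):

      - run: cd backend/core && npm ci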
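As a rough sketch of what the lint job could look like with that orb command (assuming app-dir, a parameter of install-packages, is pointed at your backend/core folder):

jobs:
  lint:
    executor: core-executor
    steps:
      - checkout
      - node/install
      - node/install-packages:
          app-dir: ./backend/core
      - run: npm --prefix ./backend/core run lint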

Related

Gitlab CI/CD cache expires and therefore build fails

I have an AWS CDK application in TypeScript and a pretty simple GitLab CI/CD pipeline with 2 stages that takes care of the deployment:
image: node:latest

stages:
  - dependencies
  - deploy

dependencies:
  stage: dependencies
  only:
    refs:
      - master
    changes:
      - package-lock.json
  script:
    - npm install
    - rm -rf node_modules/sharp
    - SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    policy: push

deploy:
  stage: deploy
  only:
    - master
  script:
    - npm run deploy
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    policy: pull
npm run deploy is just a wrapper for the cdk command.
But for some reason, the node_modules cache sometimes (probably) expires: the deploy stage simply cannot fetch it and therefore fails:
Restoring cache
Checking cache for ***-protected...
WARNING: file does not exist
Failed to extract cache
I checked that the cache name is the same as the one built previously in the last pipeline run that included the dependencies stage.
I suppose it happens because this CI/CD often does not run for multiple weeks, since I contribute to that repo rarely. I tried to search for the root cause but failed miserably. I understand that a cache can expire after some time (30 days by default, from what I found), but I would expect the CI/CD to recover from that by running the dependencies stage even though package-lock.json wasn't updated.
So my question is simply: "What am I missing? Is my understanding of caching in GitLab's CI/CD completely wrong? Do I have to turn on some feature switch?"
Basically, my ultimate goal is to skip building node_modules as often as possible, but not to fail on a non-existent cache even if I don't run the pipeline for multiple months.
A cache is only a performance optimization and is not guaranteed to always be available. Your expectation that the cache might have expired is most likely correct, and thus you'll need to have a fallback in your deploy script.
One thing you could do is change your dependencies job to:
Always run
Both push and pull the cache
Short-circuit the job if the cache was found
E.g. something like this:
dependencies:
  stage: dependencies
  only:
    refs:
      - master
    changes:
      - package-lock.json
  script:
    - |
      if [[ -d node_modules ]]; then
        exit 0
      fi
    - npm install
    - rm -rf node_modules/sharp
    - SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
See also this related question.
If you want to avoid spinning up unnecessary jobs, you could also consider merging the dependencies and deploy jobs and taking a similar approach as above in the combined job; a sketch of that follows.
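A rough sketch of such a combined job, reusing the install commands and cache key from the config above:

deploy:
  stage: deploy
  only:
    refs:
      - master
  script:
    - |
      # install only when the cache did not provide node_modules
      if [[ ! -d node_modules ]]; then
        npm install
        rm -rf node_modules/sharp
        SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp
      fi
    - npm run deploy
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules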

"npm ci" command causing longer build time on gitlab than the "npm i"

Based on suggestions from several articles, we decided to use the "npm ci" command instead of "npm install" to install the Node dependencies from the package-lock.json file and avoid breaking changes.
But after making this change in the .gitlab-ci.yml file, the builds are taking much longer to install the dependencies: the time has increased from 7 minutes to around 23 minutes.
As per the attached screenshot, most of the extra time seems to be spent removing the existing node_modules folder before installation.
Below are some details from the script file:
image: docker:latest

# When using dind, it's wise to use the overlayfs driver for
# improved performance.
variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

stages:
  - test and build
  # - documentation-server
  - deploy

variables:
  GIT_STRATEGY: clone
  # ELECTRON_SKIP_BINARY_DOWNLOAD: 1

build:library:
  image: trion/ng-cli-karma
  stage: test and build
  only:
    - master
    - /^.*/#library_name
  script:
    - echo _auth=${NPM_TOKEN} >> .npmrc
    - mkdir -p dist/core
    - cd dist/core
    - npm init -y
    - cd ../..
    - ls -al /hugo
    - npm ci
Any help or suggestions would be really helpful to fix this issue.
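As an aside that is not part of this thread's answers: npm ci deletes any existing node_modules before installing, so caching node_modules gives it little benefit. A commonly suggested alternative is to cache npm's download cache instead, roughly like this sketch:

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/

build:library:
  script:
    # reuse the locally cached download cache instead of node_modules
    - npm ci --cache .npm --prefer-offline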

Gitlab CI node_modules issue

I have two pipelines with two jobs: one job installs my npm packages, and the other bundles and deploys. One pipeline runs on master when I do merge requests, and another pipeline is triggered via webhooks. My merge requests work fine but the webhook does not, and I have noticed the following difference:
In my merge requests, npm install finds and installs all of the necessary packages/files, but in my webhook trigger, even though it's using the same commit and branch, it doesn't seem to be installing all of the packages.
Image with all packages
Image where it seems it's not installing the same number of packages
Is there a reason why this is happening even though they are on the same branch and using the same commit, but one is a merge request while the other is a pipeline trigger? Is there something I am missing? Thanks.
Below is the job that is failing
production_publish:
  stage: publish
  before_script:
    - npm config set registry https://npm.ef.com/
    - npm config set //npm.ef.com/:_authToken ${EF_NPM_TOKEN}
  script:
    - npm install
    - npm run bundle
    - node ./devops/deployStatic
  only:
    refs:
      - pipelines
      - master
    variables:
      - $NODE_ENV == "production"
  except:
    refs:
      - staging
      - pushes
      - merge_requests
  tags:
    - storyblok
    - prod
Below is the job that is working fine
install:
  stage: install
  script:
    - npm config set registry https://npm.ef.com/
    - npm config set //npm.ef.com/:_authToken ${EF_NPM_TOKEN}
    - npm install
  cache:
    key: ${CI_COMMIT_REF_NAME}-${CI_JOB_NAME}
    paths:
      - node_modules/
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 mos
  only:
    refs:
      - master
  except:
    refs:
      - triggers
      - staging
  tags:
    - storyblok
    - prod

e1_id_production_deploy_next_server:
  stage: deploy
  before_script:
    - export COMMIT_TIME=$(git show -s --format=%ct $CI_COMMIT_SHA)
    - export COMMIT_TAG=$(git show -s --format=%H $CI_COMMIT_TAG)
    - export PRODUCT=$(echo $CI_JOB_NAME | cut -d '_' -f 1)
    - export REGION=$(echo $CI_JOB_NAME | cut -d '_' -f 2)
    - export NODE_ENV=$(echo $CI_JOB_NAME | cut -d '_' -f 3)
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip ./builds/server_build_$COMMIT_TAG.zip -rq * .[^.]* .next/\* -x out/\* -x .git/\*
    - node ./devops/deployServer
  only:
    refs:
      - master
  except:
    - triggers
  tags:
    - storyblok
    - prod
  dependencies:
    - install
The main problem, again, is in the npm install in both cases; in the first one it doesn't seem to be installing all of my packages. Thanks ahead of time for your help.
I figured it out; the main issue was that certain packages were set in devDependencies.
Our staging environment was working fine because NODE_ENV is not set to production there.
Our master environment, which was used in 2 jobs, was working in one of them because we were actually setting NODE_ENV after the npm install (in the second job). In the other job, NODE_ENV was being set before npm install, and with NODE_ENV=production, npm install does not install devDependencies.
The fix was to add the packages required from devDependencies as dependencies.
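To illustrate the behaviour described above (generic commands, not this project's config):

# devDependencies are skipped when NODE_ENV=production (or with --production)
NODE_ENV=production npm install   # installs only "dependencies"
npm install                       # installs "dependencies" and "devDependencies"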

Cache files are gone in my GitLab CI pipeline

I'm trying to set up GitLab CI for a mono repository.
For the sake of argument, let's say I want to process 2 JavaScript packages:
app
cli
I have defined 4 stages:
install
test
build
deploy
Because I'm reusing the files from previous steps, I use the GitLab cache.
My configuration looks like this:
stages:
  - install
  - test
  - build
  - deploy

install_app:
  stage: install
  image: node:8.9
  cache:
    policy: push
    paths:
      - app/node_modules
  script:
    - cd app
    - npm install

install_cli:
  stage: install
  image: node:8.9
  cache:
    policy: push
    paths:
      - cli/node_modules
  script:
    - cd cli
    - npm install

test_app:
  image: node:8.9
  cache:
    policy: pull
    paths:
      - app/node_modules
  script:
    - cd app
    - npm test

test_cli:
  image: node:8.9
  cache:
    policy: pull
    paths:
      - cli/node_modules
  script:
    - cd cli
    - npm test

build_app:
  stage: build
  image: node:8.9
  cache:
    paths:
      - app/node_modules
      - app/build
  script:
    - cd app
    - npm run build

deploy_app:
  stage: deploy
  image: registry.gitlab.com/my/gcloud/image
  only:
    - master
  environment:
    name: staging
    url: https://example.com
  cache:
    policy: pull
    paths:
      - app/build
  script:
    - gcloud app deploy app/build/app.yaml
      --verbosity info
      --version master
      --promote
      --stop-previous-version
      --quiet
      --project "$GOOGLE_CLOUD_PROJECT"
The problem is in the test stage. Most of the time the test_app job fails because the app/node_modules directory is missing. Sometimes a retry works, but mostly it does not.
Also, I would like to use two caches for the build_app job. I want to pull app/node_modules and push app/build. I can't find a way to accomplish this. This makes me feel like I don't fully understand how the cache works.
Why are my cache files gone? Do I misunderstand how GitLab CI cache works?
The cache is provided on a best-effort basis, so don't expect that the cache will always be present.
If you have hard dependencies between jobs, use artifacts and dependencies (see the sketch below).
Anyway, if it is just for node_modules, I suggest installing it in every step instead of using artifacts; you will not save much time with artifacts.
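A rough sketch of the artifacts approach for the app package, based on the jobs in the config above (the expire_in value here is an arbitrary choice):

install_app:
  stage: install
  image: node:8.9
  script:
    - cd app
    - npm install
  artifacts:
    paths:
      - app/node_modules
    expire_in: 1 hour

test_app:
  stage: test
  image: node:8.9
  dependencies:
    - install_app
  script:
    - cd app
    - npm test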

Is it possible to use multiple docker images in bitbucket pipeline?

I have this pipeline file to unit-test my project:
image: jameslin/python-test

pipelines:
  default:
    - step:
        script:
          - service mysql start
          - pip install -r requirements/test.txt
          - export DJANGO_CONFIGURATION=Test
          - python manage.py test
but is it possible to switch to another docker image to deploy?
image: jameslin/python-deploy

pipelines:
  default:
    - step:
        script:
          - ansible-playbook deploy
I cannot seem to find any documentation saying either Yes or No.
You can specify an image for each step, like this:
pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.6
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
Finally found it:
https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_stepstep(required)
step (required): Defines a build execution unit. Steps are executed in the order in which they appear in the pipeline. Currently, each pipeline can have only one step (one for the default pipeline and one for each branch). You can override the main Docker image by specifying an image in a step.
I have not found any information saying yes or no either. Since the image can be configured with all the languages and technology you need, I would suggest this method:
Create your docker image with all utilities you need for both default and deployment.
Use the branching method they show in their examples https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_branchesbranches(optional)
Use shell scripts or other scripts to run the specific tasks you need, for example:
image: yourusername/your-image

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for master"
            - chmod +x your-task-configs.sh # necessary to get shell script to run in BB Pipelines
            - ./your-task-configs.sh
    feature/*:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for feature/*"
            - npm install
            - npm install -g grunt-cli
            - npm install grunt --save-dev
            - grunt build
