GitLab cache key: files - file does not exist - node.js

I have a short pipeline. And it constantly fails with not being able to find the cache:
node:
  stage: Install
  cache:
    - key:
        files:
          - package.json
          - package-lock.json
        prefix: node
      paths: [node_modules]
    - key: npm
      paths: [.npm]
  rules:
    - changes:
        - package.json
        - package-lock.json
  script:
    - npm i

mocha:
  stage: Test
  script:
    - npm test
  cache:
    - key:
        files:
          - package.json
          - package-lock.json
        prefix: node
      paths: [ node_modules ]
      policy: pull
This pipeline ran well on Branch 1.
On Branch 2, the node job was skipped, as expected; however, the mocha job failed with
Checking cache for node-313ff968911abee510931abad7ccd29ed21954b5-17-non_protected...
WARNING: file does not exist
Failed to extract cache
This is strange because it should use the cache from the Branch 1 pipeline run.
I use shared runners with merge request pipelines, if that's important.

Even though this is an old question, the answer might save someone else's day when using the cache across different branches. From what I understand, your cache works as expected on your feature branch, which is probably non-protected, and the failure appears when you create a merge request to merge your changes into a protected branch, probably dev/main.
Basically, protected and non-protected branches don't share the cache in GitLab CI by default, as mentioned in the docs:
By default, protected and non-protected branches do not share the cache. However, you can change this behavior.
https://docs.gitlab.com/ee/ci/caching/
Use the same cache for all branches
Introduced in GitLab 15.0.
If you do not want to use cache key names, you can have all branches (protected and unprotected) use the same cache.
The cache separation with cache key names is a security feature and should only be disabled in an environment where all users with Developer role are highly trusted.
To use the same cache for all branches:
On the top bar, select Main menu > Projects and find your project.
On the left sidebar, select Settings > CI/CD.
Expand General pipelines.
Clear the Use separate caches for protected branches checkbox.
Select Save changes.

Related

How to speed up Gitlab CI job with cache and artifacts

I want the GitLab test job for my Rust project to run faster.
Locally it rebuilds pretty fast, but in the GitLab job every build runs as slowly as the first one.
I'm looking for a way to use artifacts or the cache from a previous pipeline to speed up the Rust test and build process.
# .gitlab-ci.yml
stages:
  - test

test:
  stage: test
  image: rust:latest
  script:
    - cargo test
GitLab CI/CD supports caching between CI jobs using the cache keyword in your .gitlab-ci.yml. It can only cache files inside the project directory, so you need to set the environment variable CARGO_HOME if you also want to cache the cargo registry.
You can add a cache setting at the top level to set up a cache for all jobs that don't have a cache setting themselves, and you can add it below a job definition to set up a cache configuration for that specific job.
See the keyword reference for all possible configuration options.
Here is one example configuration that caches the cargo registry and the temporary build files, and configures the clippy job to only read from the cache but not write to it:
stages:
  - test

cache: &global_cache                # Default cache configuration with YAML variable
                                    # `global_cache` pointing to this block
  key: ${CI_COMMIT_REF_SLUG}        # Share cache between all jobs on one branch/tag
  paths:                            # Paths to cache
    - .cargo/bin
    - .cargo/registry/index
    - .cargo/registry/cache
    - target/debug/deps
    - target/debug/build
  policy: pull-push                 # All jobs not set up otherwise pull from
                                    # and push to the cache

variables:
  CARGO_HOME: ${CI_PROJECT_DIR}/.cargo  # Move cargo data into the project
                                        # directory so it can be cached

# ...

test:
  stage: test
  image: rust:latest
  script:
    - cargo test

# only for demonstration, you can remove this block if not needed
clippy:
  stage: test
  image: rust:latest
  script:
    - cargo clippy # ...
  only:
    - master
  needs: []
  cache:
    <<: *global_cache               # Inherit the cache configuration `&global_cache`
    policy: pull                    # But only pull from the cache, don't push changes to it
If you want to use cargo publish from CI, you should then add .cargo to your .gitignore file. Otherwise cargo publish will show an error that there is an uncommitted directory .cargo in your project directory.
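With the CARGO_HOME setting above, the entry would simply be the .cargo directory (a minimal sketch):

# .gitignore
.cargo/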

Gitlab CI/CD cache expires and therefore build fails

I have an AWS CDK application in TypeScript and a pretty simple GitLab CI/CD pipeline with 2 stages, which takes care of the deployment:
image: node:latest

stages:
  - dependencies
  - deploy

dependencies:
  stage: dependencies
  only:
    refs:
      - master
    changes:
      - package-lock.json
  script:
    - npm install
    - rm -rf node_modules/sharp
    - SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    policy: push

deploy:
  stage: deploy
  only:
    - master
  script:
    - npm run deploy
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    policy: pull
npm run deploy is just a wrapper for the cdk command.
But for some reason it sometimes happens that the node_modules cache (probably) expires: the deploy stage is simply not able to fetch it and therefore fails:
Restoring cache
Checking cache for ***-protected...
WARNING: file does not exist
Failed to extract cache
I checked that the cache name is the same as the one built previously in the last pipeline run that included the dependencies stage.
I suppose this happens because this CI/CD often doesn't run for weeks at a time, since I rarely contribute to that repo. I tried to find the root cause but failed miserably. I understand that a cache can expire after some time (30 days by default, from what I found), but I would expect the CI/CD to recover from that by running the dependencies stage even though package-lock.json wasn't updated.
So my question is simply: what am I missing? Is my understanding of caching in GitLab's CI/CD completely wrong? Do I have to turn on some feature switch?
Basically, my ultimate goal is to skip building node_modules as often as possible, but not to fail on a non-existent cache even if I don't run the pipeline for multiple months.
A cache is only a performance optimization, but is not guaranteed to always work. Your expectation that the cache might be expired is most likely correct, and thus you'll need to have a fallback in your deploy script.
One thing you could do is change your dependencies job to:
Always run
Both push and pull the cache
Short-circuit the job if the cache was found
E.g. something like this:
dependencies:
  stage: dependencies
  only:
    refs:
      - master      # run on every pipeline on master, not only when package-lock.json
                    # changes, so an expired cache can be rebuilt
  script:
    - |
      # short-circuit: node_modules was restored from the cache, nothing to do
      if [[ -d node_modules ]]; then
        exit 0
      fi
    - npm install
    - rm -rf node_modules/sharp
    - SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    # default policy is pull-push, so this job both restores and updates the cache
See also this related question.
If you want to avoid spinning up unnecessary jobs, you could also consider merging the dependencies and deploy jobs and taking a similar approach in the combined job, as sketched below.
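A minimal sketch of such a combined job, assuming npm run deploy is still the wrapper for the cdk command mentioned in the question:

deploy:
  stage: deploy
  only:
    refs:
      - master
  script:
    # Reinstall dependencies only if the cache was missing or expired
    - |
      if [[ ! -d node_modules ]]; then
        npm install
        rm -rf node_modules/sharp
        SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm install --arch=x64 --platform=linux --libc=glibc sharp
      fi
    - npm run deploy
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules
    policy: pull-push   # pull the cache if present, push it back in case it had to be rebuilt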

Gitlab-CI avoid unnecessary rebuilds of react portion of project

I have a stage in my CI pipeline (gitlab-ci) as follows:
build_node:
  stage: Build Prerequisites
  only:
    - staging
    - production
    - ci
  image: node:15.5.0
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - ui/node_modules
  script:
    - cd ui
    - yarn install --network-timeout 600000
    - CI=false yarn build
    - mv build ../http
The UI, however, is not the only part of the project. There are other files with their own build processes. So whenever we commit changes to only those other files, this stage gets rerun every time, even if nothing in the ui folder changed.
Is there a way to have GitLab cache or otherwise not rebuild this every time if there were no changes? Any changes that should trigger a rebuild would all be under the ui folder. Can it just reuse the older build if possible?
It is possible to do this in recent GitLab versions using the rules:changes keyword.
rules:
  - changes:
      - ui/*
Link: https://docs.gitlab.com/ee/ci/jobs/job_control.html#variables-in-ruleschanges
This will trigger the job only when something inside the ui folder changed.
Check this link for more info: https://docs.gitlab.com/ee/ci/yaml/#ruleschanges
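Applied to the build_node job from the question, a sketch could look like the following. Note that rules cannot be combined with only, so the branch restriction moves into the rules entry as well; the if expression and the recursive ui/**/* glob are assumptions about how you would express that:

build_node:
  stage: Build Prerequisites
  image: node:15.5.0
  rules:
    # Run only on these branches, and only when something under ui/ changed
    - if: '$CI_COMMIT_BRANCH =~ /^(staging|production|ci)$/'
      changes:
        - ui/**/*
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - ui/node_modules
  script:
    - cd ui
    - yarn install --network-timeout 600000
    - CI=false yarn build
    - mv build ../http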

Why does GitLab CI not find my cached folder?

I have a list of CI jobs running in my GitLab, and the caching does not work as expected:
This is how my docu-generation job ends:
[09:19:33] Documentation generated in ./documentation/ in 4.397 seconds using gitbook theme
Creating cache angular...
00:02
WARNING: frontend/node_modules: no matching files
frontend/documentation: found 136 matching files
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Job succeeded
I then start a deployment job (to GitLab Pages), but it fails because it doesn't find the documentation folder:
$ cp -r frontend/documentation .public/frontend
cp: cannot stat 'frontend/documentation': No such file or directory
This is the cache config of the generation job:
generate_docu_frontend:
  image: node:12.19.0
  stage: build
  cache:
    key: angular
    paths:
      - frontend/node_modules
      - frontend/documentation
  needs: ["download_angular"]
and this is for deployment:
deploy_documentation:
  stage: deploy
  cache:
    - key: angular
      paths:
        - frontend/node_modules
        - frontend/documentation
      policy: pull
    - key: laravel
      paths:
        - backend/vendor
        - backend/public/docs
      policy: pull
Does anyone know why my documentation folder is missing?
The message in your job output, "No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.", just means that your runners are not using Amazon S3 (or something similar like MinIO) to store your cache.
Without S3/Minio, the cache only lives on the runner that first ran the job and cached the resources. This means that the next time the job runs and it happens to be picked up by a different runner, it won't have the cache. In that case, you'd run into an error like this.
There are a couple of ways around this:
Configure your runners to use S3/Minio (Minio has an open source, free-to-use license if you're interested in hosting it yourself).
Only use one runner (not a great solution since generally more runners means faster pipelines and this would slow things down considerably, though it would solve the cache problem).
Use tags. Tags are used to ensure that a job runs on a specific runner (or runners). Let's say, for example, that 1 out of your 10 runners has access to your production servers, but all of them have access to your lower-environment servers. Your lower-env jobs can run on any runner, but your production deployment job has to run on the runner with prod access. You can do this by putting a tag on the runner, say prod-access, and putting the same tag on the prod deploy job. This ensures that the job runs on the runner with prod access. The same mechanism can be used here to ensure the cache is available; a minimal sketch follows this list.
Use artifacts instead of cache. I'll explain this option below as it's really what you should be using for this use case.
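For the tags option, a sketch might look like this (docs-runner is a hypothetical tag name; it has to match a tag configured on the runner that holds the local cache):

generate_docu_frontend:
  image: node:12.19.0
  stage: build
  tags:
    - docs-runner   # pin the job to the tagged runner so its local cache is reused
  # ...cache, script, needs as before

deploy_documentation:
  stage: deploy
  tags:
    - docs-runner   # same tag, so this job lands on the runner that stored the cache
  # ...cache, script as before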
Let's briefly explain the difference between Cache and Artifacts:
Cache is generally best used for dependency installation, like npm or composer (for PHP projects). When you have a job that runs npm ci or composer install, you don't want it to run every single time your pipeline runs when you haven't necessarily changed the dependencies, as that wastes time. Use the cache keyword to cache the dependencies so that subsequent pipelines don't have to install them again.
Artifacts are best used when you need to share files or directories between jobs in the same pipeline. For example, after installing npm dependencies, you might need to use the node_modules directory in another job in the pipeline. Artifacts are also uploaded to the GitLab server by the runner at the end of the job, as opposed to being stored locally on the runner that ran the job. All previous artifacts will be downloaded for all subsequent jobs, unless controlled with either dependencies or needs.
Artifacts are the better choice for your use case.
Let's update your .gitlab-ci.yml file to use artifacts instead of cache:
stages:
  - build
  - deploy

generate_docu_frontend:
  image: node:12.19.0
  stage: build
  script:
    - ./generate_docs.sh # this is just a representation of whatever steps you run to generate the docs
  artifacts:
    paths:
      - frontend/node_modules
      - frontend/documentation
    expire_in: 6 hours # your GitLab instance will have a default, you can override it like this
    when: on_success # don't attempt to upload the docs if generating them failed

deploy_documentation:
  stage: deploy
  script:
    - ls # just an example showing that frontend/node_modules and frontend/documentation are present
    - deploy.sh # whatever else you need to run this job

GitLab CI caching key

Say I have the following step in my .gitlab-ci.yml file:
setup_vue:
  image: ....
  stage: setup
  script:
    - cd vue/
    - npm install --no-audit
  cache:
    key: node-cache
    paths:
      - vue/node-modules/
I see:
Checking cache for node-cache-1...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
And after the script runs:
Creating cache node-cache-1...
Created cache
WARNING: vue/node-modules/: no matching files
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Job succeeded
When I try to get the cache on the next step like so:
test_vue:
  image: ....
  stage: test
  cache:
    key: node-cache
  script:
    - cd docker-hotreload-vue
    - cqc src
    - npm test
It doesn't try to retrieve any cache and just runs the script (which obviously fails). According to the GitLab docs this is the correct way to do it. (I'm using a Docker runner.)
Here's the output I get:
Fetching changes...
fatal: remote origin already exists.
Removing vue/node_modules/
HEAD is now at ....
Checking out ...
Skipping Git submodules setup
$ cd docker-hotreload-vue
$ cqc src
I am using tags to ensure the same runner is executing the jobs.
Try updating your key to the below:
cache:
  key: ${CI_COMMIT_REF_SLUG}
This solved my problem. I had 3 stages: build, test, package. Without the key set to ${CI_COMMIT_REF_SLUG}, the cache only worked for the test stage. After updating the key, the package stage can now also extract the cache properly.
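Applied to the jobs from the question, a sketch might look like this. The paths are repeated in the consuming job (each job defines its own cache config), and the path is written as vue/node_modules/ with an underscore, since the original vue/node-modules/ spelling is exactly what the "no matching files" warning points at; treat both details as assumptions about the intended setup:

setup_vue:
  image: ....
  stage: setup
  script:
    - cd vue/
    - npm install --no-audit
  cache:
    key: ${CI_COMMIT_REF_SLUG}   # one cache per branch, shared by every job on that branch
    paths:
      - vue/node_modules/        # underscore: the directory npm actually creates

test_vue:
  image: ....
  stage: test
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - vue/node_modules/
    policy: pull                 # the test job only needs to read the cache
  script:
    - cd docker-hotreload-vue
    - cqc src
    - npm test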
