Get a Node project's version using GitLab CI - node.js

I am currently implementing a CI script for a Node project. One of the steps in this process is getting the project's version from the package.json file and setting it as an environment variable (used subsequently for various operations).
What I have tried so far is creating the following job:
version:get-version:
  stage: version
  script: |
    npm pkg get version
    version=$(npm pkg get version)
    echo "Current version: $version"
    echo "PROJECT_VERSION=$version" >> .env
  artifacts:
    reports:
      dotenv: .env
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
    - if: '$CI_COMMIT_BRANCH =~ /^feat.*/'
    - if: $CI_COMMIT_TAG
The problem is that when I run the individual commands by hand, using the same Docker image my CI uses (node:lts-alpine3.16), everything works fine. However, when this job runs in the pipeline, it fails with the following output:
Created fresh repository.
Checking out a3ac42fd as feat/SIS-540-More-CI-Changes...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:00
$ npm pkg get version # collapsed multi-line command
Uploading artifacts for failed job 00:01
Uploading artifacts...
WARNING: .env: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: command terminated with exit code 243
What is even more interesting is that sometimes the same job succeeds with no problems (at least printing the version via npm pkg get version). I am honestly stuck and have no idea how to troubleshoot or resolve this.
Any hints or ideas are more than welcome.
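For context, the variable written to the dotenv report is meant to be picked up by later jobs, roughly like this (the job name and stage are illustrative, not part of the actual pipeline):

release:notes:
  stage: release
  needs:
    - job: "version:get-version"
      artifacts: true              # pull in the dotenv report so PROJECT_VERSION is available
  script:
    - echo "Releasing version $PROJECT_VERSION"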

Related

Pipeline failed

I am trying to run Cypress test scripts in a GitLab CI/CD pipeline, but an error occurred.
Here is my code in the .gitlab-ci.yml file:
image: docker:18.09
stages:
  - test
test:
  stage: test
  script:
    - npm install
    - npm run test
Cypress provides custom Docker images to use to avoid dependency issues. You can check them out here: cypress docker images.
I also faced many weird issues implementing a Cypress job in CI. In order not to reinvent the wheel, you can use the cypress run job I shared in this open-source community hub for CI/CD jobs. It's customizable; you just need to include the job URL in your pipeline and override a few variables, as mentioned in the job's related documentation.
You should end up with something like this:
include:
  - remote: 'https://api.r2devops.io/job/r/r2devops-bot/cypress_run/latest.yaml'

cypress_run:
  variables:
    BASE_URL: '<your_server_url>'
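Alternatively, a job based directly on one of the official Cypress images could look roughly like the sketch below; the image tag and commands are illustrative, so check the cypress-io/cypress-docker-images repository for tags matching your Cypress version:

test:
  stage: test
  image: cypress/base:16.18.1   # example tag: Node plus the OS libraries Cypress needs
  script:
    - npm ci                    # install project dependencies, including Cypress
    - npx cypress run           # run the suite headlessly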

Coverage badge in gitlab is unknown

I am trying to set up a coverage badge for a Python project on GitLab.
I was following this question, but it is still not working.
Currently the coverage shows up on the CI/CD jobs page, but when I go to Settings > CI/CD > General pipelines, the coverage is still unknown.
This is how the coverage run is defined in the .gitlab-ci.yml file:
tests:
  stage: test
  only:
    - merge_requests
  script:
    - pip install poetry
    - poetry install
    - poetry run coverage run -m pytest
    - poetry run coverage report
    - poetry run coverage xml
  artifacts:
    paths: [coverage.xml]
Any ideas what might need to be set differently?
Okay, it looks like I also need to add main to the only: part of my .gitlab-ci.yml, and then it works. I was just hoping I could avoid running the tests twice (before the MR to main and again after merging).
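For reference, the adjusted job would look roughly like this (assuming main is the default branch):

tests:
  stage: test
  only:
    - merge_requests
    - main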

Cannot build / run test properly in travis. Permission denied

I have a simple Node.js project and wrote some tests with Jest. When I run them on my local machine with npm run test, the tests pass.
I moved the project to GitHub and wanted to tinker with Travis CI, but my build on Travis CI fails. The error message from Travis is shown below.
Below is the state of my Git repo:
I never committed the node_modules folder to Git because I read somewhere that it should be excluded, since Travis has its own environment to run and build the project.
Below is my .travis.yml :
Below is my package.json file:
I tried modifying this file as well. I am still stuck. Should I remove the lines that state "build"?
Appreciate any help!
Maybe try adding the following to .travis.yml:
before_script: chmod 0555 ./node_modules/.bin/travis
I resolved the situation by adding the below:
before_script: chmod 0555 ./node_modules/.bin/jest
I think this grants the CI machine permission to execute the jest binary, which lets the Jest unit tests run properly.
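In context, the fix sits in .travis.yml roughly like this (the Node version and script are illustrative):

language: node_js
node_js:
  - "14"
before_script:
  - chmod 0555 ./node_modules/.bin/jest   # make the jest binary executable on the build machine
script:
  - npm run test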

Azure DevOps Pipelines run a Node.js script as a step

What's the easiest way to run a Node.js script as a step in Azure DevOps Pipeline (release step)?
I tried a few approaches and it seems like this is not as straightforward as I'd like it to be. I'm looking for a general approach idea.
My last attempt was adding the script to the source repository under tools/script.js. I need to run it in a release pipeline (after build), from where I can't access the repo directly, so I added the entire repo as a second build artifact. Now I can reach the script file from the release agent, but I haven't found a way to actually run the script; there seems to be no option to run a Node.js script on an agent in general.
Option 1: Visual Pipeline Editor
1) Create the script file you want to run in your build/release steps
You can add the file to your Azure project repository, e.g. under tools/script.js and add any node modules it needs to run to your package.json, for example:
npm install --save-dev <module>
Commit and push, so your changes are online and Azure can see them.
2) Add your repo as an artifact for your release pipeline
You can skip this for build pipelines as they already have access to the repository.
3) Edit your release pipeline to set up the environment
Add a step to make sure the correct Node version is on the agent (Node.js Tool Installer):
Add a step to install all required node modules (npm):
4) Add your Node script step
Use the Bash step to run your Node script; make sure the working directory is set to the root of the project (where package.json is located):
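Since the screenshots are not reproduced here, a rough YAML equivalent of these steps might look like the sketch below (the Node version, artifact alias, and script path are illustrative):

steps:
  - task: NodeTool@0                # Node.js Tool Installer: ensure the Node version on the agent
    inputs:
      versionSpec: '16.x'
  - task: Npm@1                     # install the modules listed in package.json
    inputs:
      command: 'install'
      workingDir: '$(System.DefaultWorkingDirectory)/_my-repo'
  - task: Bash@3                    # run the Node script from the project root
    inputs:
      targetType: 'inline'
      script: 'node tools/script.js'
      workingDirectory: '$(System.DefaultWorkingDirectory)/_my-repo'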
Option 2: YAML
You have a script/shell step where you can execute custom commands; just use that to achieve the goal. Agents have Node installed on them; one thing you might need to do is use the "pick Node version" step (Node.js Tool Installer) to select the right Node version for your script.
Example:
trigger:
  - master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
  - checkout: self
    persistCredentials: true
    clean: true

  - bash: |
      curl $BEDROCK_BUILD_SCRIPT > build.sh
      chmod +x ./build.sh
    displayName: My script download
    env:
      BEDROCK_BUILD_SCRIPT: https://url/yourscript.sh

  - task: ShellScript@2
    displayName: My script execution
    inputs:
      scriptPath: build.sh
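If the file to run is a Node.js script checked into the repo, the same pattern works with a plain script step, for example (the path is illustrative):

  - script: node tools/script.js
    displayName: Run Node.js script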

How to store node modules between jobs and stages in gitlab with continuous integration

I am fairly new to GitLab CI and I've been trying different approaches to use the node_modules directory in my entire pipeline. From what I've read in the official docs, cache and artifacts seem to be valid approaches to pass on files between jobs:
cache is used to specify a list of files and directories which should be cached between jobs. You can only use paths that are within the project workspace.
However, my issue with the caching method is that the node_modules would be persisted between pipelines by default:
cache can be set globally and per-job.
from GitLab 9.0, caching is enabled and shared between pipelines and jobs by default.
I do not want to persist the node_modules between pipelines. What I actually want is to trigger a fresh install with npm in my setup stage and then allow all further jobs in the pipeline to use these modules. Hence, I started using artifacts instead of cache, which is described similarly:
artifacts is used to specify a list of files and directories which should be attached to the job after success. [...] The artifacts will be sent to GitLab after the job finishes successfully and will be available for download in the GitLab UI.
The dependency feature should be used in conjunction with artifacts and allows you to define the artifacts to pass between different jobs.
The artifact-dependency method seems to be usable in my case. However, both cache and artifacts are extremely inefficient and slow. The node_modules are installed and usable, but the entire directory then gets uploaded somewhere and is re-downloaded between each job. (I would really love to know what happens here... Where do the modules go?)
Is there a better approach to run npm install only once at the beginning of the pipeline and then keep the node_modules in the pipeline during its entire runtime? I do not want to keep the node_modules after all jobs are finished so they don't need to be uploaded or downloaded anywhere.
Sample pipeline configuration file to reproduce the behavior:
image: node:lts
stages:
- setup
- build
- test
node:
stage: setup
script:
- npm install
artifacts:
paths:
- node_modules/
build:
stage: build
script:
- npm run build
dependencies:
- node
test:
stage: test
script:
- npm run lint
- npm run test
dependencies:
- node
Where do the modules go?
By default, artifacts are saved on the main GitLab machine:
/var/opt/gitlab/gitlab-rails/shared/artifacts
Is there a better approach to run npm install only once at the beginning of the pipeline and then keep the node_modules in the pipeline during its entire runtime?
There are some options that you can try:
Merge the setup and build stages into one stage.
Use a local npm cache on the builder machines for faster npm install times, or use a private npm proxy registry (for example, Nexus or Artifactory); see the sketch after this list.
Check whether the main GitLab machine and the builders are on the same network, so uploads and downloads are faster.
Consider packaging your build in Docker. You get reusable Docker images between your GitLab stages. (Of course, there is the overhead of uploading the images to a Docker registry.)
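As a rough illustration of the local-cache idea (not the answer's exact setup), GitLab's cache keyword can hold npm's download cache instead of node_modules, so every job still runs npm ci but pulls very little from the registry:

default:
  cache:
    key:
      files:
        - package-lock.json    # new cache entry whenever the lockfile changes
    paths:
      - .npm/                  # npm's download cache, not node_modules
  before_script:
    - npm ci --cache .npm --prefer-offline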
