I am new to Node.js and was trying to deploy a Node.js project via GitLab CI. After spotting the build error in the pipeline I realized I had added the node_modules folder to .gitignore, so I am not pushing node_modules to GitLab. The node_modules folder is 889 MB locally, so there is no way I will push it. What approach should I use to get the node_modules folder from somewhere else?
For instance, the node_modules path is always present and accessible on the remote server. Do I need to include that path in package.json?
Can node_modules be maintained using Docker? If so, how would I keep it up to date for each specific project?
You are right not to check in the node_modules folder; it is populated automatically when you run npm install.
This should be part of your build pipeline in GitLab CI. The pipeline allows multiple stages and the ability to pass artefacts through to the next stage. In your case you want to save the node_modules folder that npm install creates; you can then use those dependencies for tests or deployment.
Since npm v5 there is a lockfile (package-lock.json) to make sure that what you run locally is the same as what you run on the server.
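If your runner's npm is 5.7 or newer, a small variation worth considering is installing from the lockfile with npm ci instead of npm install; it installs exactly what package-lock.json records and fails fast if the lockfile and package.json disagree:

# assumes package-lock.json is committed alongside package.json
npm ci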
You can also use something like Renovate to automatically update your dependencies if you want to pin them and still get security updates managed automatically. (Renovate is open source, so it can be run on GitLab.)
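As a minimal sketch, a Renovate setup can start from a renovate.json in the repository root that just extends the base preset, and you then tune it per project from there:

{
  "extends": ["config:base"]
}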
A really simple GitLab CI pipeline could be:
# .gitlab-ci.yml

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install
  artifacts:
    name: "${CI_BUILD_REF}"
    expire_in: 10 mins
    paths:
      - node_modules

deploy:
  stage: deploy
  script:
    - some deploy command
I am a bit new to Azure, and today I am trying to create a pipeline for publishing an npm package to Azure Artifacts.
The issue is that after the pipeline builds successfully, I can see the published package in the artifacts. However, the published package is almost empty.
There is only package.json and readme.md; no dist folder at all.
Here is my Pipeline:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
  - master

pool:
  vmImage: ubuntu-latest

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '10.x'
    displayName: 'Install Node.js'

  - script: |
      npm install
      npm run build
      npm publish
    displayName: 'npm install and build and publish'
Also, when I build the project locally and run npm publish, the package is published as it should be, with all files in place.
Is there something I am doing wrong?
Finally I found the issue.
The pipeline definition was actually right, apart from one little thing:
versionSpec: '10.x'
The Node version was wrong: it was a pretty old one. The definition was originally copied from one of the official Azure manuals, so the version dated from several years back.
versionSpec: '14.x'
And the build was successful, with all files in place.
Hope that will be helpful for somebody here.
When publishing packages to npm you need to authenticate with your credentials. It runs successfully locally because of the .npmrc file saved on your computer; when npm publish runs in CI, that file doesn't exist, which results in an error. Please try the following steps:
Generate an automation access token from this URL: https://www.npmjs.com/
Go to your repo and add a file named ".npmrc" with the content //registry.npmjs.org/:_authToken={your-token-value}
Notice:
It is recommended to set the access token as an environment variable in the Pipeline library (a sketch of this setup follows after these notes).
Please use lowercase words as the package's name in package.json. Otherwise you will receive a 400 error.
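As a minimal sketch, assuming the secret Pipeline variable is called NPM_TOKEN (the name is just an example), the committed .npmrc can reference the variable instead of the raw token, and the publish step maps the secret into the script's environment:

# .npmrc - npm expands the environment variable at run time
//registry.npmjs.org/:_authToken=${NPM_TOKEN}

- script: |
    npm install
    npm run build
    npm publish
  displayName: 'npm install, build and publish'
  env:
    NPM_TOKEN: $(NPM_TOKEN)   # secret variables must be mapped explicitly for scripts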
I am trying to run a CI job in GitLab:
image: node:latest

stages:
  - deploy

production:
  stage: deploy
  before_script:
    - npm config set prefix /usr/local
    - npm install -g serverless
  script:
    - serverless deploy
I am using the Docker image as they suggest, but it cannot find npm (or node).
How can I get this working?
Well, this is a bit weird, as your CI config is correct.
If you are just using gitlab.com and their shared runners, then this .gitlab-ci.yml will work.
One possible reason could be that you have runners registered as SSH/shell executors in the project repo. If so, the image tag you specified will simply be ignored.
An error like command not found can then occur because the server where you added the runner doesn't have Node.js installed; the npm config ... command in before_script fails with exit code 127, and the pipeline stops right there.
If you have multiple runners, tag them and tag your jobs in the CI YAML as well.
And if you are trying to run the job on your own server, you have to install Docker first.
By the way, with the Docker image node:latest you don't need npm config set prefix /usr/local, as the prefix already is /usr/local.
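Putting that together, a sketch of the job on a Docker runner could look like the following (the docker tag is only an example and must match however your runners are actually tagged):

image: node:latest

stages:
  - deploy

production:
  stage: deploy
  tags:
    - docker              # only needed if you route jobs to tagged runners
  before_script:
    - npm install -g serverless
  script:
    - serverless deploy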
I'm using GitLab CI in order to implement CI for my Node.js app. I'm already using artifacts and sharing the dependencies between jobs; however, I would like to make it faster. Every time a pipeline starts, it installs the dependencies during the first job, and I'm thinking of preventing this by having all dependencies in a Docker image and passing that image to the test & production stages. However, I have been unable to do so. Apparently GitLab doesn't run the code inside my image's WORKDIR.
Following is my Dockerfile:
FROM node:6.13-alpine
WORKDIR /home/app
COPY package.json .
RUN npm install
CMD ["sh"]
And following is my gitlab-ci.yml:
test:
  image: azarboon/dependencies-test
  stage: test
  script:
    - pwd
    - npm run test
Looking at the logs, pwd results in /builds/anderson-martin/lambda-test, which is different from the defined WORKDIR, and the installed dependencies are not found. Do you have any recommendation for how I can Dockerize my dependencies and speed up the build stage?
Probably the easiest way to solve your issue is to symlink the node_modules folder from your base image into the GitLab CI workspace, like this:
test:
  image: azarboon/dependencies-test
  stage: test
  script:
    - ln -s /home/app/node_modules ./node_modules
    - npm run test
The syntax for symlinking is ln -s EXISTING_FILE_OR_DIRECTORY SYMLINK_NAME.
Please note that /home/app/ is the WORKDIR you're using in your base image.
GitLab also provides other functionality to share dependencies: on the one hand caching, and on the other hand job artifacts.
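For instance, a common caching sketch keys the cache on the branch and keeps the install step so a cold cache still works (caches are best effort, not guaranteed to be present):

cache:
  key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
  paths:
    - node_modules/

test:
  stage: test
  script:
    - npm install              # fast when the cache is warm, still correct when it is cold
    - npm run test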
I have this bitbucket-pipelines.yml; is there any way to copy the build that is created by npm run build into my repository?
image: node:6.9.4

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - npm install
            - npm run build
If you mean saving the build as a Download in your Bitbucket repository, then we have a guide on how to do it via the Bitbucket API. The basic steps are:
Create an app password for the repository owner
Create a Pipelines environment variable with the authentication token
Upload your artifacts to Bitbucket Downloads using curl and the Bitbucket REST API
The details of how to do this are covered in the guide; a rough sketch of the upload step is shown below.
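Assuming the app password is stored in a secured Pipelines variable named BB_AUTH_STRING (as username:app_password) and the artifact to upload is called build.tar.gz (both names are just examples), the upload is a single curl call against the Downloads endpoint:

curl -X POST --user "${BB_AUTH_STRING}" \
  "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" \
  --form files=@"build.tar.gz"

BITBUCKET_REPO_OWNER and BITBUCKET_REPO_SLUG are default variables that Pipelines provides for the current repository.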
If you mean committing the build back to the Git repository, we wouldn't recommend that. Storing build output in Git isn't ideal; you should use Bitbucket Downloads or an npm registry for that instead. But if you really want to, you can follow the guide above to create an app password, pass it as an environment variable into Pipelines, set it in an HTTPS Git remote, and use git push to upload the build back to Bitbucket.
In my project we are using Node.js with TypeScript for Google Cloud App Engine app development. We have our own build mechanism to compile the TS files into JavaScript and then collect them into a complete runnable package, so that we don't have to rely on Google Cloud to install dependencies; instead we want to upload all the packages inside node_modules to Google Cloud.
But it seems Google Cloud will always ignore the node_modules folder and run npm install during the deployment. Even though I tried to remove 'skip_files: - ^node_modules$' from app.yaml, it doesn't work; Google Cloud still installs the packages itself.
Does anyone have an idea how to deploy a Node app together with its node_modules? Thank you.
I observed the same issue.
My workaround was to rename node_modules/ to node_modules_hack/ before deploying. This prevents AppEngine from removing it.
I restore it to the original name on installation, with the following (partial) package.json file:
  "__comments": [
    "TODO: Remove node_modules_hack once AppEngine stops stripping node_modules/"
  ],
  "scripts": {
    "install": "mv -fn node_modules_hack node_modules",
    "start": "node server.js"
  },
You can confirm that App Engine strips your node_modules/ by looking at the Docker image it generates. You can find it on the Images page; they give you a command line that you can run in the Cloud Console to fetch it. Then you can run docker run <image_name> ls to see your directory structure. The image is created after npm install, so once you use the workaround above, you'll see your node_modules/ there.
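The local half of the workaround is then just a rename around the deploy command, roughly like this (assuming you deploy straight from your working copy):

mv node_modules node_modules_hack      # hide the folder from App Engine's stripping
gcloud app deploy
mv node_modules_hack node_modules      # restore the local checkout afterwards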
The newest solution is to allow node_modules in .gcloudignore.
Below is the default .gcloudignore (the one that an initial run of gcloud app deploy generates if you don't have one already) with the change you need:
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
# Node.js dependencies:
# node_modules/ # COMMENT OR REMOVE THIS LINE
Allowing node_modules in .gcloudignore no longer works.
App Engine deployment has used buildpacks since Oct/Nov 2020. The Cloud Build step it triggers will always remove an uploaded node_modules folder and reinstall the dependencies using yarn or npm.
Here is the related buildpack code:
https://github.com/GoogleCloudPlatform/buildpacks/blob/89f4a6ba669437a47b482f4928f974d8b3ee666d/cmd/nodejs/yarn/main.go#L60
This is desirable behaviour, since an uploaded node_modules could come from a different platform and break compatibility with the Linux runner used to run your app in the App Engine environment.
So, in order to skip the npm/yarn dependency installation in Cloud Build, I would suggest the following:
Use a Linux CI runner with the same Node version you are using in the App Engine environment.
Create a tar archive of your node_modules, so you don't upload a multitude of files on each gcloud app deploy.
Keep the node_modules dir ignored in .gcloudignore.
Unpack the node_modules.tar.gz archive in a preinstall script. Don't forget to keep backward compatibility in case the tar archive is missing (local development, etc.):
{
  "scripts": {
    "preinstall": "test -f node_modules.tar.gz && tar -xzf node_modules.tar.gz && rm -f node_modules.tar.gz || true"
  }
}
Note the ... || true part. It ensures that the preinstall script returns a zero exit code no matter what, so yarn/npm install will continue.
A GitHub Actions workflow to pack and upload your dependencies for the App Engine deployment could look like this:
deploy-gae:
  name: App Engine Deployment
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v2

    # Preferable to use the same version as in GAE environment
    - name: Set Node.js version
      uses: actions/setup-node@v2
      with:
        node-version: '14.15.4'

    - name: Save prod dependencies for GAE upload
      run: |
        yarn install --production=true --frozen-lockfile --non-interactive
        tar -czf node_modules.tar.gz node_modules
        ls -lah node_modules.tar.gz | awk '{print $5,$9}'

    - name: Deploy
      run: |
        gcloud --quiet app deploy app.yaml --no-promote --version "${GITHUB_ACTOR//[\[\]]/}-${GITHUB_SHA:0:7}"
This is just an expanded version of the initially suggested hack.
Note: in case you have a gcp-build script in your package.json, you will need to create two archives (one for production dependencies and one for dev) and modify the preinstall script to unpack the one that is currently needed (depending on the NODE_ENV set by the buildpack).
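One possible shape for that, assuming the archives are named node_modules.production.tar.gz and node_modules.development.tar.gz and that NODE_ENV is visible to the preinstall script (both assumptions are worth verifying against the buildpack you target):

{
  "scripts": {
    "preinstall": "A=node_modules.${NODE_ENV:-production}.tar.gz; test -f $A && tar -xzf $A && rm -f $A || true"
  }
}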