AWS CodeBuild not generating correct artifact files - node.js

I have a monorepo with yarn workspaces and the following structure:

packages
  server
    ... # @my-repo/server files
  shared
    translations
      ... # @my-repo/translations files
    validations
      ... # @my-repo/validations files

@my-repo/translations and @my-repo/validations are dependencies of @my-repo/server.
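For context, the server package's manifest declares the workspace packages as ordinary dependencies. A minimal sketch (the version specifiers are my assumption, not shown in the question):

{
  "name": "@my-repo/server",
  "dependencies": {
    "@my-repo/translations": "*",
    "@my-repo/validations": "*"
  }
}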
To build the server app, I use the following buildspec.yml:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 14
    commands:
      - echo Installing Yarn...
      - npm install -g yarn
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - yarn install
      - echo Building Shared Packages ...
      - yarn buildShared
      - echo Listing directories ...
      - ls node_modules/@my-repo/*
      - echo Installing source NPM dependencies...
      - yarn install
  build:
    commands:
      - echo Build started on `date`
      - yarn workspace @my-repo/server build
  post_build:
    commands:
      - echo Build completed on `date`

artifacts:
  files:
    - '**/*'
  discard-paths: no
The yarn buildShared command is a script I created that builds all the packages in the shared folder (it simply runs yarn workspace @my-repo/foo build for each shared package in one command).
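For illustration, such a root script could be as simple as the following sketch, assuming the two shared packages above (the actual script is not shown in the question):

{
  "scripts": {
    "buildShared": "yarn workspace @my-repo/translations build && yarn workspace @my-repo/validations build"
  }
}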
One of the steps prints the contents of the node_modules/@my-repo directory, and it lists everything as expected (all of my packages are there and correctly built, each with a generated dist folder).
CodeBuild then creates a build artifact in S3, but when I download and open the latest artifact, there is no node_modules/@my-repo folder at all; there is only a node_modules folder with the "normal" packages (the ones not coming from yarn workspaces).
After trying everything I could, I decided to copy every package I need from @my-repo into a temporary folder, planning to copy it back into node_modules afterwards (I know this is wrong, but I only did it to debug what was happening).
So I added this to my buildspec.yml:

- cp -r node_modules/@my-repo/ ./temp_modules
- ls
Apparently it worked, because ls listed the temp_modules folder, but again, when I downloaded the latest build from S3, there was no temp_modules.
I can't figure out why some files end up in the artifact and others don't.
This is the second full day I've spent trying to figure out why the files from node_modules aren't making it into the S3 bucket.

Related

How to integrate various services for building a project in GitLab CI/CD?

I have a project that requires npm and Gradle for the build, and Docker for building and pushing the image.
At first I thought I should create my own Ubuntu image with Gradle and npm set up, but I found out that this is not what Docker images are for.
So I hoped to run the official Gradle and Node images as services so that my script can call their commands, but for some reason that is not happening.
My .gitlab-ci.yml:

variables:
  IMAGE_NAME: my.registry.production/project
  IMAGE_TAG: $CI_COMMIT_BRANCH
  GIT_SUBMODULE_STRATEGY: recursive

stages:
  - build
  - deploy

build_project:
  stage: build
  image: ubuntu:jammy
  services:
    - name: node:12.20
      alias: npm
    - name: gradle:6.3.0-jre8
      alias: gradle
  before_script:
    - git submodule init && git submodule update --remote --recursive
  script:
    - cd project-server && npm install && gradle clean build -Pprod -Pwar -x test -x integrationTest

deploy_image:
  stage: deploy
  image: docker:20.10.17
  services:
    - name: docker:20.10.17-dind
      alias: docker
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASSWORD my.registry.production
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
If anyone has any info on how to solve this, I would greatly appreciate it, since I'm a DevOps novice.
Edit 1:
This is my Dockerfile for a custom image with Gradle and Node installed:
FROM ubuntu:jammy
LABEL key=DevOps
SHELL ["/bin/bash", "--login", "-i", "-c"]
RUN apt update && apt upgrade -y && apt install curl -y
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
RUN source /root/.bashrc && nvm install 12.14.1
RUN nvm install 12.20.0
RUN apt install zip unzip
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
RUN sdk install java 8.0.302-open
RUN sdk install gradle 3.4.1
SHELL ["/bin/bash", "--login", "-c"]
CMD [ "bin/bash" ]
After I run it, it says that npm is not found in $PATH; I tried Java and Gradle as well, but they weren't found in the path either.
I don't know why, since I installed them, as you can tell from the Dockerfile.
As far as I know, a Docker image corresponds to one build, so if you have multiple services you need to build each one into its own Docker image and then tie the images together in a docker-compose.yml file.
I think you can do the following:
1. Build the npm project into a Docker image.
2. Build the Gradle project into a Docker image.
3. Write a docker-compose.yml file referencing both images (a sketch follows below).
Once you have done that, the pipeline calls the docker-compose.yml file.
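A hypothetical docker-compose.yml along these lines; the service and image names are placeholders, not from the question:

# Placeholder names; replace with the image tags your pipeline pushes.
services:
  npm-app:
    image: my.registry.production/project-npm:latest
  gradle-app:
    image: my.registry.production/project-gradle:latest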
I hope this will be helpful.
Consider a few suggestions based on fundamental concepts about deployment in your CI/CD pipeline:
Remove the services keyword. See GitLab's official documentation on what the services keyword in the gitlab-ci.yml file is (and is not) for: the feature provides network-accessible services to your job at runtime (like a database): https://docs.gitlab.com/ee/ci/services/index.html
Your project uses npm as a dependency management system and Gradle as a build tool. Both of these pieces of software are perfectly appropriate to run on the host operating system of the container runtime inside GitLab's pipeline job: you need these tools to assemble the build artifact on the same host your code has been downloaded to in the Runner.
Think about the overall size of the base image in your build_project job, and consider how the time to download that image over the network onto the Runner will impact your job and overall pipeline duration. If performance can be improved by baking build dependencies into a custom Dockerfile, do that; if your image gets too large, instead use shell commands inside the script block to download the tools when the job runs. There are pros and cons to both.
Break shell scripts into one command per line for easier troubleshooting of failures; your job logs will then show the line number of the command that returned a non-zero exit code:

...
script:
  - cd project-server
  - npm install
  - gradle clean build -Pprod -Pwar -x test -x integrationTest
...
It's recommended to use the Gradle wrapper (gradlew) instead of the gradle executable directly whenever possible. Configure it within your project and check the wrapper's configuration files into your version control system; this simplifies your build dependencies: https://docs.gradle.org/current/userguide/gradle_wrapper.html
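With the wrapper checked in, the script block above could use it instead of a system-wide Gradle (a sketch, assuming the wrapper files live in project-server):

script:
  - cd project-server
  - npm install
  - ./gradlew clean build -Pprod -Pwar -x test -x integrationTest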

Codebuild is unable to find build directory

I am running my buildspec.yml, in which the npm run build command should create the build directory in the root path; however, CodeBuild is unable to find the build directory. I have tried every fix I could find in the resources, but I still cannot resolve "no matching base directory path found for build".
PS: I am using CodeCommit as the source, CodeBuild and CodePipeline to run the deployment steps, and an S3 bucket to deploy the build directory to.
My buildspec.yml:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Installing dependencies...
      - npm cache clean --force
      - npm install
      - npm --version
  build:
    commands:
      - aws s3 rm s3://bucketname --recursive
  post_build:
    commands:
      - pwd
      - cd src
      - npm run build
      - ls -la
      - aws s3 sync build s3://bucketname

artifacts:
  files:
    - "**/*"
I had to remove cd src from the post_build phase, because my build directory is created at the root; after that it worked with the pipeline, without any error.
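For reference, the working post_build phase then looks like this (the same commands, minus the cd src line):

post_build:
  commands:
    - pwd
    - npm run build
    - ls -la
    - aws s3 sync build s3://bucketname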

Share the dist folder of loopback4

I just built the dist folder of my LoopBack 4 API and want to deploy it on one of my machines:

npm run build && tar -zcvf dist.tar.gz ./dist (then I move the dist.tar.gz file to another machine and untar it)

When I try to run it with node ./dist/index.js, I get this error: tslib package not found.
Is there something I'm missing?
There is no package.json in the dist folder, so there is no way to install the dependencies... Should I add a flag or something? (I didn't find any explanation in the documentation.)
In line with other TypeScript projects, LoopBack 4 projects require dist, package.json, and package-lock.json to be published.
In production, you can then run this command to skip the build process and execute the pre-built artifacts instead:
node .
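A minimal sketch of the whole flow under that advice; the archive name is my own choice, and --omit=dev is the npm 8+ spelling of --production:

# On the build machine: ship the compiled output plus the package manifests
npm run build
tar -czf app.tar.gz dist package.json package-lock.json

# On the target machine: install runtime dependencies (including tslib), then run
tar -xzf app.tar.gz
npm ci --omit=dev
node .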

How to force app engine upload node_modules

In my project we are using Node.js with TypeScript for Google Cloud App Engine app development. We have our own build mechanism to compile the ts files into JavaScript and collect them into a complete, runnable package, so we don't want to rely on Google Cloud to install the dependencies; instead, we want to upload all the packages inside node_modules to Google Cloud.
But it seems Google Cloud always ignores the node_modules folder and runs npm install during the deployment. Even when I tried removing 'skip_files: - ^node_modules$' from app.yaml it didn't work; Google Cloud still installs the packages by itself.
Does anyone have ideas on how to deploy a Node app together with its node_modules? Thank you.
I observed the same issue.
My workaround was to rename node_modules/ to node_modules_hack/ before deploying, which prevents App Engine from removing it.
I restore the original name at installation time with the following (partial) package.json file:
"__comments": [
"TODO: Remove node_modules_hack once AppEngine stops stripping node_modules/"
],
"scripts": {
"install": "mv -fn node_modules_hack node_modules",
"start": "node server.js"
},
You can confirm that App Engine strips your node_modules/ by looking at the Docker image it generates. You can find it on the Images page; there you get a command line that you can run in the Cloud Console to fetch it. Then you can run docker run <image_name> ls to see your directory structure. The image is created after npm install, so once you apply the workaround above, you'll see your node_modules/ there.
The newest solution is to allow node_modules in .gcloudignore.
Below is the default .gcloudignore (the one an initial execution of gcloud app deploy generates if you don't already have one) with the change you need:
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
# Node.js dependencies:
# node_modules/ # COMMENT OR REMOVE THIS LINE
Allowing node_modules in .gcloudignore no longer works.
App Engine deployment has been switched to buildpacks since Oct/Nov 2020, and the Cloud Build step it triggers will always remove the uploaded node_modules folder and reinstall the dependencies using yarn or npm.
Here is the related buildpack code:
https://github.com/GoogleCloudPlatform/buildpacks/blob/89f4a6ba669437a47b482f4928f974d8b3ee666d/cmd/nodejs/yarn/main.go#L60
This is desirable behaviour, since an uploaded node_modules could come from a different platform and break compatibility with the Linux runner used to run your app in the App Engine environment.
So, in order to skip the npm/yarn dependency installation in Cloud Build, I would suggest the following:
1. Use a Linux CI runner with the same Node version you are using in the App Engine environment.
2. Create a tar archive of your node_modules, so you don't upload a multitude of files on each gcloud app deploy.
3. Keep the node_modules dir ignored in .gcloudignore.
4. Unpack the node_modules.tar.gz archive in a preinstall script. Don't forget to keep backward compatibility in case the tar archive is missing (local development, etc.):
{
  "scripts": {
    "preinstall": "test -f node_modules.tar.gz && tar -xzf node_modules.tar.gz && rm -f node_modules.tar.gz || true"
  }
}
Note the ... || true part: it ensures the preinstall script returns a zero exit code no matter what, so yarn/npm install will continue.
Github Actions workflow to pack and upload your dependencies for App Engine deployment could look like this:
deploy-gae:
  name: App Engine Deployment
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v2
    # Preferable to use the same version as in the GAE environment
    - name: Set Node.js version
      uses: actions/setup-node@v2
      with:
        node-version: '14.15.4'
    - name: Save prod dependencies for GAE upload
      run: |
        yarn install --production=true --frozen-lockfile --non-interactive
        tar -czf node_modules.tar.gz node_modules
        ls -lah node_modules.tar.gz | awk '{print $5,$9}'
    - name: Deploy
      run: |
        gcloud --quiet app deploy app.yaml --no-promote --version "${GITHUB_ACTOR//[\[\]]/}-${GITHUB_SHA:0:7}"
This is just an expanded version of the initially suggested hack.
Note: in case you have a gcp-build script in your package.json, you will need to create two archives (one for production dependencies and one for dev) and modify the preinstall script to unpack whichever one is currently needed (depending on the NODE_ENV set by the buildpack).
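A sketch of what such a NODE_ENV-dependent preinstall could look like; the archive naming scheme is my assumption, not from the answer:

{
  "scripts": {
    "preinstall": "ARCHIVE=node_modules.${NODE_ENV:-development}.tar.gz; test -f \"$ARCHIVE\" && tar -xzf \"$ARCHIVE\" && rm -f \"$ARCHIVE\" || true"
  }
}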

Travis CI: file from node module not found

I just configured Travis CI to also run my test cases. This is my .travis.yml file:
language: node_js
node_js:
  - "0.10"
before_install: npm install -g grunt-cli
install: npm install
When it tries to run my test cases, it gives me the following error:
Error loading resource file:///home/travis/build/repo/test/node_modules/mocha/mocha.css (203).
Details: Error opening /home/travis/build/repo/test/node_modules/mocha/mocha.css: No such file or directory
Error loading resource file:///home/travis/build/repo/test/node_modules/mocha/mocha.js (203).
Details: Error opening /home/travis/build/repo/test/node_modules/mocha/mocha.js: No such file or directory
TypeError: 'undefined' is not a function (evaluating 'mocha.setup('bdd')')
So it cannot find the mocha.css and mocha.js files in the node_modules folder.
I'm guessing it cannot find these files because they are not uploaded to Git: I listed node_modules in my .gitignore file, because I do not want to upload all the modules.
What is the common/clean way to fix this problem?
The simplest way to get out of this situation is:

$ git add --force test/node_modules/mocha/mocha.css test/node_modules/mocha/mocha.js
$ git commit -m "Add the missing files"
$ git push

The --force option allows you to add files that are being ignored via .gitignore.
