Error: Cannot find module '/app/__sapper__/build' on Cloud Build - node.js

I'm trying to set up an automated Cloud Build for a Sapper project that gets deployed to Cloud Run. However, I'm getting an error on the deploy.
This is my first attempt at a CI workflow, so I'm sure there are multiple things I'm doing wrong.
cloudbuild.yaml
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args:
    - kms
    - decrypt
    - --ciphertext-file=.env.enc
    - --plaintext-file=.env
    - --location=global
    - --keyring=jointcreative
    - --key=cloudbuild-env
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$PROJECT_ID", "."]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/$PROJECT_ID"]
- name: "gcr.io/cloud-builders/npm"
  args: ["ci", "--production"]
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - 'run'
    - 'deploy'
    - 'jointcreative'
    - '--image'
    - 'gcr.io/$PROJECT_ID/$PROJECT_ID'
    - '--region'
    - 'us-central1'
    - '--platform'
    - 'managed'
- name: "gcr.io/$PROJECT_ID/firebase"
  args: ['deploy']
Dockerfile
FROM mhart/alpine-node:12
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=0 /app .
COPY . .
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD ["node", "__sapper__/build"]
Error logs

The reason you get this error is that you never build the Sapper application with npm run build.
I published a repository with Sapper deployed to Cloud Run a few minutes ago on GitHub at https://github.com/mikenikles/sapper-on-cloud-run.
The Dockerfile I use is based on 3 stages to minimize the final image size.
# This stage builds the sapper application.
FROM mhart/alpine-node:12 AS build-app
WORKDIR /app
COPY . .
RUN npm install --no-audit --unsafe-perm
RUN npm run build
# This stage installs the runtime dependencies.
FROM mhart/alpine-node:12 AS build-runtime
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production --unsafe-perm
# This stage only needs the compiled Sapper application
# and the runtime dependencies.
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=build-app /app/__sapper__ ./__sapper__
COPY --from=build-app /app/static ./static
COPY --from=build-runtime /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "__sapper__/build"]
I also recommend the following .dockerignore file to copy only what is necessary for Sapper to run:
/*
!/package.json
!/package-lock.json
!/rollup.config.js
!/src
!/static
In your cloudbuild.yaml, you may want to consider adding the following to the Cloud Run deploy script if you plan on exposing the service publicly:
- 'managed'
- '--allow-unauthenticated'
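In context, the deploy step from your cloudbuild.yaml would then look like this (same values as in your question, with the extra flag appended at the end):
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - 'run'
    - 'deploy'
    - 'jointcreative'
    - '--image'
    - 'gcr.io/$PROJECT_ID/$PROJECT_ID'
    - '--region'
    - 'us-central1'
    - '--platform'
    - 'managed'
    - '--allow-unauthenticated'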

It looks like you're missing the step that actually builds the application (it can go in your cloudbuild/CI script or in your Dockerfile).
Sapper uses Rollup or webpack to bundle your app and places the output in the __sapper__/build directory. The COPY step in your Dockerfile copies this output into your final container.
Try adding a step that runs npm run build to your process, somewhere before the Docker image is built.
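For example, a minimal sketch of that extra step in cloudbuild.yaml, placed before the docker build step so that __sapper__/build already exists in the workspace when the Dockerfile's COPY . . runs (a plain npm ci is assumed here, since the build tooling usually lives in devDependencies, which --production would skip):
- name: "gcr.io/cloud-builders/npm"
  args: ["ci"]
- name: "gcr.io/cloud-builders/npm"
  args: ["run", "build"]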

Related

Cache task in Azure Pipeline to reduce build time

I would like to use the Azure Pipelines "Cache" task to cache npm and, later, when I build my Docker image from the Dockerfile, use that cache to decrease my build time. I rarely, almost never, change my package.json file nowadays, but the Docker image build in Azure Pipelines is very slow. Looking at the build logs for my Dockerfile, the 'npm install' command makes up most of the build time. I've done some research but can't find a case like this...
This is what I've come up with at the moment:
azure-pipeline.yml
- "test"
resources:
- repo: self
pool:
vmImage: "ubuntu-latest"
variables:
tag: "$(Build.BuildId)"
DOCKER_BUILDKIT: 1
npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: Cache#2
inputs:
key: 'npm | "$(Agent.OS)" | package.json'
path: "$(npm_config_cache)"
cacheHitVar: "CACHE_RESTORED"
displayName: Cache npm
- script: npm ci
- task: Docker#2
displayName: Build an image
inputs:
command: "build"
Dockerfile: "**/Dockerfile.test"
arguments: "--cache-from=$(npm_config_cache)" <--- I WOULD LIKE TO USE THE CACHE IN THIS TASK
tags: "$(tag)"
Dockerfile
FROM node:19-alpine as build
WORKDIR /app
COPY package.json .
ENV REACT_APP_ENVIRONMENT=test
RUN npm ci --cache $(npm_config_cache) <-- This is not right but I would like to do something like this
COPY . .
RUN npm run build
#Stage 2
FROM nginx:1.23.2-alpine
WORKDIR /usr/share/nginx/html
RUN rm -rf *
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
So what I would like to do is use the cache task's output in the next step of my azure-pipeline, where I build my Docker image from the Dockerfile provided above. In my Dockerfile I have the line RUN npm ci --cache $(npm_config_cache), and somehow I would like the build to reach that cache. Is that even possible? I can't really figure it out, and maybe I'm totally wrong about my approach?
Many thanks!
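One possible direction, not from the original question: since DOCKER_BUILDKIT is already set to 1, a BuildKit cache mount inside the Dockerfile can persist the npm cache between builds on the same builder, which sidesteps passing the pipeline cache path into the Docker task (--cache-from expects an image reference, not a directory). A sketch, assuming Docker with BuildKit support:
# syntax=docker/dockerfile:1
FROM node:19-alpine as build
WORKDIR /app
COPY package.json package-lock.json ./
ENV REACT_APP_ENVIRONMENT=test
# The cache mount keeps /root/.npm between builds so npm ci can reuse downloaded packages.
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build
Note that on Microsoft-hosted agents the builder's local cache is discarded along with the VM, so this helps most on self-hosted agents or with a persistent builder.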

After adding volumes to docker-compose, changes are not being picked up for frontend files

So I have this working as expected with Flask, where I used...
volumes:
  - ./api:/app
And any files that I change in the API are picked up by the running session. I'd like to do the same for the frontend code.
For Node/nginx, I used the configuration below. The only way for file changes to be picked up is if I rebuild. I'd like file changes to be picked up as they are for Python, but I'm a bit stuck on why a similar setup is not working for the src files. Does anyone know why this might be happening?
local path structure
public\
src\
Dockerfile.client
docker-compose.yml
Dockerfile...
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build
FROM nginx:alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/nginx.conf
docker-compose
client:
  build:
    context: .
    dockerfile: Dockerfile.client
  volumes:
    - ./src:/src
  restart: always
  ports:
    - "80:80"
  depends_on:
    - api
This is happening because you are building the application.
...
RUN yarn build
...
and then serving your build folder:
FROM nginx:alpine
COPY --from=build-step /app/build /usr/share/nginx/html
I believe what you are looking for is live reloading. You can find a good example here.
But basically what you need is a Dockerfile like this:
# Dockerfile
# Pull official Node.js image from Docker Hub
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Expose container port 3000
EXPOSE 3000
# Run "start" script in package.json
CMD ["npm", "start"]
your npm start script:
"start": "nodemon -L server/index.js"
and your volume:
volumes:
- ./api:/usr/src/app/serve
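Putting it together, a matching docker-compose service might look roughly like this (a sketch; the service name, port, and paths are assumptions based on the snippets above):
client:
  build:
    context: .
    dockerfile: Dockerfile.client
  command: npm start
  volumes:
    - .:/usr/src/app              # bind-mount the source so nodemon picks up changes
    - /usr/src/app/node_modules   # keep the dependencies installed in the image
  ports:
    - "3000:3000"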

Do I need to add node_modules to .gcloudignore when running "gcloud builds submit ./project-folder"?

This is my project structure:
project/
node_modules/
src/
.gcloudignore
cloudbuild.yaml
Dockerfile
package.json
Here is how I'm building it:
gcloud builds submit ./project --config=./project/cloudbuild.yaml --project=$PROJECT_ID // AND SOME SUBSTITUTIONS
This is my cloudbuild.yaml file:
steps:
# BUILD IMAGE
- name: "gcr.io/cloud-builders/docker"
  args:
    - "build"
    - "--tag"
    - "gcr.io/$PROJECT_ID/$_SERVICE_NAME:$_TAG_NAME"
    - "."
  timeout: 180s
# PUSH IMAGE TO REGISTRY
- name: "gcr.io/cloud-builders/docker"
  args:
    - "push"
    - "gcr.io/$PROJECT_ID/$_SERVICE_NAME:$_TAG_NAME"
  timeout: 180s
# DEPLOY CONTAINER WITH GCLOUD
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
    - "run"
    - "deploy"
    - "$_SERVICE_NAME"
    - "--image=gcr.io/$PROJECT_ID/$_SERVICE_NAME:$_TAG_NAME"
    - "--platform=managed"
    - "--region=$_REGION"
    - "--port=8080"
    - "--allow-unauthenticated"
  timeout: 180s
# DOCKER IMAGES TO BE PUSHED TO CONTAINER REGISTRY
images:
  - "gcr.io/$PROJECT_ID/$_SERVICE_NAME:$_TAG_NAME"
And here is my Dockerfile:
FROM node:12-slim
WORKDIR /
COPY ./package.json ./package.json
COPY ./package-lock.json ./package-lock.json
COPY ./src ./src
RUN npm ci
From my configuration file, since nothing is being told to copy the node_modules folder, it seems unnecessary to add node_modules to .gcloudignore. But is it?
I'm asking this because I saw this answer that said:
When you run gcloud builds submit... you provide some source code and either a Dockerfile or a configuration file. The former is a simple case of the second, a configuration file containing a single step that runs docker build....
Configuration files (YAML) list a series of container images with parameters that are run in series. Initially Cloud Build copies a designated source (can be the current directory) to a Compute Engine VM (created by the service) as a directory (that's automatically mounted into each container) as /workspace.
If it copies the source, will it copy node_modules as well? Should I add it to .gcloudignore or is it not necessary?
Yes, you can skip the node_modules folder, because you don't use it in your build (and it's huge and slow to upload). Your npm ci command downloads the dependencies during the build, so add node_modules to your .gcloudignore (and to .gitignore as well).
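For reference, a minimal .gcloudignore along those lines could look like this (a sketch; the entries other than node_modules/ mirror what gcloud generates by default):
.gcloudignore
.git
.gitignore
node_modules/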

Install Node dependencies in Debug Container

I am currently setting up a Docker container that will be used to Debug a NodeJS application. This container needs to support live-reloading (using nodemon) and needs to be a Linux container (my workstation is a Windows machine).
My current setup is the following:
Dockerfile.debug
FROM node:current-alpine
VOLUME /app
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production --registry=http://172.16.102.123:8182/repository/npm/
RUN npm install -g nodemon
ENV NODE_ENV=test
EXPOSE 8000
EXPOSE 9229
CMD [ "nodemon", "--inspect=0.0.0.0:9229", "--ignore", "dist/test/**/*.js", "dist/index.js" ]
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.debug
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 8000:8000
Everything works fine except the dependencies, because some of them are platform specific. That means it is not possible to simply mount the node_modules directory into the container (like I do with the rest of the codebase). I tried setting up my files so that the dependencies are different for each platform, but I either end up with an empty node_modules directory or with the node_modules directory from the host (the current setup gives me an empty directory). Does anybody know how to fix my problem? I have looked at other solutions (like this one), but they did not work.
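One thing that might be worth trying (a sketch, under the assumption that the early VOLUME /app instruction is what leaves the directory empty, since changes made to a declared volume path during the build can be discarded): drop the VOLUME line and let the anonymous /app/node_modules volume from docker-compose.yml preserve the dependencies baked into the image.
FROM node:current-alpine
WORKDIR /app
COPY package*.json ./
# Install dependencies into the image; the anonymous volume declared in
# docker-compose.yml (/app/node_modules) shadows the bind mount at runtime.
RUN npm ci --only=production --registry=http://172.16.102.123:8182/repository/npm/
RUN npm install -g nodemon
ENV NODE_ENV=test
EXPOSE 8000
EXPOSE 9229
CMD [ "nodemon", "--inspect=0.0.0.0:9229", "--ignore", "dist/test/**/*.js", "dist/index.js" ]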

Handling node modules with docker-compose

I am building a set of connected node services using docker-compose and can't figure out the best way to handle node modules. Here's what should happen in a perfect world:
Full install of node_modules in each container happens on initial build via each service's Dockerfile
Node modules are cached after the initial load -- i.e. functionality so that npm only installs when package.json has changed
There is a clear method for installing npm modules -- whether it needs to be rebuilt or there is an easier way
Right now, whenever I npm install --save some-module and subsequently run docker-compose build or docker-compose up --build, I end up with the module not actually being installed.
Here is one of the Dockerfiles
FROM node:latest
# Create app directory
WORKDIR /home/app/api-gateway
# Install app dependencies (and cache if package.json is unchanged)
COPY package.json .
RUN npm install
# Bundle app source
COPY . .
# Run the start command
CMD [ "npm", "dev" ]
and here is the docker-compose.yml
version: '3'
services:
  users-db:
    container_name: users-db
    build: ./users-db
    ports:
      - '27018:27017'
    healthcheck:
      test: 'exit 0'
  api-gateway:
    container_name: api-gateway
    build: ./api-gateway
    command: npm run dev
    volumes:
      - './api-gateway:/home/app/api-gateway'
      - /home/app/api-gateway/node_modules
    ports:
      - '3000:3000'
    depends_on:
      - users-db
    links:
      - users-db
It looks like this line might be overwriting your node_modules directory:
# Bundle app source
COPY . .
If you ran npm install on your host machine before running docker build to create the image, you have a node_modules directory on your host machine that is being copied into your container.
What I like to do to address this problem is to copy only the individual code directories and files, e.g.:
# Copy each directory and file
COPY ./src ./src
COPY ./index.js ./index.js
If you have a lot of files and directories this can get cumbersome, so another method would be to add node_modules to your .dockerignore file. This way it gets ignored by Docker during the build.
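A minimal .dockerignore for that approach could be as short as this (a sketch):
node_modules
npm-debug.log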
