gcloud App Engine Flexible Strangeness with Docker and Babel - node.js

I've been deploying a server-side node application to a custom App Engine runtime for a few months without any problems. The only half-interesting thing about it is that I run Babel against the source when I build the container.
In the last few weeks this has been failing intermittently, with an error to this effect in the remote build log:
import * as deps from './AppFactory';
SyntaxError: Unexpected token import
This led me to believe that the Babel transpilation wasn't happening, though the gcloud CLI indicates it is:
> node_modules/babel-cli/bin/babel.js src/ -d dist/
src/AppFactory.js -> dist/AppFactory.js
src/Ddl.js -> dist/Ddl.js
src/Helpers.js -> dist/Helpers.js
src/MemoryResolver.js -> dist/MemoryResolver.js
src/Mysql.js -> dist/Mysql.js
src/Schema.js -> dist/Schema.js
src/index.js -> dist/index.js
---> 0282c805d5c9
In desperation I cat out the dist/index.js file in the Dockerfile. When I do, I see that indeed no transpilation has occurred.
When I create a docker image locally, everything works perfectly.
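The npm run deploy step referenced in the Dockerfile below isn't shown in the post; judging from the build log above, the relevant package.json scripts presumably look something like this (a hypothetical reconstruction; the start script in particular is a guess):
"scripts": {
  "deploy": "node_modules/babel-cli/bin/babel.js src/ -d dist/",
  "start": "node dist/index.js"
}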
My Dockerfile follows:
# Set the base image to the App Engine Node.js runtime
FROM gcr.io/google_appengine/nodejs:latest
ENV NODE_ENV production
# File Author / Maintainer
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /src && cp -a /tmp/node_modules /src/
# Define working directory
WORKDIR /src
ADD . /src
RUN npm run deploy
RUN cat /src/dist/index.js
CMD ["npm", "start"]
Below is my .babelrc file:
{
  "presets": [
    "es2015"
  ]
}
And my vanilla yaml file:
service: metrics-api-test
runtime: custom
env: flex
env_variables:
  NODE_ENV: 'production'
  NODEPORT: '8080'
beta_settings:
  cloud_sql_instances: pwc-sales-demos:us-east1:pawc-sales-demos-sql
I've been trying all sorts of variations with babel-register and babel-node. They all work perfectly when I build a local Docker image. They all fail when I deploy to App Engine.
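(For context, the babel-register variant amounts to compiling on require instead of prebuilding; roughly something like this at the entry point, sketched rather than taken from the post:)
// bootstrap entry point (sketch): hooks require() so subsequent modules are transpiled on the fly via .babelrc
require('babel-register');
require('./src/index');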
I posted this a few months ago and the issue is starting to plague me again. It started off as an intermittent problem and now it happens every time. It happens across services and even on different gcloud projects.
Any insight into this gets my appreciation and 150 points.

So finally getting back to this; it was completely my fault.
I had thought that I had moved all the Babel dependencies into the runtime dependencies stanza of package.json, like so:
"dependencies": {
"babel-cli": "^6.24.1",
"babel-preset-es2015": "^6.24.1"....
But I must have not. All works perfectly with the above and this Dockerfile:
FROM gcr.io/google_appengine/nodejs:latest
ENV NODE_ENV production
# File Author / Maintainer
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /src && cp -a /tmp/node_modules /src/
# Define working directory
WORKDIR /src
ADD . /src
RUN node_modules/babel-cli/bin/babel.js src/ -d dist/
RUN cat dist/index.js
CMD ["npm", "start"]
No more manually building the file!
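The underlying reason this matters: the Dockerfile sets ENV NODE_ENV production, and when NODE_ENV is production, npm install skips devDependencies. Any build tooling the container itself runs (babel-cli, babel-preset-es2015) therefore has to live under dependencies, as above. A quick sanity check that could be dropped into the Dockerfile (a sketch, not part of the original):
RUN npm ls babel-cli babel-preset-es2015 || true   # confirm the Babel packages actually landed in node_modules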

Related

Start React server container without Nginx [closed]

I need advice from someone who really understands React app dockerization.
I will be as brief as I can.
We have 3 containers now -- DB, backend, frontend.
Our Frontend Dockerfile is below:
FROM node:16-buster-slim as builder
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# add app
COPY . ./
RUN yarn build
FROM nginx:1.21.1-alpine
RUN rm -rf /etc/nginx/conf.d
COPY localconf /etc/nginx
COPY --from=builder /app/build/ /usr/share/nginx/html/
WORKDIR /usr/share/nginx/html
COPY ./env-config.* ./
COPY ./env.sh .
RUN chmod +x env.sh
RUN apk add --no-cache bash openssl
RUN chmod +x env.sh
CMD ["/bin/bash", "-c", "env && /usr/share/nginx/html/env.sh && nginx -g \"daemon off;\""]
The whole problem is that developers can't track changes in real time with this container. They run the frontend on the host machine with yarn build && yarn start, and when the changes are stable and ready, they build the container.
Now I need help investigating why the new container is not working.
I have reduced the Dockerfile to the following:
FROM node:16-buster-slim
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# add app
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
And to watch changes in real time I added volume at docker-compose.yaml file:
frontend:
  container_name: frontend
  build:
    context: ../../client/project
    dockerfile: local.dockerfile
  ports:
    - "80:80"
    - "443:443"
  command: yarn start
  env_file:
    - ../../client/project/.env
  volumes:
    - ../../client/project:/app/
    - /app/node_modules
  restart: on-failure
  depends_on:
    - backend
I don't know why, but the application doesn't respond to any request and refuses every connection on localhost:443, whereas it works fine with Nginx.
So, please, could you share some best practices for Dockerizing a React app without Nginx so that changes can be checked in real time?
I know for sure this is not an unusual task, but I didn't find anything in the docs or on Google.
When the "yarn start" command is launched, it refers to your "package.json" -> "script" -> "start"
I suggest you install the "nodemon" package like this:
yarn add nodemon
nodemon allows the server to be restarted each time the files are updated
Add the following script to your package.json:
{
  "scripts": {
    "dev": "nodemon ./bin/www.js",
    ...
  }
  ...
}
Replace "./bin/www.js" with the entry point of your application.
In your Dockerfile, change
CMD ["yarn", "start"]
to
CMD ["yarn", "dev"]
Which would give:
FROM node:16-buster-slim
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# add app
COPY . ./
RUN yarn build
CMD ["yarn", "dev"]

Running Jest in --watch mode in a Docker container in a subdirectory of git root

I have built a web project with the following file structure:
-root_folder/
  -docker-compose.yml
  -.git/
  -backend/
    -.dockerignore
    -docker/
      -dev.dockerfile
  -frontend/
    -.dockerignore
    -docker/
      -dev.dockerfile
I run the frontend app (Angular) in a Docker container. I also run the backend app (ExpressJS) in another container but the backend is not relevant to my problem.
I have mounted the volume ./frontend to /app in the container in order to allow hot reloads.
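(The mount in docker-compose.yml looks roughly like this; a sketch, since the original file isn't included in the question:)
frontend:
  build:
    context: frontend
    dockerfile: docker/dev.dockerfile
  volumes:
    - ./frontend:/app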
This configuration works to run Angular just fine. However, when running Jest with the --watch flag it gives the error --watch is not supported without git/hg, please use --watchAll
I went back into the dockerfile and added:
RUN apk update -q && \
apk add -q git
But this doesn't fix the problem. From all the research I've done, it seems that the issue is that Jest watch mode uses git somehow to detect changes, but my git folder is not in the 'frontend' subdirectory.
I tried to modify my container to copy all the files to /app/frontend instead and then also copy in and mount .git folder to /app/.git but that had no effect.
I do not want to run Jest with --watchAll (but I tested it and that does run properly). Any suggestions?
EDIT Answered my own question. I was on the right track with mounting the .git folder. The missing step was setting GIT_WORK_TREE and GIT_DIR environment variables.
I was able to get this working exactly as I wanted. The problem is that in order for Jest to run in watch mode, it does so by looking at the changed files according to Git. I was able to get this functionality working by setting up the directory structure on the container similar to my host system with:
-app/
  -.git/
  -frontend/
Then, most importantly, setting the GIT_WORK_TREE and GIT_DIR environment variables.
Here is my dockerfile:
FROM node:alpine3.11 as dev
WORKDIR /app/frontend
# To use packages in CLI without global install
ENV PATH /app/frontend/node_modules/.bin:$PATH
COPY . .
RUN npm install --silent
EXPOSE 4200
CMD ["/bin/sh", "-c", "npm run start:dev"]
##########################################################
FROM dev as unit-test
ENV GIT_WORK_TREE=/app/frontend GIT_DIR=/app/.git
RUN apk update && \
apk add git
CMD ["/bin/sh", "-c", "jest --watch"]
Without the env vars set, Jest continues to give the error that it can't work without git. I'm assuming it's because git init was never run, and git probably does some other things behind the scenes that copying in the .git folder doesn't accomplish.
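For intuition, the same mechanism can be reproduced with plain git: with GIT_DIR and GIT_WORK_TREE exported, git resolves the repository even though the current directory contains no .git folder, which is exactly what Jest's change detection needs (a sketch using the container paths above):
export GIT_DIR=/app/.git
export GIT_WORK_TREE=/app/frontend
cd /app/frontend && git status    # works despite there being no .git folder in /app/frontend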
Here is the docker-compose I used for the test service in case it helps someone:
f-test-unit:
container_name: "f-test-unit"
build:
context: "frontend"
dockerfile: "docker/dev.dockerfile"
target: "unit-test"
volumes:
- "./frontend:/app/frontend"
- "/app/frontend/node_modules/"
- "./.git:/app/.git"
tty: true
stdin_open: false
Side note: if you add the tty and stdin_open lines, it allows for your logs in the docker container to be colorized, which is very useful with Jest.

How can I run an npm command in a docker container?

I am trying to run an Angular application in development mode inside a Docker container. It builds correctly with docker-compose build, but when I try to bring the container up I get the error below:
ERROR: for sypgod Cannot start service sypgod: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"npm\": executable file not found in $PATH
The real problem is that it doesn't recognize the command npm serve, but why??
The setup would be below:
Docker container (Nginx reverse proxy -> Angular running on port 4000)
I know there are better ways of deploying this, but at the moment I need this setup for personal reasons.
Dockerfile:
FROM node:10.9
COPY package.json package-lock.json ./
RUN npm ci && mkdir /angular && mv ./node_modules ./angular
WORKDIR /angular
RUN npm install -g @angular/cli
COPY . .
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
CMD ["npm", "serve", "--port 4000"]
Nginx server config:
server {
    listen 80;
    server_name sypgod;

    location / {
        proxy_read_timeout 5m;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:4000/;
    }
}
Docker Compose file (the important part, where I have the problem):
sypgod: # The name of the service
  container_name: sypgod # Container name
  build:
    context: ../angular
    dockerfile: Dockerfile # Location of our Dockerfile
The image that's finally getting run is this:
FROM nginx:alpine
COPY toborFront.conf /etc/nginx/conf.d/
EXPOSE 8080
CMD ["npm", "serve", "--port 4000"]
The first stage doesn't have any effect (you could COPY --from=... files out of it), and if there are multiple CMDs, only the last one has an effect. Since you're running this in a plain nginx image, there's no npm command, leading to the error you see.
I'd recommend using Node on the host for a live development environment. When you've built and tested your application and are looking to deploy it, then use Docker if that's appropriate. In your Dockerfile, run ng build in the first stage to compile the application to static files, add a COPY --from=... in the second stage to get the built application into the Nginx image, and delete all the CMD lines (nginx has an appropriate default CMD). @VikramJakhar's answer has a more complete Dockerfile showing this.
It looks like you might be trying to run both Nginx and the Angular development server in Docker. If that's your goal, you need to run these in two separate containers. To do this (a minimal compose sketch follows the list):
Split this Dockerfile into two. Put the CMD ["npm", "serve"] line at the end of the first (Angular-only) Dockerfile.
Add a second block in the docker-compose.yml file to run the second container. The backend npm serve container doesn't need to publish ports:.
Change the host name of the backend server in the Nginx config from localhost to the Docker Compose name of the other container.
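A minimal sketch of that two-container layout; the service names, file names, and ports here are assumptions, not taken from the question:
# docker-compose.yml (sketch)
version: "3"
services:
  angular:
    build:
      context: ../angular          # Dockerfile ending in CMD ["npm", "serve", "--port", "4000"]
  proxy:
    build:
      context: ../nginx            # image with toborFront.conf copied in
    ports:
      - "80:80"
    depends_on:
      - angular
With this layout, the proxy_pass line in toborFront.conf points at http://angular:4000/ instead of localhost.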
It would appear that npm can't be accessed from the container.
Try defining where it should execute from:
docker run -v "$PWD":/usr/src/app -w /usr/src/app node:10.9 npm serve --port 4000
source: https://gist.github.com/ArtemGordinsky/b79ea473e8bc6f67943b
Also make sure that npm is installed on the computer running the docker container.
You can do something like below
### STAGE 1: Build ###
# We label our stage as ‘builder’
FROM node:alpine as builder
RUN apk --no-cache --virtual build-dependencies add \
git \
python \
make \
g++
RUN mkdir -p /ng-app/dist
WORKDIR /ng-app
COPY package.json package-lock.json ./
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm install
COPY . .
## Build the angular app in production mode and store the artifacts in dist folder
RUN npm run ng build -- --prod --output-path=dist
### STAGE 2: Setup ###
FROM nginx:1.14.1-alpine
## Copy our default nginx config
COPY toborFront.conf /etc/nginx/conf.d/
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From ‘builder’ stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
If you have Portainer.io installed for managing your Docker setup, you can open the console for a particular container from a browser.
This is useful if you want to run a reference command like "npm list" to show what versions of dependencies have been loaded.
I found this useful for diagnosing issues where an update to a dependency had broken something, which worked fine in a test environment, but the docker version had installed newer minor versions which broke the application.
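If Portainer isn't available, the same kind of check can be done from a shell with docker exec, for example:
docker exec -it <container-name> npm list --depth=0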

Handling node modules with docker-compose

I am building a set of connected node services using docker-compose and can't figure out the best way to handle node modules. Here's what should happen in a perfect world:
Full install of node_modules in each container happens on initial build via each service's Dockerfile
Node modules are cached after the initial build -- i.e., npm install only runs again when package.json has changed
There is a clear method for installing npm modules -- whether it needs to be rebuilt or there is an easier way
Right now, whenever I npm install --save some-module and subsequently run docker-compose build or docker-compose up --build, I end up with the module not actually being installed.
Here is one of the Dockerfiles
FROM node:latest
# Create app directory
WORKDIR /home/app/api-gateway
# Install app dependencies (and cache if package.json is unchanged)
COPY package.json .
RUN npm install
# Bundle app source
COPY . .
# Run the start command
CMD [ "npm", "dev" ]
and here is the docker-compose.yml:
version: '3'
services:
  users-db:
    container_name: users-db
    build: ./users-db
    ports:
      - '27018:27017'
    healthcheck:
      test: 'exit 0'
  api-gateway:
    container_name: api-gateway
    build: ./api-gateway
    command: npm run dev
    volumes:
      - './api-gateway:/home/app/api-gateway'
      - /home/app/api-gateway/node_modules
    ports:
      - '3000:3000'
    depends_on:
      - users-db
    links:
      - users-db
It looks like this line might be overwriting your node_modules directory:
# Bundle app source
COPY . .
If you ran npm install on your host machine before running docker build to create the image, you have a node_modules directory on your host machine that is being copied into your container.
What I like to do to address this problem is copy the individual code directories and files only, eg:
# Copy each directory and file
COPY ./src ./src
COPY ./index.js ./index.js
If you have a lot of files and directories this can get cumbersome, so another method would be to add node_modules to your .dockerignore file. This way it gets ignored by Docker during the build.
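A minimal .dockerignore for that approach can be as small as:
node_modules
npm-debug.log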

Production vs Development Docker setup for Node (Express & Mongo) App

I'm attempting to convert a Node app to using Docker but running into a few issues/questions I'm unable to answer.
But for simplicity I've included some very basic example files to keep the question on target. In fact the example below merely links to a Mongo container but doesn't use it in the code to keep it even simpler.
Primarily, what Dockerfile and docker-compose.yml setup is required to successfully use Docker on a Node + Express + Mongo app on both local (OS X) development and for Production builds?
Dockerfile
FROM node:6.3.0
# Create new user to avoid using root - is this correct practice?
RUN useradd --user-group --create-home --shell /bin/false app
COPY package.json /home/app/code/
RUN chown -R app:app /home/app/*
USER app
WORKDIR /home/app/code
# Should this even be set here or use docker-compose instead?
# And should there be:
# - docker-compose.yml setting it to production by default
# - docker-compose.dev.yml setting it to production?
# Or reverse it? (docker-compose.prod.yml instead with default being development?)
# Commenting below out or it will always run as production
#ENV NODE_ENV production
RUN npm install
USER root
COPY . /home/app/code
# Running chown to ensure new 'app' user owns files
RUN chown -R app:app /home/app/*
USER app
EXPOSE 3000
# What CMD should be here to ensure development versus production is simple?
# Development - Restart server and refresh browser on file changes
# Production - Ensure uptime.
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build: .
# I would normally use a .env file but for this example will set explicitly
# env_file: .env
environment:
- NODE_ENV=production
volumes:
- ./:/home/app/code
- /home/app/code/node_modules
ports:
- "3000:3000"
links:
- mongo
mongo:
image: mongo
ports:
- "27017:27017"
docker-compose.dev.yml
version: "2"
services:
web:
# I would normally use a .env file but for this example will set explicitly
# env_file: .env
environment:
- NODE_ENV=development
package.json
{
  "name": "docker-node-test",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "nodemon app.js"
  },
  "dependencies": {
    "express": "^4.14.0",
    "mongoose": "^4.6.1",
    "nodemon": "^1.10.2"
  },
  "devDependencies": {
    "mocha": "^3.0.2"
  }
}
1. How to handle the different NODE_ENV (dev, production, staging)?
This is my primary question and conundrum.
In the example I’ve used, NODE_ENV is set to production in the Dockerfile, and there are two docker-compose files:
docker-compose.yml sets the defaults, including NODE_ENV=production
docker-compose.dev.yml overrides the NODE_ENV and sets it to development
1.1. Is it advised to rather switch that order around and have development settings as the default and instead use a docker-compose.prod.yml for overrides?
1.2. How do you handle the node_modules directory?
I'm really not sure how to handle the node_modules directory at all between local development needs and then running for Production. (Perhaps I have a fundamental misunderstanding though?)
Edit:
I added a .dockerignore file and included the node_modules directory as a line. This ensures the node_modules dir is ignored during the copy, etc.
I then edited the docker-compose.yml to include the node_modules as a volume.
volumes:
  - ./:/home/app/code
  - /home/app/code/node_modules
I have also put the above change into the full docker-compose.yml at the start of the question for completeness.
Is this even a solution?
Doing the above ensured my local development npm install included dev-dependencies, while docker-compose up pulls in the production-only node modules inside the Docker container (since the default docker-compose.yml sets NODE_ENV=production).
But it seems the NODE_ENV set inside the two docker-compose files isn't taken into account when running docker-compose -f docker-compose.yml build :/ I expected it to send NODE_ENV=production, but ALL of the node_modules are re-installed (including the dev-dependencies).
Do we instead use 2 Dockerfiles? (Dockerfile for Prod; Dockerfile.dev for local development)
(I feel like that is a fundamental piece of logic/knowledge I am missing in the setup)
2. Nodemon vs PM2
How would one use nodemon on the local development machine but PM2 on the Production build?
3. Should you create a user inside the docker containers and then set that user to be used in the Dockerfile?
It uses root user by default but I’ve not seen many articles talking about creating a dedicated user within the container. Am I correct in what I’ve done for security? I certainly wouldn’t feel comfortable running an app as root on a non-Docker build.
Thank you for reading. Any and all assistance appreciated :)
I can share my experience, not saying it is the best solution.
I have a Dockerfile and a Dockerfile.dev. In Dockerfile.dev I install nodemon and run the app with it; NODE_ENV doesn't seem to have any impact. As for users, you should not use root, for security reasons. My dev version:
FROM node:16.14.0-alpine3.15
ENV NODE_ENV=development
# install missing libs and python3
RUN apk update && apk add -U unzip zip curl && rm -rf \
    /var/cache/apk/* && npm i node-gyp@8.4.1 nodemon@2.0.15 -g
WORKDIR /node
COPY package.json package-lock.json ./
RUN mkdir /app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
# local development
CMD ["nodemon", "server.js" ]
in Production I run the app with node:
FROM node:16.14.0-alpine
ENV NODE_ENV=production
# install missing libs and python3
RUN apk update && apk add -U unzip zip curl && rm -rf /var/cache/apk/* \
    && npm i node-gyp@8.4.1 -g
WORKDIR /node
COPY package.json package-lock.json ./
RUN mkdir /app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
CMD ["node", "server.js" ]
I have two separate versions of docker-compose. In docker-compose.dev.yml I set the dockerfile to dockerfile.dev:
app:
  depends_on:
    mongodb:
      condition: service_healthy
  build:
    context: .
    dockerfile: Dockerfile.dev
  healthcheck:
    test: [ "CMD", "curl", "-f", "http://localhost:5000" ]
    interval: 180s
    timeout: 10s
    retries: 5
  restart: always
  env_file: ./.env
  ports:
    - "5000:5000"
  environment:
    ...
  volumes:
    - /node/app/node_modules
In the production docker-compose.yml, dockerfile is set to Dockerfile.
Nodemon vs PM2: I used pm2 before dockerizing the app. I cannot see any benefit of having it in Docker; the restart: always policy takes care of restarting on error. You would arguably be better off with restart: unless-stopped, but I prefer the always option. Initially I used nodemon in production too, so that the app reflected the volume changes, but I dropped this because the restart didn't work well (it kept waiting for code changes...).
Users: you can see it in my example. I took a course on Docker + Node.js where setting a non-root user was recommended, so I do it and have had no problems.
I hope I explained well enough and it can help you. Good luck.
Either way works; it doesn't matter too much. I prefer to have development settings as the default and then override them with production settings.
I don't commit node_modules to my repo; instead I run npm install in my Dockerfile.
You can set rules in the Dockerfile for which variant to build based on build settings.
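For example, a build argument is one common way to do that, and it also explains the behaviour seen in section 1.2: environment: in docker-compose only applies to the running container, while docker-compose build only sees ARG values. A rough sketch, not taken from the answer:
# docker-compose.yml (sketch)
services:
  web:
    build:
      context: .
      args:
        - NODE_ENV=production

# Dockerfile (sketch)
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
RUN npm install    # skips devDependencies when NODE_ENV=production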
It is typical to build everything as root and run the main program as root. You can set up other users, but for most uses it is not needed, since the idea of Docker containers is to isolate each process in its own container.
