Creating a React application for production with a Docker build? - node.js

I am creating a React application using docker build with the following Dockerfile:
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
RUN npm install react-scripts -g
RUN npm install --save @fortawesome/fontawesome-free
RUN apk add nano
RUN apk add vim
COPY . ./
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I believe the Dockerfile itself is not of great importance here, however. My source code contains a master configuration file that I want to leave out of the Docker image, so that I can deploy my React app easily. This causes a compilation error during the RUN npm run build step, since the compiler cannot find a file that is referenced by another file. For development versions this was not an issue, since npm start is not that sensitive.
I would add the configuration file as a Docker volume in the final application, so the code would be able to find it without problems. I am just wondering how to approach a situation like this, since I have not run into it before.
Also feel free to comment on or optimize my Dockerfile, as I am unsure of e.g. whether Nginx is the way to go in these production builds for website front-end applications.

If your app currently requires the configuration file, it's akin to "hard-coding" the values into it at build time, as you've noticed. If you do need to be able to dynamically swap in another configuration file at runtime, you would need to use e.g. fetch() to load it, not bundle it (as require does).
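A minimal sketch of the runtime approach, assuming the config is served as config.json next to index.html (the filename and shape are placeholders):
// Fetch the configuration at runtime instead of bundling it
fetch('/config.json')
  .then((res) => res.json())
  .then((config) => {
    // stash it somewhere the rest of the app can reach, e.g. a global
    window.appConfig = config;
  });
With the Nginx image from your Dockerfile, you could then mount the file at run time, e.g. docker run -v $(pwd)/config.json:/usr/share/nginx/html/config.json ..., and the app would pick it up without a rebuild.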
If configuring things at build-time is fine, then I'd also suggest looking at CRA custom environment variables; you could then inject the suitable values as environment variables at build time.
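For example (REACT_APP_API_URL is a made-up name; CRA only picks up variables prefixed with REACT_APP_, and it inlines them at build time): build with REACT_APP_API_URL=https://api.example.com npm run build, then read the value in your source:
const apiUrl = process.env.REACT_APP_API_URL;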
Beyond that, if you're looking for a critique of your Dockerfile, from one Aarni to another:
Your package.json is broken if you need to do anything beyond npm ci or yarn during a build to install stuff. react-scripts should be a dev dependency and Font Awesome should be a regular dependency.
You don't need nano and vim in the temporary container, and even if you did, it'd be better to apk add them in a single step.
You shouldn't need to modify the PATH in the build container.
Using Nginx is absolutely fine.
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Here is a sample of my React Dockerfile. Maybe you can use this if you want to optimize.
PS: I am running it on Kubernetes.
# ############################# Stage 0, Build the app #####################
# pull official base image
FROM node:13.12.0-alpine as build-stage
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package*.json ./
RUN npm install
# add app
COPY . ./
#build for production
RUN npm run-script build
# #### Stage 1, push the compressed built app into nginx ####
FROM nginx:1.17
COPY --from=build-stage /app/build/ /usr/share/nginx/html

Related

SvelteKit change port to different one

I want to change the port in the production build of node-adapter. I use port 3000 internally for a different service, so I need the Svelte app to run on port 3001.
My Dockerfile:
# Container setup ---------
FROM node:16-alpine as build
RUN mkdir /appbuild
COPY . /appbuild
WORKDIR /appbuild
# clean install all dependencies
RUN npm ci
# remove potential security issues
RUN npm audit fix
RUN npm run build
# Container setup ---------
FROM node:16-alpine
WORKDIR /app
# copy dependency list
COPY --from=build /appbuild/package*.json ./
# clean install dependencies, no devDependencies, no prepare script
RUN npm ci --production --ignore-scripts
# remove potential security issues
RUN npm audit fix
# copy built SvelteKit app to /app
COPY --from=build /appbuild/build ./
CMD ["node", "./index.js"]
I found that if I start the app with PORT=3001 node build from the root (after calling npm run build) it works, but in the Dockerfile a CMD must be provided and PORT cannot be passed.
How to change default port to another one?
Have you tried using ENV?
ENV PORT=3001
CMD ["node", "./index.js"]

How to write Dockerfile to serve Angular app and Node server

My Angular app runs fine locally but I haven't figured out how to do the same with a Docker image. Outside of Docker, the UI runs on port 4200 with ng serve and the API serves data from 8080 with node server.js.
My Dockerfile is set up so it can get the Node server running and available on 8080, but the Angular UI won't run. I've tried several options but right now I have:
FROM node:14.17.3
COPY package*.json ./
EXPOSE 4200 8080
RUN npm install -g @angular/cli
RUN npm install --only=production
COPY . ./
RUN ng serve
CMD ["node", "server.js"]
It fails on ng serve with the error: The serve command requires to be run in an Angular project, but a project definition could not be found. I do have an angular.json file in the root. I'm not sure what I am missing. I read that ng serve shouldn't be used in this situation but the alternatives I've seen haven't made a difference.
EDIT 8/10/21: Based on the answers here and a bunch of research, this will display the UI with nginx:
FROM node:12.16.1-alpine as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# RUN npm install -g @angular/cli
# RUN npm run build --prod
FROM nginx:1.15.8-alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
# CMD ["node", "server.js"]
However, the npm run build step fails because ng is not found despite installing @angular/cli. I have to run this manually to build the dist folder. And I can't run node server.js alongside this. It seems I can only get the front end or the back end, not both.
Use the command below at the end to run ng serve with host 0.0.0.0, which means it listens on all interfaces.
CMD ["ng","serve","--host", "0.0.0.0"]
But I would suggest using Nginx.
Steps to follow:
Create a Dockerfile under the root of your project and add the code below. It takes care of downloading the dependencies, building the Angular project, and deploying it to an Nginx server.
#Download Node Alpine image
FROM node:12.16.1-alpine AS build
#Setup the working directory
WORKDIR /usr/src/ng-app
#Copy package.json
COPY package.json package-lock.json ./
#Install dependencies
RUN npm install
#Copy other files and folder to working directory
COPY . .
#Build Angular application in PROD mode
RUN npm run build
#Download NGINX Image
FROM nginx:1.15.8-alpine
#Copy built angular files to NGINX HTML folder
COPY --from=build /usr/src/ng-app/dist/pokemon-app/ /usr/share/nginx/html
Build docker image:
docker build -t my-ng-app .
Spinning up the container with the command below publishes your app (container port 80) on host port 3000:
docker run -dp 3000:80 my-ng-app
Check out my article on this - https://askudhay.com/how-to-dockerize-an-angular-application, and please let me know if you still have any questions.
I figured out a solution that runs the full application. Most answers here focus on running the front end (the nginx suggestion was helpful). A single Docker container seemed to be able to serve the UI or the server, but not both. I came across Docker Compose, which will run the front and back ends as separate images. My solution:
Dockerfile.ui
# Define node version
FROM node:12.16.1-alpine as build
# Define container directory
WORKDIR /usr/src/app
# Copy package*.json for npm install
COPY package*.json ./
# Run npm clean install, including dev dependencies for @angular-devkit
RUN npm ci
# Run npm install @angular/cli
RUN npm install -g @angular/cli
# Copy all files
COPY . .
# Run ng build through npm to create dist folder
RUN npm run build --prod
# Define nginx for front-end server
FROM nginx:1.15.8-alpine
# Copy dist from ng build to nginx html folder
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
Dockerfile.server
# Define node version
FROM node:12.16.1-alpine
# Define container directory
WORKDIR /usr/src/app
# Copy package*.json for npm install
COPY package*.json ./
# Run npm clean install, prod dependencies only
RUN npm ci --only=production
# Copy all files
COPY . .
# Expose port 8080 for server
EXPOSE 8080
# Run "node server/run.js"
CMD ["node", "server/run.js"]
docker-compose.yml
version: '3'
services:
  server:
    build:
      context: ./
      dockerfile: Dockerfile.server
    container_name: server
    ports:
      - 8080:8080
  ui:
    build:
      context: ./
      dockerfile: Dockerfile.ui
    container_name: ui
    ports:
      - 4200:80
    links:
      - server
docker-compose up will build images for the server and UI and deploy them concurrently. I also resolved the ng not found errors by installing dev dependencies, particularly @angular-devkit/build-angular.
This tutorial helped me figure out Docker Compose: https://wkrzywiec.medium.com/how-to-run-database-backend-and-frontend-in-a-single-click-with-docker-compose-4bcda66f6de
I think updating this line
COPY . ./
to
COPY . ./app
should solve that error. It appears that the node "volume" is in that folder.
Otherwise setting the workdir also seems like a solution:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
...
Source: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/

Docker app won't work but application works if not in docker

I have built a React and Node app and it works. I am trying to build a Docker image and run it; it compiles, but when I try to access it through the browser it says "This site can't be reached: localhost refused to connect". This is the Dockerfile that I've written, because I think this is the problem:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /client
# add `/app/node_modules/.bin` to $PATH
ENV PATH /client/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
FROM node:12
# Create app directory
WORKDIR ./
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 5000
CMD [ "node", "index.js" ]
First make sure you can reach your server - your app may be running locally, but it doesn't use the same server as the one from your Dockerfile; e.g. npm start starts a local development server.
Remove everything related to the React app stage from your Dockerfile and just make sure you can hit your server on port 5000 and that it serves pages - you can host an index.html with <body>Hello world</body> or something. When you're able to do that, it's just a matter of adding the React app's bundled files to the server's static (public) folder.
Is your node server listening on port 5000?
Typically with a Node.js server you set PORT and HOST env variables that the server reads via process.env.PORT and process.env.HOST.
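For example, a minimal sketch of such a server (your index.js may well differ):
const http = require('http');

// Read the port from the environment, defaulting to 5000, and bind to
// 0.0.0.0 so the server is reachable from outside the container
const port = process.env.PORT || 5000;
http.createServer((req, res) => res.end('Hello world'))
  .listen(port, '0.0.0.0', () => console.log(`listening on ${port}`));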
Other issues
# start app
CMD ["npm", "start"]
npm start is (usually) used to start a local development server while you develop your react app.
Instead of starting the dev server you should run a bundle command that produces static files to be hosted - js, html etc...
Typically npm run build
This should make a folder ./dist or ./build with all the static content you should put on the server (perhaps that's what your /client folder is for?)
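As a sketch, assuming an Express server (your index.js may use something else), serving that bundle could look like this:
const express = require('express');
const path = require('path');

const app = express();
// Serve the bundled front-end copied into ./public (folder name is an assumption)
app.use(express.static(path.join(__dirname, 'public')));
app.listen(process.env.PORT || 5000);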
Some tweaks
# pull official base image
FROM node:13.12.0-alpine AS builder
# set working directory
WORKDIR /client
# Copy source
COPY . .
# add `/app/node_modules/.bin` to $PATH (you probably don't need this)
ENV PATH /client/node_modules/.bin:$PATH
# install app dependencies
RUN npm install --silent
# you can move react-scripts to regular dependencies instead of installing it globally
RUN npm install react-scripts@3.4.1 -g --silent
# Build bundle
RUN npm run build
# Next stage
FROM node:12
# Create app directory
WORKDIR ./
# Copy source
COPY . .
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Copy js bundle
COPY --from=builder /client/dist ./public
EXPOSE 5000
CMD [ "node", "index.js" ]
This might need a little more tweaking but that's the general idea.
You don't need to install react-scripts globally; it can just be a local dependency - it's used by npm start and npm run build to build your app, so you depend on it - it should be part of package.json dependencies.
Adding /client/node_modules/.bin to PATH shouldn't be needed; maybe it's a leftover, or a tell that something else isn't properly set up.
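As a sketch, the relevant part of package.json would then look something like this (version illustrative):
{
  "dependencies": {
    "react-scripts": "3.4.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}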

Docker + Nodejs Getting Error: Cannot find module "for a module that I wrote"

I am Docker beginner.
I was able to implement Docker for my Node.js project, but when I try to pull and run it I am getting the error
Error: Cannot find module 'my_db'
(my_db is a module that I wrote that handles my MySQL functionality).
So I am guessing my modules are not bundled into the Docker image, right?
I moved my modules to a folder named my_node_modules/ so they won't be ignored.
I also modified the Dockerfile as follow:
FROM node:11.10.1
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./my_node_modules/*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD node index.js
What am I missing?
Thanks
I would do something like this. First create a .dockerignore:
.git
node_modules
The above ensures that the node_modules folder is excluded from the actual build context.
You should add any temporary things to your .dockerignore. This will also speed up the actual build, since the build context will be smaller.
In my Dockerfile I would then first copy only package.json and any existing lock file in order to be able to cache this layer:
FROM node:11.10.1
ENV NODE_ENV production
WORKDIR /usr/src/app
# Only copy package* before installing to make better use of cache
COPY package*.json .
RUN npm install --production --silent
# Copy everything
COPY . .
EXPOSE 3000
CMD node index.js
Like I also wrote in my comment, I have no idea why you are doing this mv node_modules ../ - it will move the node_modules directory out of the /usr/src/app folder, which is not what you want.
It would also be nice to see how you are actually including your module.
If your own module resides in the folder my_node_modules/my_db, it will be copied when doing COPY . . in the above Dockerfile. Then in your index.js file you should be able to use the module like this:
const db = require('./my_node_modules/my_db');
The COPY . . step will override everything in the working directory, and copying node_modules from the host is not recommended anyway; it may break the container if the host's native binaries were compiled for Windows while you are using a Linux container.
So it is better to refactor your Dockerfile and install the modules inside Docker instead of copying them from the host.
FROM node:11.10.1
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent
EXPOSE 3000
CMD node index.js
I would also suggest using a .dockerignore:
# add git-ignore syntax here of things you don't want copied into docker image
.git
*Dockerfile*
*docker-compose*
node_modules

docker build + private NPM (+ private docker hub)

I have an application which runs in a Docker container. It requires some private modules from the company's private NPM registry (Sinopia), and accessing these requires user authentication. The Dockerfile is FROM iojs:latest.
I have tried:
1) creating an .npmrc file in the project root; this actually makes no difference and npm seems to ignore it
2) using env variables for NPM_CONFIG_REGISTRY, NPM_CONFIG_USER etc., but the user doesn't get logged in.
Essentially, I seem to have no way of authenticating the user within the docker build process. I was hoping that someone might have run into this problem already (seems like an obvious enough issue) and would have a good way of solving it.
(To top it off, I'm using Automated Builds on Docker Hub (triggered on push) so that our servers can access a private Docker registry with the prebuilt images.)
Are there good ways of either:
1) injecting credentials for NPM at build time (so I don't have to commit credentials to my Dockerfile) OR
2) doing this another way that I haven't thought of
?
I found a somewhat elegant-ish solution: create a base image for your node.js / io.js containers (you/iojs):
log in to your private npm registry with the user you want to use for docker
copy the .npmrc file that this generates
Example .npmrc:
registry=https://npm.mydomain.com/
username=dockerUser
email=docker@mydomain.com
strict-ssl=false
always-auth=true
//npm.mydomain.com/:_authToken="someAuthToken"
create a Dockerfile that copies the .npmrc file appropriately.
Here's my Dockerfile (based on iojs:onbuild):
FROM iojs:2.2.1
MAINTAINER YourSelf
# Exclude the NPM cache from the image
VOLUME /root/.npm
# Create the app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy npm config
COPY .npmrc /root/.npmrc
# Install app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
# Run
CMD [ "npm", "start" ]
Make all your node.js/io.js containers FROM you/iojs and you're good to go.
In 2020 we've got BuildKit available. You don't have to pass secrets via COPY or ENV anymore, as it's not considered safe.
Sample Dockerfile:
# syntax=docker/dockerfile:experimental
FROM node:13-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN --mount=type=ssh --mount=type=secret,id=npmrc,dst=$HOME/.npmrc \
yarn install --production --ignore-optional --frozen-lockfile
# More stuff...
Then, your build command can look like this:
docker build --no-cache --progress=plain --secret id=npmrc,src=/path-to/.npmrc .
For more details, check out: https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information
For those who found this article via Google and are still looking for an alternative that doesn't involve leaving your private npm tokens in your Docker images and containers:
We were able to get this working by doing the npm install prior to the docker build (doing it this way lets you keep your .npmrc outside of your image/container). Once the private modules have been installed locally, you can copy your files across to the image as part of your build:
# Make sure the node_modules contain only the production modules when building this image
COPY . /usr/src/app
You also need to make sure that your .dockerignore file doesn't exclude the node_modules folder.
Once you have the folder copied into your image, the trick is to run npm rebuild instead of npm install. This will rebuild any native dependencies that are affected by differences between your build server and your Docker OS:
FROM nodesource/vivid:LTS
# For application location, default from nodesource is /usr/src/app
# Make sure the node_modules contain only the production modules when building this image
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN npm rebuild
CMD npm start
I would recommend not using a .npmrc file but instead using npm config set. This works like a charm and is much cleaner:
ARG AUTH_TOKEN_PRIVATE_REGISTRY
FROM node:latest
ARG AUTH_TOKEN_PRIVATE_REGISTRY
ENV AUTH_TOKEN_PRIVATE_REGISTRY=${AUTH_TOKEN_PRIVATE_REGISTRY}
WORKDIR /home/usr/app
RUN npm config set @my-scope:registry https://my.private.registry && npm config set '//my.private.registry/:_authToken' ${AUTH_TOKEN_PRIVATE_REGISTRY}
RUN npm ci
CMD ["bash"]
The BuildKit answer is correct, except it runs everything as root, which is considered a bad security practice.
Here's a Dockerfile that works and uses the correct user node that the node Dockerfile sets up. Note the secret mount has the uid parameter set; otherwise it mounts as root, which user node can't read. Note also the COPY commands that chown to the node:node user:group.
FROM node:12-alpine
USER node
WORKDIR /home/node/app
COPY --chown=node:node package*.json ./
RUN --mount=type=secret,id=npm,target=./.npmrc,uid=1000 npm ci
COPY --chown=node:node index.js .
COPY --chown=node:node src ./src
CMD [ "node", "index.js" ]
@paul-s should be the accepted answer now because it's more recent, IMO. Just as a complement: you mentioned you're using the docker/build-push-action action, so your workflow must be as follows:
- uses: docker/build-push-action@v3
  with:
    context: .
    # ... all other config inputs
    secret-files: |
      NPM_CREDENTIALS=./.npmrc
And then, of course, bind the .npmrc file from your Dockerfile using the ID you specified. In my case I'm using a Debian-based image (UIDs start at 1000). Anyway:
RUN --mount=type=secret,id=NPM_CREDENTIALS,target=<container-workdir>/.npmrc,uid=1000 \
npm install --only=production
