Start React server container without Nginx [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 months ago.
I need advice from someone who really understands React app dockerization.
I will be as brief as I can.
We have 3 containers now -- DB, backend, frontend.
Our Frontend Dockerfile is below:
FROM node:16-buster-slim as builder
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# add app
COPY . ./
RUN yarn build
FROM nginx:1.21.1-alpine
RUN rm -rf /etc/nginx/conf.d
COPY localconf /etc/nginx
COPY --from=builder /app/build/ /usr/share/nginx/html/
WORKDIR /usr/share/nginx/html
COPY ./env-config.* ./
COPY ./env.sh .
RUN chmod +x env.sh
RUN apk add --no-cache bash openssl
CMD ["/bin/bash", "-c", "env && /usr/share/nginx/html/env.sh && nginx -g \"daemon off;\""]
The whole problem is that developers can't track changes in real time with this container. They run the frontend on the host machine with yarn build && yarn start and, when the changes are stable and ready, they build the container.
Now I need help investigating why the new container is not working.
I have reduced the Dockerfile to the following:
FROM node:16-buster-slim
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# add app
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
And to watch changes in real time I added a volume in the docker-compose.yaml file:
frontend:
  container_name: frontend
  build:
    context: ../../client/project
    dockerfile: local.dockerfile
  ports:
    - "80:80"
    - "443:443"
  command: yarn start
  env_file:
    - ../../client/project/.env
  volumes:
    - ../../client/project:/app/
    - /app/node_modules
  restart: on-failure
  depends_on:
    - backend
I don't know why, but the application doesn't respond to any request and refuses every connection to localhost:443, whereas it works fine with Nginx.
So, please, could you suggest some best practices to Dockerize a React app without Nginx so that real-time changes can be checked?
I know for sure it is not an unusual task, but I didn't find anything in the docs or on Google.

When the "yarn start" command is launched, it refers to your "package.json" -> "script" -> "start"
I suggest you install the "nodemon" package like this:
yarn add nodemon
nodemon restarts the server each time the files are updated.
Add the following script to your package.json:
{
  "scripts": {
    "dev": "nodemon ./bin/www.js",
    ...
  }
  ...
}
Replace "./bin/www.js" with the entry point of your application.
In your Dockerfile, change
CMD ["yarn", "start"]
to
CMD ["yarn", "dev"]
Which would give:
FROM node:16-buster-slim
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# add app
COPY . ./
RUN yarn build
CMD ["yarn", "dev"]

Related

After adding volumes to docker-compose, changes are not being picked up for frontend files

So I have this working as expected with Flask, where I used:
volumes:
  - ./api:/app
Any files that I change in the api are picked up by the running session. I'd like to do the same for the frontend code.
For node/nginx, I used the configuration below. The only way for file changes to be picked up is to rebuild. I'd like file changes to be picked up as they are for Python, but I'm a bit stuck on why a similar setup is not working for the src files. Does anyone know why this might be happening?
Local path structure:
public\
src\
Dockerfile.client
docker-compose.yml
The Dockerfile:
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build
FROM nginx:alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/nginx.conf
The docker-compose:
client:
  build:
    context: .
    dockerfile: Dockerfile.client
  volumes:
    - ./src:/src
  restart: always
  ports:
    - "80:80"
  depends_on:
    - api
This is happening because you are building the application.
...
RUN yarn build
...
and then serving your build folder:
FROM nginx:alpine
COPY --from=build-step /app/build /usr/share/nginx/html
I believe that what you are looking for is a live reload. You can find a good example here.
But basically what you need is a Dockerfile like this:
# Dockerfile
# Pull official Node.js image from Docker Hub
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Expose container port 3000
EXPOSE 3000
# Run "start" script in package.json
CMD ["npm", "start"]
your npm start script:
"start": "nodemon -L server/index.js"
and your volume:
volumes:
  - ./api:/usr/src/app/serve
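To tie that together, a compose service matching the Dockerfile above would mount the source over the same WORKDIR; a minimal sketch under the assumption that the app source lives in ./ (service name and published port are illustrative):
api:
  build: .
  ports:
    - "3000:3000"
  volumes:
    - ./:/usr/src/app              # source mounted over WORKDIR
    - /usr/src/app/node_modules    # preserve deps installed in the image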

Install Node dependencies in Debug Container

I am currently setting up a Docker container that will be used to debug a Node.js application. This container needs to support live-reloading (using nodemon) and needs to be a Linux container (my workstation is a Windows machine).
My current setup is the following:
Dockerfile.debug
FROM node:current-alpine
VOLUME /app
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production --registry=http://172.16.102.123:8182/repository/npm/
RUN npm install -g nodemon
ENV NODE_ENV=test
EXPOSE 8000
EXPOSE 9229
CMD [ "nodemon", "--inspect=0.0.0.0:9229", "--ignore", "dist/test/**/*.js", "dist/index.js" ]
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.debug
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 8000:8000
Everything works fine except the dependencies, because some of them are platform-specific. That means it is not possible to simply mount the node_modules directory into the container (as I do with the rest of the codebase). I tried setting up my files so that the dependencies are different for each platform, but I either end up with an empty node_modules directory or with the node_modules directory from the host (the current setup gives me an empty directory). Does anybody know how to fix my problem? I have looked at other solutions (like this one) but they did not work.
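One likely culprit, offered as a guess: VOLUME /app is declared before the COPY and RUN steps, and Docker discards build-time changes made to a path after it has been declared a volume, which would leave node_modules empty in the resulting image (and therefore in the anonymous volume seeded from it). A sketch of the Dockerfile with that declaration dropped; the compose file's /app/node_modules entry already creates the anonymous volume at run time:
FROM node:current-alpine
WORKDIR /app
COPY package*.json ./
# Install inside the image so the anonymous node_modules volume is
# seeded with Linux-native binaries rather than the host's.
RUN npm ci --only=production --registry=http://172.16.102.123:8182/repository/npm/
RUN npm install -g nodemon
ENV NODE_ENV=test
EXPOSE 8000
EXPOSE 9229
# Note: no VOLUME /app here; declaring it before the install step
# discards the node_modules created during the build.
CMD [ "nodemon", "--inspect=0.0.0.0:9229", "--ignore", "dist/test/**/*.js", "dist/index.js" ]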

Run 2 different containers from same node project

I have a node project that has a web server and a service at the root.
--myNodeProj
  --app.js      // the web server
  --service.js  // an update service
In my package.json I have the following:
"scripts": {
"start": "node app.js",
"service": "node service.js"
},
For my Dockerfile I have:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
The CMD will run the app.js (webserver). How do I build another container with the service? Do I create another Dockerfile? Would the docker build command look different?
You can override the command -
docker run <image> node service.js
https://docs.docker.com/engine/reference/run/#general-form
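As an illustration (the image name is hypothetical), both containers can come from the same image, differing only in the command:
docker build -t mynodeproj .
docker run -d --name web -p 8080:8080 mynodeproj         # runs the CMD: npm start
docker run -d --name service mynodeproj node service.js  # overrides the CMD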
I ended up using docker-compose.
You need to create a docker-compose.yml file with the following code:
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - web.env
  service:
    # will build ./docker/service/Dockerfile
    build:
      context: .
      dockerfile: ./docker/service/Dockerfile
    env_file:
      - service.env
This file references two Dockerfiles that build the containers.
For service:
FROM node:8
# Create app directory
WORKDIR /usr/src/service
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
CMD [ "node", "service.js" ]
For web:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
#EXPOSE 8080
CMD [ "npm", "start" ]
Notice that I can only have one npm start; I call the service directly using node.
When I want to build containers, I issue the command:
docker-compose build
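An alternative worth noting, since the two Dockerfiles above differ only in WORKDIR and CMD: build a single image and override the command per service in compose. A sketch reusing the original Dockerfile:
version: '3'
services:
  web:
    build: .
    command: npm start
    ports:
      - "3000:3000"
    env_file:
      - web.env
  service:
    build: .                   # same image as web
    command: node service.js   # only the command differs
    env_file:
      - service.env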

Docker with nodemon does not reload my api when code changes

I've been working with Docker for some weeks. I was able to live with this issue by stopping the containers and starting them over again to see the changes I had made in my code, but now it is really annoying because for every single change I have to kill Docker and then run "docker-compose up".
However, my friend is using the same container on his Apple machine, and when he makes changes to any server-side code he does not have to restart his app.
I can see the changes when I go into the container, but those changes are not reflected live (in the browser).
My Dockerfile
FROM node:8.11.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# Copy application files
COPY tools ./tools/
COPY migrations ./migrations/
COPY seeds ./seeds/
# Attempts to copy "build" folder even if it doesn't exist
COPY .env build* ./build/
RUN npm install -g nodemon
RUN git clone https://github.com/vishnubob/wait-for-it.git
EXPOSE 8080
CMD ["nodemon", "-L", "server"]
My docker-compose.yml
api:
  build: ./
  hostname: api
  container_name: api
  ports:
    - "${APP_PORT}:3000"
  volumes:
    - ./:/usr/src/app
  env_file:
    - ".env"
  command: node tools/run.js
Any suggestion?
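One observation, offered as a guess rather than a confirmed diagnosis: the compose command: node tools/run.js overrides the Dockerfile's CMD ["nodemon", "-L", "server"], so nodemon never actually runs in this container and nothing watches for changes. Assuming tools/run.js is a one-off setup script rather than the long-running server, a sketch that keeps the reload behavior:
# in docker-compose.yml, instead of command: node tools/run.js
command: sh -c "node tools/run.js && nodemon -L server"
Alternatively, drop the command: line entirely so the Dockerfile's nodemon CMD takes effect.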

Production vs Development Docker setup for Node (Express & Mongo) App

I'm attempting to convert a Node app to using Docker but running into a few issues/questions I'm unable to answer.
For simplicity I've included some very basic example files to keep the question on target. In fact, the example below merely links to a Mongo container but doesn't use it in the code, to keep things even simpler.
Primarily, what Dockerfile and docker-compose.yml setup is required to successfully use Docker on a Node + Express + Mongo app on both local (OS X) development and for Production builds?
Dockerfile
FROM node:6.3.0
# Create new user to avoid using root - is this correct practice?
RUN useradd --user-group --create-home --shell /bin/false app
COPY package.json /home/app/code/
RUN chown -R app:app /home/app/*
USER app
WORKDIR /home/app/code
# Should this even be set here or use docker-compose instead?
# And should there be:
# - docker-compose.yml setting it to production by default
# - docker-compose.dev.yml setting it to production?
# Or reverse it? (docker-compose.prod.yml instead with default being development?)
# Commenting below out or it will always run as production
#ENV NODE_ENV production
RUN npm install
USER root
COPY . /home/app/code
# Running chown to ensure new 'app' user owns files
RUN chown -R app:app /home/app/*
USER app
EXPOSE 3000
# What CMD should be here to ensure development versus production is simple?
# Development - Restart server and refresh browser on file changes
# Production - Ensure uptime.
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build: .
# I would normally use a .env file but for this example will set explicitly
# env_file: .env
environment:
- NODE_ENV=production
volumes:
- ./:/home/app/code
- /home/app/code/node_modules
ports:
- "3000:3000"
links:
- mongo
mongo:
image: mongo
ports:
- "27017:27017"
docker-compose.dev.yml
version: "2"
services:
web:
# I would normally use a .env file but for this example will set explicitly
# env_file: .env
environment:
- NODE_ENV=development
package.json
{
  "name": "docker-node-test",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "nodemon app.js"
  },
  "dependencies": {
    "express": "^4.14.0",
    "mongoose": "^4.6.1",
    "nodemon": "^1.10.2"
  },
  "devDependencies": {
    "mocha": "^3.0.2"
  }
}
1. How to handle the different NODE_ENV (dev, production, staging)?
This is my primary question and conundrum.
In the example, NODE_ENV is set in the Dockerfile to production, and there are two docker-compose files:
docker-compose.yml sets the defaults, including NODE_ENV=production
docker-compose.dev.yml overrides NODE_ENV and sets it to development
1.1. Is it advised to rather switch that order around and have development settings as the default and instead use a docker-compose.prod.yml for overrides?
1.2. How do you handle the node_modules directory?
I'm really not sure how to handle the node_modules directory between local development needs and running in production. (Perhaps I have a fundamental misunderstanding, though?)
Edit:
I added a .dockerignore file and included the node_modules directory as a line. This ensures the node_modules dir is ignored during the copy, etc.
I then edited the docker-compose.yml to include node_modules as a volume:
volumes:
  - ./:/home/app/code
  - /home/app/code/node_modules
I have also put the above change into the full docker-compose.yml at the start of the question for completeness.
Is this even a solution?
Doing the above ensured my local development npm install could include dev-dependencies, and when running docker-compose up the container pulls in the production-only node modules (since the default docker-compose.yml sets NODE_ENV=production).
But it seems the NODE_ENV set inside the two docker-compose files isn't taken into account when running docker-compose -f docker-compose.yml build :/ I expected it to pass NODE_ENV=production, but ALL of the node_modules are re-installed (including the dev-dependencies).
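For what it's worth, the environment: section of a compose file applies at run time only; docker-compose build does not see it, which would explain the behavior above. Passing NODE_ENV at build time needs a build arg; a sketch (paths from the question, the ARG/ENV plumbing is the addition):
# docker-compose.yml
services:
  web:
    build:
      context: .
      args:
        - NODE_ENV=production

# Dockerfile
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
# npm install skips devDependencies when NODE_ENV=production
RUN npm install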
Do we instead use 2 Dockerfiles? (Dockerfile for Prod; Dockerfile.dev for local development)
(I feel like that is a fundamental piece of logic/knowledge I am missing in the setup)
2. Nodemon vs PM2
How would one use nodemon on the local development machine but PM2 on the Production build?
3. Should you create a user inside the docker containers and then set that user to be used in the Dockerfile?
Docker uses the root user by default, but I've not seen many articles talking about creating a dedicated user within the container. Am I correct in what I've done for security? I certainly wouldn't feel comfortable running an app as root on a non-Docker build.
Thank you for reading. Any and all assistance appreciated :)
I can share my experience; I'm not saying it is the best solution.
I have a Dockerfile and a Dockerfile.dev. In Dockerfile.dev I install nodemon and run the app with nodemon; NODE_ENV doesn't seem to have any impact. As for users, you should not use root, for security reasons. My dev version:
FROM node:16.14.0-alpine3.15
ENV NODE_ENV=development
# install missing libs and python3
RUN apk update && apk add -U unzip zip curl && rm -rf /var/cache/apk/* \
    && npm i node-gyp@8.4.1 nodemon@2.0.15 -g
WORKDIR /node
COPY package.json package-lock.json ./
RUN mkdir /app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
# local development
CMD ["nodemon", "server.js" ]
in Production I run the app with node:
FROM node:16.14.0-alpine
ENV NODE_ENV=production
# install missing libs and python3
RUN apk update && apk add -U unzip zip curl && rm -rf /var/cache/apk/* \
    && npm i node-gyp@8.4.1 -g
WORKDIR /node
COPY package.json package-lock.json ./
RUN mkdir /app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
CMD ["node", "server.js" ]
I have two separate versions of docker-compose. In docker-compose.dev.yml I set the dockerfile to Dockerfile.dev:
app:
  depends_on:
    mongodb:
      condition: service_healthy
  build:
    context: .
    dockerfile: Dockerfile.dev
  healthcheck:
    test: [ "CMD", "curl", "-f", "http://localhost:5000" ]
    interval: 180s
    timeout: 10s
    retries: 5
  restart: always
  env_file: ./.env
  ports:
    - "5000:5000"
  environment:
    ...
  volumes:
    - /node/app/node_modules
In the production docker-compose.yml the dockerfile is set to Dockerfile.
Nodemon vs PM2: I used pm2 before dockerizing the app. I cannot see any benefit of having it in Docker; restart: always takes care of restarting on error. You could use restart: unless-stopped instead, but I prefer the always option. Initially I also used nodemon in production so that the app reflected the volume changes, but I dropped it because the restart didn't work well (it kept waiting for code changes...).
Users: you can see it in my example. I took a course on Docker + Node.js where setting a non-root user was recommended, so I do it and I have had no problems.
I hope I explained well enough and it can help you. Good luck.
Either; it doesn't matter too much. I prefer to have development details and then overwrite them with production details.
I don't commit node_modules to my repo; instead I have npm install in my Dockerfile.
You can set rules in the Dockerfile for which one to build based on build settings.
It is typical to build everything as root and run the main program as root. You can set up other users, but for most uses it is not needed, as the idea of Docker containers is to isolate each process in its own container.
