Why doesn't my docker-compose volume detect changes? - Node.js

I'm learning Docker and docker-compose. I have a Vue.js app and a NestJS API that I'm trying to dockerize.
What I want to do is set up volumes so that when I change something in a file I don't have to rebuild. This is what I did:
docker-compose.yml (in a different directory):
services:
  front:
    build: ../ecommerce-front # I tried to add :/app but my container doesn't find app/package.json
    ports:
      - "8080:8080"
    volumes:
      - ../ecommerce-front/src
  back:
    build: ../ecommerce-back
    ports:
      - "3000:3000"
    depends_on:
      - mysql
    volumes:
      - ../ecommerce-back/src
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: tools
      MYSQL_PASSWORD: root
Backend Dockerfile:
FROM node:14 AS builder
WORKDIR /app
COPY ./package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:10-alpine
WORKDIR /app
COPY --from=builder /app ./
CMD ["npm", "run", "start:dev"]
I get no errors and my services run fine, but when I make changes to my back end, nothing changes in the container.
What did I do wrong?
Thanks in advance.

Thanks to richardsefton: it seems I needed to bind-mount the project directories and add node_modules as a volume in both of my services, like this:
volumes:
  - ../ecommerce-front:/app
  - '/app/node_modules'

volumes:
  - ../ecommerce-back:/app
  - '/app/node_modules'
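The reason the original file didn't work: entries like ../ecommerce-front/src name only one path, with no colon-separated container-side target, so nothing from the host ends up mounted where the app runs. A minimal sketch of the back service with the fix applied (assuming the app lives in /app, as in the Dockerfile):

```yaml
services:
  back:
    build: ../ecommerce-back
    ports:
      - "3000:3000"
    volumes:
      # host path : container path - edits on the host appear under /app
      - ../ecommerce-back:/app
      # anonymous volume shadowing /app/node_modules, so the modules
      # installed at image-build time are not hidden by the bind mount
      - /app/node_modules
```

Note that for changes to take effect without a rebuild, the container still needs a file-watching start command (such as start:dev), since the bind mount only makes the files visible.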

Related

Docker run exits with code 0 - Node.js & Postgres

I've built a small Node.js app with a Postgres database. I wanted to Dockerize it, and everything worked fine when I was using docker-compose. Now I'm trying to put it on AWS EC2 using docker run, but it doesn't work.
Docker Run command:
docker run -d -p 80:4000 --name appname hubname/appname
The container starts and stops immediately.
I'm fairly new to Docker, so I guess I'm missing something.
Please find my Dockerfile:
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
RUN npm start
My docker-compose:
version: '3.7'
services:
  appname-db:
    image: postgres
    container_name: appname-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - '5432:5432'
    volumes:
      - appname-db:/var/lib/postgresql/data
  ts-node-docker:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    volumes:
      - ./src:/home/node/app/src
      - ./nodemon.json:/home/node/app/nodemon.json
    container_name: ts-node-docker
    environment:
      DB_HOST: ${DB_HOST}
      DB_SCHEMA: ${DB_SCHEMA}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      appname-db:
        condition: service_healthy
    expose:
      - '4000'
    ports:
      - '80:4000'
    stdin_open: true
    command: npm run start
volumes:
  appname-db:
When I do docker ps -a, I see that the container started and exited right afterwards.
I can't find any logs whatsoever; it's very frustrating.
Any help/pointers would be really appreciated.
Edit 1 - Some additional information:
When I'm using docker-compose we can see 2 containers: [screenshot omitted]
However, this is what I have with docker run: [screenshot omitted]
You should use CMD and not RUN when you execute the npm run start command. CMD runs every time a container is started, while RUN is just executed once, at build time.
This means you start the container, but nothing else is started inside of it. So it exits because there's nothing more to execute. RUN was already executed last time you built the container.
Use RUN for npm install, otherwise it will try to reinstall the node_modules you already have every time. But for the start command, do this instead: CMD ["npm", "run", "start"]
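Applied to the Dockerfile in the question, only the last line changes (a sketch):

```dockerfile
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
# RUN executes once, at image build time - right for installing deps.
RUN npm install
COPY . .

FROM base as production
ENV NODE_PATH=./build
# CMD executes each time a container starts - right for the app itself.
# With RUN here, the build would start (and finish) the server once,
# leaving the resulting image with no startup command at all.
CMD ["npm", "run", "start"]
```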
In case it could help you, here are my working docker-compose.yml and Dockerfile for an Express API with a Postgres DB.
This is tested and works both on an Ubuntu server and on a dev machine (with docker-compose up in both environments). Note that there are two Dockerfiles (dev/prod).
You don't need to add password/user values in your docker-compose; you can just include your env_file and Docker/docker-compose sets everything up correctly for you.
version: "3.8"
services:
  # D A T A B A S E #
  db:
    env_file:
      - ./.env
    image: postgres
    container_name: "db"
    restart: always
    ports:
      - "${POSTGRES_PORT}:${POSTGRES_PORT}"
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  # A P I #
  api:
    container_name: "api"
    env_file:
      - ./.env
    build:
      context: .
      dockerfile: "DOCKERFILE.${NODE_ENV}"
    ports:
      - ${API_PORT}:${API_PORT}
    depends_on:
      - db
    restart: always
    volumes:
      - /app/node_modules
      - .:/usr/src/service
    working_dir: /usr/src/service
  # A D M I N E R #
  adminer:
    env_file:
      - ./.env
    container_name: "adminer"
    image: adminer:latest
    restart: always
    ports:
      - ${ADMINER_PORT}:${ADMINER_PORT}
    depends_on:
      - db
Dockerfile.dev:
FROM node:14.17.3
ENV NODE_ENV=dev
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]
Dockerfile.prod is pretty much the same, just change the environment to prod.
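Given that description, Dockerfile.prod would presumably differ from Dockerfile.dev only in the environment value (a sketch based on the Dockerfile.dev shown above):

```dockerfile
FROM node:14.17.3
# The only stated difference from Dockerfile.dev: the environment.
ENV NODE_ENV=prod
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]
```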
One more thing I see is that you include the full path from your home directory when copying; try without that, and just copy from the location relative to your Dockerfile.

Nest.js Docker: Cannot find module dist/main

I am building a Nest.js app using docker-compose.
The problem is that when I run docker-compose up prod, it shows: Error: Cannot find module '/usr/src/app/dist/main'.
I explored the files in the prod image and I could find the dist folder there; I can also run dist/main directly and it works. Nevertheless, docker-compose up prod shows the above error.
Moreover, docker-compose up dev works perfectly, producing a dist folder on the host machine. The main difference between dev and prod is the command: dev uses npm run start:dev, while prod uses npm run start:prod.
This is my Dockerfile:
FROM node:12.19.0-alpine3.9 as development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
This is my docker-compose.yaml
services:
  proxy:
    image: nginx:latest # use the latest version of Nginx
    container_name: proxy # the container name is "proxy"
    ports:
      - '80:80' # map port 80 between host and container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # volume-map the nginx config file
    restart: 'unless-stopped' # restart if the container dies due to an internal error
    depends_on:
      - prod
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: npm run start:dev # node dist/src/main
    ports:
      - 3001:3000
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    # ports:
    #   - 3000:3000
    #   - 9229:9229
    expose:
      - '3000' # open port 3000 to other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
networks:
  nestjs-network:
OK... I found the solution.
In docker-compose.yaml, .:/usr/src/app should be removed from the volumes of the prod service. Since the dist folder does not exist on the local machine, mounting the current local directory over /usr/src/app hides the dist built into the image, which is why it shows the Not Found error. I guess I should study volumes much more deeply.
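So the fix is to drop the bind mount from the prod service only; a sketch of the corrected service (dev keeps its mounts for live reloading):

```yaml
  prod:
    container_name: nestjs_api_prod
    build:
      context: .
      target: production
    command: npm run start:prod
    expose:
      - '3000'
    volumes:
      # No ".:/usr/src/app" bind mount here: the dist folder exists only
      # inside the image, and mounting the host directory over
      # /usr/src/app would hide it.
      - /usr/src/app/node_modules
```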

Docker nginx image issue

I'm a newbie at using Docker, but I want to dockerize my Node.js + nginx + React.js app via docker-compose, and I get this error when it tries to create the nginx image (logging in via Docker Hub doesn't help):
ERROR [nodereactcrm_client] FROM docker.io/library/build:latest
failed to solve: rpc error: code = Unknown desc = failed to load cache key: pull access denied, repository
does not exist or may require authorization: server message: insufficient_scope: authorization failed
My react Dockerfile:
FROM node:alpine as builder
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
FROM nginx
COPY --from=build /home/node/dist /usr/share/nginx/html
COPY ./default.conf /etc/nginx/conf.d
My docker-compose file:
version: '3'
services:
  db:
    container_name: db
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
  api:
    build:
      dockerfile: Dockerfile
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    links:
      - db
    ports:
      - '5000:5000'
    depends_on:
      - db
  client:
    build:
      dockerfile: Dockerfile
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    links:
      - api
    ports:
      - '80:80'
As @anemyte answered, there was a typo: I just needed to change COPY --from=build to COPY --from=builder.
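The name after --from= must match a stage name declared with as in an earlier FROM line; a sketch of the corrected Dockerfile:

```dockerfile
# The first stage is named "builder"...
FROM node:alpine as builder
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./

FROM nginx
# ...so the copy must reference "builder", not "build". With "build",
# Docker treats it as an image name and tries to pull
# docker.io/library/build:latest, producing the error above.
COPY --from=builder /home/node/dist /usr/share/nginx/html
COPY ./default.conf /etc/nginx/conf.d
```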

Install all needed node modules inside Docker container, not just copying them

The problem is the following. We have a project with several services running with docker-compose.
When one of us adds a new module with npm install <module name> --save, the package*.json files are updated and the module is installed in the node_modules folder.
At that point, running docker-compose up --build works fine.
Nevertheless, when someone else pulls the updated package*.json files and tries to run docker-compose up --build, Docker outputs an error saying the module is not found.
It seems like the local node_modules folder is copied directly into the Docker container.
The question is: how can we make sure that all modules listed in the package*.json files are installed inside the container rather than just copied?
Here is one Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run devStart
as well as the docker-compose.yaml file:
version: "3"
services:
  list-service:
    build:
      context: ./list-service
      dockerfile: Dockerfile
    ports:
      - "4041"
    volumes:
      - ./list_service:/usr/src/app
      #- /usr/src/app/node_modules
  user-service:
    build:
      context: ./user_service
      dockerfile: Dockerfile
    ports:
      - "4040"
    volumes:
      - ./user_service:/usr/src/app
      # - /usr/src/app/node_modules
  api-gateway:
    build:
      context: ./api_gateway
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./api_gateway:/usr/src/app
      # - /usr/src/app/node_modules
  date_list-service:
    build:
      context: ./date_list_service
      dockerfile: Dockerfile
    ports:
      - "4042"
    volumes:
      - ./date_list_service:/usr/src/app
      # - /usr/src/app/node_modules
  mongo:
    container_name: mongo
    image: mongo
    command: mongod --port 27018
    volumes:
      - data:/data/db
volumes:
  data:
We already have a .dockerignore file:
node_modules
npm-debug.log
Update
@DazWilkin recognized that we mount our context into the container, which overrides the image's content with local content. We do this because we use nodemon, so we can see code changes on the fly. Is there a way to exclude the node_modules directory from this?
I think that your issue is that Docker is caching layers and not detecting the changed package*.json.
What happens if everyone runs docker-compose build --no-cache followed by docker-compose up?
NOTE: because you run npm install during the container image build, you don't need (and probably don't want) to npm install on the host beforehand.
NOTE: since you're using containers, you may want to use a container registry too. That way, once the first updated Docker images have been built, everyone else can pull them from the registry rather than rebuilding them themselves.
Update
I just noticed that you're mounting your context into the container; you don't want to do that, because it will override the work you've done building the container:
version: "3"
services:
  list-service:
    build:
      context: ./list-service
      dockerfile: Dockerfile
    ports:
      - "4041"
    volumes:
      - ./list_service:/usr/src/app   # <<< ---- DELETE
      #- /usr/src/app/node_modules
  user-service:
    build:
      context: ./user_service
      dockerfile: Dockerfile
    ports:
      - "4040"
    volumes:
      - ./user_service:/usr/src/app   # <<< ---- DELETE
      # - /usr/src/app/node_modules
  api-gateway:
    build:
      context: ./api_gateway
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./api_gateway:/usr/src/app   # <<< ---- DELETE
      # - /usr/src/app/node_modules
  date_list-service:
    build:
      context: ./date_list_service
      dockerfile: Dockerfile
    ports:
      - "4042"
    volumes:
      - ./date_list_service:/usr/src/app   # <<< ---- DELETE
      # - /usr/src/app/node_modules
  mongo:
    container_name: mongo
    image: mongo
    command: mongod --port 27018
    volumes:
      - data:/data/db
volumes:
  data:
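Alternatively, to keep nodemon live-reloading while excluding node_modules (the follow-up question in the update above), the commented-out anonymous volumes can be re-enabled instead of deleting the bind mounts; a sketch for one service:

```yaml
  list-service:
    build:
      context: ./list-service
      dockerfile: Dockerfile
    ports:
      - "4041"
    volumes:
      # The bind mount keeps source edits visible to nodemon...
      - ./list_service:/usr/src/app
      # ...while the anonymous volume shadows node_modules, so the
      # modules installed during the image build are used instead of
      # whatever is (or isn't) in the host's node_modules folder.
      - /usr/src/app/node_modules
```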

Environment variables not loading inside docker container

I have two services which are dockerized. The first is the api service, which depends on the mongo service. However, for some reason the api service can't connect to the mongo service, because the environment variables are not being passed into the container correctly. I have two variables in my .env, namely:
JWT_SECRET=somesecret
DB_URL_LOCAL=mongodb://mongo:27017/apii
I tried setting the environment variables in the docker-compose file. When I run docker-compose config I see the variables and their values displayed, but my application can't detect them; MongoDB keeps throwing UnhandledPromiseRejectionWarning: MongoParseError: URI malformed, cannot be parsed.
I also tried setting the environment variables in the main Dockerfile, but it still didn't work. I need some assistance. Here is a copy of my Dockerfile and docker-compose.yml file.
This is the Dockerfile
FROM node:12-alpine AS base
WORKDIR /usr/src/app
FROM base AS build
COPY package*.json .babelrc ./
RUN npm install
COPY ./src ./src
RUN npm run build
RUN npm prune --production
FROM base AS release
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
USER node
EXPOSE 4000
CMD [ "node", "./dist/server/server.js" ]
This is the docker-compose.yml file
version: '3.8'
services:
  api:
    build: .
    image: api
    working_dir: /usr/src/app
    expose:
      - 4000
    ports:
      - '4000:4000'
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    env_file:
      - .env
    environment:
      - DB_URL_LOCAL=$DB_URL_LOCAL
      - JWT_SECRET=$JWT_SECRET
    image: mongo
    volumes:
      - './data:/data/db'
    ports:
      - 27017:27017
    restart: always
Thank you.
In docker-compose.yml, your app's environment variables should be declared on the api service, not the mongo service; the mongo image consumes its own variables, with names like MONGO_INITDB_DATABASE=${DATABASE}.
Use this; env_file is an alternative, read more in the Compose documentation:
version: '3.8'
services:
  api:
    build: .
    image: api
    working_dir: /usr/src/app
    expose:
      - 4000
    ports:
      - '4000:4000'
    env_file:
      - ./secret.env
    environment:
      - DB_URL_LOCAL=${DB_URL_LOCAL}
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - './data:/data/db'
    ports:
      - 27017:27017
    restart: always
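For the ${...} substitution in the answer above to work, it helps to keep the two roles of env files apart: the .env file next to docker-compose.yml feeds ${VAR} substitution when Compose parses the file, while env_file injects variables into the running container. A sketch using the two variables from the question:

```yaml
# .env (next to docker-compose.yml), read by Compose for ${VAR} substitution:
#   JWT_SECRET=somesecret
#   DB_URL_LOCAL=mongodb://mongo:27017/apii
services:
  api:
    build: .
    environment:
      # Substituted by Compose from .env at parse time, then visible to
      # the Node process as process.env.DB_URL_LOCAL inside the api
      # container - which is where the app actually reads it.
      - DB_URL_LOCAL=${DB_URL_LOCAL}
      - JWT_SECRET=${JWT_SECRET}
```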
