Docker doesn't add new packages to node_modules in a NestJS project - node.js

My Dockerfile, docker-compose, and Docker log are below.
My docker-compose:
admin:
  build:
    context: ./src
    dockerfile: Dockerfile
  command: npm run start:dev
  container_name: nestjs
  ports:
    - "8080:3000"
  image: node:16
  volumes:
    - ./src:/usr/src/app/src
    - ./src:/usr/src/app/src/node_modules
  restart: always
  tty: true
  working_dir: /usr/src/app/src
My Dockerfile:
https://ibb.co/wWqDLCn
My Docker log:
https://ibb.co/f21S3dW
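Judging by the answers to the related questions below, the likely culprit is the second volume line: bind-mounting ./src over /usr/src/app/src/node_modules hides whatever npm install put there during the image build. A minimal sketch of the usual fix (assuming the Dockerfile runs npm install in /usr/src/app/src) is to keep the source bind mount but give node_modules an anonymous volume, and to rebuild with docker-compose up --build after adding a package:

  volumes:
    - ./src:/usr/src/app/src
    - /usr/src/app/src/node_modules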

Related

Nest js Docker Cannot find module dist/main

I am building a Nest.js App using Docker-compose.
The problem is that when I run "docker-compose up prod", it shows "Error: Cannot find module '/usr/src/app/dist/main'."
So I explored the files in the prod image, and I could find the dist folder. I also ran node dist/main inside it and it works. However, when I try docker-compose up prod, it shows the above error.
Moreover, "docker-compose up dev" works perfectly and creates a dist folder on the host machine. The main difference between dev and prod is the command: dev uses npm run start:dev, while prod uses npm run start:prod.
This is my Dockerfile:
FROM node:12.19.0-alpine3.9 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
This is my docker-compose.yaml
services:
  proxy:
    image: nginx:latest # use the latest version of Nginx
    container_name: proxy # the container name is proxy
    ports:
      - '80:80' # map port 80 between the host and the container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file as a volume
    restart: 'unless-stopped' # restart the container if it dies because of an internal error
    depends_on:
      - prod
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: npm run start:dev #node dist/src/main #n
    ports:
      - 3001:3000
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    # ports:
    #   - 3000:3000
    #   - 9229:9229
    expose:
      - '3000' # open port 3000 to the other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
networks:
  nestjs-network:
Ok... I found the solution.
In docker-compose.yaml, .:/usr/src/app should be removed from the volumes of the "prod" service. Since the "dist" folder does not exist on the local machine, mounting the current local directory over /usr/src/app hides the dist folder that was built into the image, which produces the "Cannot find module" error. I guess I should study volumes in much more depth.
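For reference, the prod service then ends up roughly like this (the anonymous node_modules volume can go too, since nothing is mounted over the image any more):

  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    expose:
      - '3000'
    networks:
      - nestjs-network
    restart: unless-stopped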

Docker-compose confuses building a frontend with a backend

Using Docker Compose I want to build 3 containers: backend (Node.js), frontend (React.js), and MySQL.
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'
volumes:
  data:
My file structure:
Everything worked fine until I added the new 'frontend' container!
It seems that Docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend dependencies:
Obviously I have two Dockerfiles, one for each service; they are almost identical.
Backend:
Frontend:
Do you have any idea where the problem might be?
RESOLVED! I had to delete all images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
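If wiping every image and volume on the machine feels too heavy-handed, a smaller-scope variant that usually achieves the same thing is to rebuild just this project's images without cache and recreate its containers:

$ docker-compose build --no-cache
$ docker-compose up --force-recreate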

Install all needed node modules inside Docker container, not just copying them

The problem is the following. We have a project with several services running with docker compose.
When one of us adds a new module with npm install <module name> --save, the package*.json files are updated and the module is installed in the node_modules folder.
For that person, running docker-compose up --build works fine.
Nevertheless, when someone else pulls the updated package*.json files and runs docker-compose up --build, Docker reports that the module is not found.
It seems like the local node_modules folder is copied directly into the Docker container.
The question is: how can we make sure that all modules listed in the package*.json files are installed inside the container, rather than just copied from the host?
Here is one Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run devStart
as well as the docker-compose.yaml file:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
We already have a .dockerignore file:
node_modules
npm-debug.log
Update
@DazWilkin recognized that we mount our build context into the container, which overrides the image's content with local content. We do this because we use nodemon, so we can see code changes on the fly. Is there a possibility to exclude the node_modules directory from this?
I think that your issue is that Docker is caching layers and not detecting the changed package*.json.
What happens if everyone runs docker-compose build --no-cache and then docker-compose up?
NOTE: because you're running npm install during the image build, you don't need (and probably don't want) to run npm install on the host beforehand.
NOTE: since you're using containers, you may want to use a container registry too. That way, once the updated images have been built once, everyone else can pull them from the registry rather than rebuilding them themselves.
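For example (the registry and image names here are hypothetical):

$ docker build -t registry.example.com/team/list-service:1.0.1 ./list-service
$ docker push registry.example.com/team/list-service:1.0.1
# teammates pull instead of rebuilding:
$ docker pull registry.example.com/team/list-service:1.0.1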
Update
I just noticed that you're mounting your build context into the container; you don't want to do that. It will override the work you've done building the image:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app <<< ---- DELETE
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
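If the bind mounts have to stay because of nodemon (the question in the update above), a common compromise is to re-enable the commented-out anonymous volumes: the source still comes from the host, but node_modules comes from the volume populated at image build time instead of the (possibly stale or missing) host folder. A sketch for one service:

  list-service:
    build:
      context: ./list-service
      dockerfile: Dockerfile
    ports:
      - "4041"
    volumes:
      - ./list_service:/usr/src/app
      - /usr/src/app/node_modules

After a dependency changes, the image still has to be rebuilt and the anonymous volumes renewed, e.g. docker-compose up --build --renew-anon-volumes.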

“docker-compose up” Error : Cannot start service fetcher: OCI runtime create failed: container_linux.go:344

I'm running Docker on an Ubuntu server. For a while everything was working well; then Docker stopped, and when I tried to bring it up again I got this error:
ERROR: Service 'back' failed to build: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
Here is my docker-compose.yml
version: "2"
services:
back:
build: ./crypto_feed_back
restart: always
command: node server.js
volumes:
- .:/usr/app/
front:
build: ./crypto_feed_front
restart: always
command: npm start
volumes:
- .:/usr/app/
depends_on:
- back
fetcher:
build: ./crypto_feed_fetch
restart: always
command: /usr/sbin/crond -l 2 -f
depends_on:
- postgres
postgres:
image: postgres:12-alpine
restart: always
environment:
POSTGRES_USER: crypto_feed
POSTGRES_DB: crypto_feed
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
caddy:
image: elswork/arm-caddy:1.0.0
restart: always
environment:
ACME_AGREE: "true"
volumes:
- ./caddy/Caddyfile:/etc/Caddyfile
- ./caddy/certs:/root/.caddy
ports:
- 80:80/tcp
- 443:443/tcp
depends_on:
- front
And back's Dockerfile:
FROM node:alpine
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install --production
# If you are building your code for production
# RUN npm ci --only=production
COPY ./api/ ./api/
COPY ./server.js .
EXPOSE 3001
CMD npm start
If someone has an idea of how I can solve this, it would be really nice!

How to compose a docker-compose to run npm natively and mongo in a docker

I am trying to run Node natively but run MongoDB in a container; I just can't figure out the best way to do it.
Should I run a command in the api dir to start npm?
I am trying to be able to deploy the whole application with a single docker-compose up.
Here are my files:
docker-compose.yml
/api
  Dockerfile
  package.json
/db
This is the docker-compose.yml
version: '3'
services:
  api:
    container_name: fl_api
    command: cd api && npm start
    ports:
      - "3002:3002"
    environment:
      DB_PORT: "27017"
      DB_HOST: mongo
      PORT: "3002"
    networks:
      - api_net
  mongo:
    container_name: fl_mongodb
    image: mongo:4.0
    volumes:
      - ./db/mong-vol:/data/db
    networks:
      - api_net
    expose:
      - "27017"
    healthcheck:
      test: echo 'db.stats().ok' | mongo localhost:27017/zenbrain --quiet
      interval: 5s
      timeout: 5s
      retries: 12
networks:
  api_net:
    driver: bridge
This is the api/Dockerfile
FROM node:6
WORKDIR /food-license-backend
COPY package.json /food-license-backend
RUN npm install
COPY . /food-license-backend
EXPOSE 3002
CMD ["npm", "run", "start"]
