It's kind of weird: I didn't see any errors when running docker-compose build, but when I go into the container, there is no node_modules folder, only the package.json and package-lock.json files.
And the logs when I run docker-compose build look normal. For example:
....
added 552 packages from 678 contributors and audited 4007 packages in 11.2s
found 0 vulnerabilities
....
My Dockerfile:
FROM node:10
ENV APP_PATH /app
WORKDIR $APP_PATH
COPY ./app_front/package*.json ./
RUN npm install -g ts-node typescript
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "run", "start" ]
And my docker-compose file:
version: "3.5"
services:
  mysql:
    image: mysql:5.7
    container_name: racer_mysql
    restart: always
    environment:
      MYSQL_DATABASE: xxxx
      MYSQL_USER: "xxxx"
      MYSQL_PASSWORD: "xxxx"
      MYSQL_ROOT_PASSWORD: xxxx
    ports:
      - "3306:3306"
    expose:
      - "3306"
    volumes:
      - ./docker/database:/var/lib/mysql
      - ./docker/mysql_init:/docker-entrypoint-initdb.d
  app_front:
    container_name: app_front
    build:
      context: .
      dockerfile: ./docker/app_front/Dockerfile
    volumes:
      - ./app_front:/app_front
    ports:
      - "3000:3000"
Note:
When I add the commands below to the docker-compose file, it works. But I want to install the packages in the Dockerfile, not in the docker-compose file.
command:
  - /bin/bash
  - -c
  - |
    npm install
    npm start
My problem is exactly the same as in this issue. I have tried that, but without success.
Any advice is welcome!
What might be happening is COPY . . is overwriting your node_modules directory in the image. Create a .dockerignore file in the same directory as your Dockerfile and ignore node_modules.
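A minimal sketch of such a .dockerignore (the exact entries are assumptions based on the layout above; the point is to keep any host node_modules out of the build context):
node_modules
app_front/node_modules
npm-debug.log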
I think you build your app into the /app_front/ folder during the build process, and then you delete ("overwrite") all the files in that folder by using:
volumes:
  - ./app_front:/app_front
in your compose file. I suggest you remove that volumes section and try again.
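As a sketch, the app_front service with that bind mount dropped would look like this (everything else as in the compose file above):
app_front:
  container_name: app_front
  build:
    context: .
    dockerfile: ./docker/app_front/Dockerfile
  ports:
    - "3000:3000"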
Related
I am setting up Docker for development purposes, with a separate Dockerfile.dev for a NextJS front-end and an ExpressJS-powered back-end. For the NextJS application, the volume mounting is done such that the application runs perfectly fine on the dev server in the container. However, because the node_modules directory on the host is empty, I enter the following command in a new terminal instance:
docker-compose exec frontend npm install
This gives me the modules I need so that I get normal linting while developing.
However, on starting up the Express backend, I have issues with installing mongodb from npm (module not found when running the container). So I resorted to the same strategy and ran docker-compose exec backend npm install, and everything works. However, the node_modules directory on the host is still empty, which is not the same behaviour as with the frontend.
Why is this the case?
#Dockerfile.dev (frontend)
FROM node:19-alpine
WORKDIR "/app"
COPY ./package*.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
#Dockerfile.dev (backend)
FROM node:19-alpine
WORKDIR "/app"
RUN npm install -g nodemon
COPY ./package*.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
docker-compose.yml:
version: '3'
services:
  server:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    ports:
      - "5000:5000"
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - "3000:3000"
  mongodb:
    image: mongo:latest
    volumes:
      - /home/panhaboth/Apps/home-server/data:/data/db
    ports:
      - "27017:27017"
I am building a Nest.js app using docker-compose.
The problem is that when I try "docker-compose up prod", it shows "Error: Cannot find module '/usr/src/app/dist/main'".
So I explored the files in the prod image, and I could find the dist folder there. I also ran dist/main directly and it works. However, when I try docker-compose up prod, it shows the above error.
Moreover, when I try "docker-compose up dev", it works perfectly and creates a dist folder on the host machine. The main difference between dev and prod is the command: dev uses npm run start:dev, while prod uses npm run start:prod.
This is my Dockerfile
FROM node:12.19.0-alpine3.9 as development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
This is my docker-compose.yaml
services:
  proxy:
    image: nginx:latest # use the latest Nginx version
    container_name: proxy # the container name is proxy
    ports:
      - '80:80' # map port 80 between host and container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file as a volume
    restart: 'unless-stopped' # restart if the container dies because of an internal error
    depends_on:
      - prod
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: npm run start:dev #node dist/src/main #n
    ports:
      - 3001:3000
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    # ports:
    #   - 3000:3000
    #   - 9229:9229
    expose:
      - '3000' # open port 3000 to the other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
networks:
  nestjs-network:
OK... I found the solution.
In docker-compose.yaml, .:/usr/src/app should be removed from the volumes of the "prod" service. Since the dist folder does not exist on the local machine, mounting the current local directory over /usr/src/app hides the dist folder that was built into the image, which is why the "Cannot find module" error appears. I guess I should study volumes in much more depth.
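For reference, a sketch of the prod service's volumes after that change (only the bind mount is removed; the rest of the service stays as above):
prod:
  # build, command, expose, networks and restart unchanged from the compose file above
  volumes:
    - /usr/src/app/node_modules   # the .:/usr/src/app line is gone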
I generated a Gatsby website locally, then set up a Docker container to run it. It's a standard setup: copy package.json, install the modules, copy the local files, and start the dev script with a shared volume.
But when it starts, I run into an error:
gatsby_1 | ERROR
gatsby_1 |
gatsby_1 | There was a problem loading the local develop command. Gatsby may not be installed in your site's "node_modules" directory. Perhaps you need to run "npm install"? You might need to delete your "package-lock.json" as well.
Here is the Dockerfile:
FROM node:14.11.0
RUN npm install -g gatsby-cli
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD ["gatsby", "develop", "-H", "0.0.0.0"]
And the docker-compose.yml:
version: "3"
services:
  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db:/var/lib/mysql
  adminer:
    image: adminer:latest
    ports:
      - 8082:8080
  wordpress:
    image: wordpress:5.5.1-php7.4-apache
    ports:
      - 8081:80
    volumes:
      - ./wordpess/plugins:/var/www/html/wp-content/plugins
      - ./wordpess/themes:/var/www/html/wp-content/themes
  gatsby:
    build: ./gatsby
    ports:
      - 8080:8000
    volumes:
      - ./gatsby:/app
But if I remove the volume for the Gatsby container, everything runs well.
So my guess is there's something wrong with permissions somewhere, but I can't figure it out.
Here is a repo with the whole project.
I am using TypeScript here and node:latest in Docker, and I am using docker-compose as well.
It always fails when I run it with docker-compose; when I run docker run (manually) it works well.
Here is my Dockerfile
FROM node:latest
RUN mkdir -p /home/myapp
WORKDIR /home/myapp
RUN npm i -g prisma2
ENV PATH /home/myapp/node_modules/.bin:$PATH
COPY package.json /home/myapp/
RUN npm install
COPY . /home/myapp
RUN prisma2 lift save --name 'init'
RUN prisma2 lift up
EXPOSE 8100
RUN npm run build
RUN pwd
RUN ls
RUN ls dist
CMD node dist/server.js
And my docker-compose.yml:
version: "3"
services:
  app:
    environment:
      DB_URI: postgres://myuser:password@postgres:5555/prod
      NODE_ENV: production
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    volumes:
      - ./home/edupro/:/home/myapp/
      - ./node_modules:/home/myapp/node_modules
    ports:
      - "8100:8100"
  postgres:
    container_name: postgres
    image: postgres:10-alpine
    ports:
      - "5555:5555"
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: password
      POSTGRES_DB: prod
When it reaches CMD node dist/server.js (the dist folder is what I build, since I am using TypeScript),
it gets an error like this:
Cannot find module '/home/edupro/dist/server.js'
I have also tried changing the volumes in docker-compose.yml, like this:
- /home/myapp/node_modules:/home/myapp/node_modules
or
- ./:/home/myapp/node_modules
but the result is still the same. Am I missing something, or did I mount it wrong?
What is the correct way to resolve this?
You need to remove the volumes section from your compose file, since it overwrites all the files you built in your Dockerfile. So delete this:
volumes:
  - ./home/edupro/:/home/myapp/
  - ./node_modules:/home/myapp/node_modules
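A sketch of the resulting app service, assuming nothing else in the compose file changes:
app:
  environment:
    DB_URI: postgres://myuser:password@postgres:5555/prod
    NODE_ENV: production
  build:
    context: .
    dockerfile: Dockerfile
  depends_on:
    - postgres
  ports:
    - "8100:8100"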
Between the following tutorials:
Dockerizing create-react-app
Developing microservices - Node, react & docker
I have been able to convert my Node.js app into dockerized micro-services, which are up and running and connecting to the services. However, my app uses SQLite/Sequelize, and this was working perfectly prior to dockerizing.
With the new setup, I get this error:
/usr/src/app/node_modules/sequelize/lib/dialects/sqlite/connection-manager.js:31
throw new Error('Please install sqlite3 package manually');
Error: Please install sqlite3 package manually at new ConnectionManager
(/usr/src/app/node_modules/sequelize/lib/dialects/sqlite/connection-manager.js:31:15)
My questions are:
Is it possible to use sqlite3 with Docker?
If so, could anyone share a sample docker-compose.yml and Dockerfile combo that works for this?
My docker-compose.yml
version: '3.5'
services:
  user-service:
    container_name: user-service
    build: ./services/user/
    volumes:
      - './services/user:/usr/src/app'
      - './services/user/package.json:/usr/src/package.json'
    ports:
      - '9000:9000' # expose ports - HOST:CONTAINER
  web-service:
    container_name: web-service
    build:
      context: ./services/web
      dockerfile: Dockerfile
    volumes:
      - './services/web:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3000:3000' # expose ports - HOST:CONTAINER
    environment:
      - NODE_ENV=development
    depends_on:
      - user-service
My user/ Dockerfile
FROM node:latest
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
ADD package.json /usr/src/package.json
RUN npm install
# start app
CMD ["npm", "start"]
My web/ Dockerfile
FROM node:latest
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@1.1.4
RUN npm install gulp -g
# start app
CMD ["npm", "start"]
Many thanks.
Got it. The issue was that my local node_modules were being copied into the container. Hence, in the sqlite3 lib/binding directory, node-v57-darwin-x64 was there instead of what is expected, node-v57-linux-x64. Hence the mess.
I updated the Dockerfiles and docker-compose.yml as follows:
My docker-compose.yml
services:
  user-service:
    container_name: user-service
    build:
      context: ./services/user/
      dockerfile: Dockerfile
    volumes:
      - './services/user:/usr/src/app'
      - '/usr/src/node_modules'
    ports:
      - '9000:9000' # expose ports - HOST:CONTAINER
  web-service:
    container_name: web-service
    build:
      context: ./services/web/
      dockerfile: Dockerfile
    volumes:
      - './services/web:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3000:3000' # expose ports - HOST:CONTAINER
    environment:
      - NODE_ENV=development
    depends_on:
      - user-service
My user/ Dockerfile
FROM node:latest
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/node_modules/.bin` to $PATH
ENV PATH /usr/src/node_modules/.bin:$PATH
# install and cache app dependencies
ADD package.json /usr/src/package.json
RUN npm install
# start app
CMD ["npm", "start"]
Helpful posts
Getting npm packages to be installed with docker-compose