Docker compose can't find package.json in path - node.js

I am trying to dockerize my Node app, but I can't understand why it can't find the package.json file, since package.json is in the same directory as the docker-compose file and the Dockerfile.
Below is the Dockerfile:
Dockerfile
FROM node:12
# RUN mkdir -p ./usr/src/app/
COPY . /usr/src/app/
WORKDIR /usr/src/app/
# ADD package.json /usr/src/app/package.json
COPY package*.json ./
RUN yarn
RUN yarn global add pm2
COPY . .
EXPOSE 3005
EXPOSE 9200
CMD yarn start:dev
And the docker-compose file:
docker-compose.yml
version: "3.6"
services:
api:
image: node:12.13.0-alpine
container_name: cont
build:
context: .
dockerfile: Dockerfile
ports:
- 3005:3005
environment:
- JWT_TOKEN_SECRET=ajsonwebtokensecret
- MONGO_URI=mongodb://localhost:27017/em
- PORT=3005
- MAIL_USER=******
- MAIL_PASS=******
- FORGOT_PASS_EMAIL=noreply#em.com
- CLIENT_SIDE_URL=http://localhost:3001
- ELASTICSEARCH_URI=http://localhost:9200
volumes:
- .:/usr/src/app/
command: yarn start:dev
links:
- elasticsearch
depends_on:
- elasticsearch
networks:
- esnet
elasticsearch:
container_name: em-elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
volumes:
- esdata:/usr/share/elasticsearch/data
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.type=single-node
logging:
driver: none
ports:
- 9300:9300
- 9200:9200
networks:
- esnet
volumes:
esdata:
networks:
esnet:
When I run docker-compose up I get this error:
cont | yarn run v1.19.1
cont | error Couldn't find a package.json file in "/"
cont | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
cont exited with code 1
I have looked at existing answers, but none of the solutions I have tried work. Does anyone have a solution to this?

Relative pathing. Note that the COPY command that does work is COPY . . (the . there means 'the current directory'). So what you need is:
COPY ./package*.json ./
You would simplify your debugging by running a plain docker build -t api-test . (any throwaway tag will do; scratch itself is a reserved image name) to take docker-compose out of the equation. Always drive the problem down to an absolutely minimal reproducer to isolate the exact issue.
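For example (the api-test tag is arbitrary, and the path assumes the Dockerfile above):
docker build -t api-test .
docker run --rm api-test ls -la /usr/src/app
If package.json isn't listed, the problem is in the Dockerfile or the build context, not in docker-compose.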

Related

NestJS Docker container shows errors

I have dockerized a NestJS application, but running it shows
Error: Error loading shared library /usr/src/app/node_modules/argon2/lib/binding/napi-v3/argon2.node: Exec format error
and sometimes it shows
Cannot find module 'webpack'
Strangely, it works fine on Windows, but the errors come up on Mac and Amazon Linux.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:16-alpine AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
###################
# BUILD FOR PRODUCTION
###################
FROM node:16-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
COPY --from=development /usr/src/app/node_modules ./node_modules
COPY . .
RUN npm run build
ENV NODE_ENV production
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:16-alpine AS production
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.9'
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      # Only will build development stage from our dockerfile
      target: development
    env_file:
      - .env
    volumes:
      - api-data:/usr/src/app
    # Run in dev Mode: npm run start:dev
    command: npm run start:dev
    ports:
      - 3000:3000
    depends_on:
      - postgres
    restart: 'always'
    networks:
      - prism-network
  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_DB: 'prism'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'mysecretpassword'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}',
        ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - prism-network
networks:
  prism-network:
volumes:
  api-data:
  postgres-data:
I am stumped as to why it isn't working.
Change (or check) two things in your setup:
In your docker-compose.yml file, delete the volumes: block that overwrites your application's /usr/src/app directory
services:
  api:
    build: { ... }
    # volumes:                    <-- delete
    #   - api-data:/usr/src/app   <-- delete
volumes:
  # api-data:                     <-- delete
  postgres-data:                  # <-- keep
Create a .dockerignore file next to the Dockerfile, if you don't already have one, and make sure it includes the single line
node_modules
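A slightly fuller .dockerignore is common for Node projects; everything beyond node_modules here is an assumption about your layout, not a requirement:
node_modules
npm-debug.log
dist
.git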
What's going on here? If you don't have the .dockerignore line, then the Dockerfile COPY . . line overwrites the node_modules tree from the RUN npm ci line with your host's copy of it; but if you have a different OS or architecture (for example, a Linux container on a Windows host) it can fail with the sort of error you show.
The volumes: block is a little more subtle. This causes Docker to create a named volume, and the contents of the volume replace the entire /usr/src/app tree in the image – in other words, you're running the contents of the volume and not the contents of the image. But the first time (and only the first time) you run the container, Docker copies the contents of the image into the volume. So it looks like you're running the image, and you have the same files, but they're actually coming out of the volume. If you change the image the volume does not get updated, so you're still running the old code.
Without the volumes: block, you're running the code out of the image, which is a standard Docker setup. You shouldn't need volumes: unless your application needs to store persistent data (as your database container does), or for a couple of other specialized needs like injecting configuration files or reading out logs.
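If you already ran the old setup once, the stale api-data volume will keep serving the old files; a sketch to clear it out (note that -v also removes postgres-data, so the database is wiped):
docker-compose down -v
docker-compose up --build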
Try this Dockerfile
FROM node:16-alpine
WORKDIR /usr/src/app
COPY yarn.lock ./
COPY package.json ./
RUN yarn install
COPY . .
RUN yarn build
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: "3.7"
services:
service_name:
container_name: orders_service
image: service_name:latest
build: .
env_file:
- .env
ports:
- "3001:3001"
volumes:
- .:/data
- /data/node_modules
I wouldn't delete the volumes, because of the hot reloading. Try this in the volumes section instead; the anonymous node_modules volume masks that one directory of the bind mount, so the modules installed during the image build stay visible inside the container:
volumes:
  - /usr/src/app/node_modules
  - .:/usr/src/app

Why doesn't my docker-compose volume detect changes

I'm learning Docker and docker-compose. I have a Vue.js app and a NestJS API that I'm trying to dockerize.
What I want to do is set up volumes so that when I change something in a file, I don't have to rebuild. This is what I did:
Docker-compose.yml (in a different directory)
services:
  front:
    build: ../ecommerce-front # I tried to add :/app but my container doesn't find app/package.json
    ports:
      - "8080:8080"
    volumes:
      - ../ecommerce-front/src
  back:
    build: ../ecommerce-back
    ports:
      - "3000:3000"
    depends_on:
      - mysql
    volumes:
      - ../ecommerce-back/src
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: tools
      MYSQL_PASSWORD: root
backend Dockerfile:
FROM node:14 AS builder
WORKDIR /app
COPY ./package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:10-alpine
WORKDIR /app
COPY --from=builder /app ./
CMD ["npm", "run", "start:dev"]
I have no errors and my services run well, but when I make changes to my back end, nothing changes in the container.
What did I do wrong?
Thanks in advance.
Thanks to richardsefton, it seems I needed to add node_modules as a volume to both of my services, like this:
volumes:
  - ../ecommerce-front:/app
  - '/app/node_modules'

volumes:
  - ../ecommerce-back:/app
  - '/app/node_modules'

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers: a MySQL db, a serverless app, and a test container (to test the serverless app with Mocha). When executing docker-compose up --build, the order of execution seems to be messed up and not in the correct sequence. Both the MySQL db and the serverless app need to be in a working state for test to run correctly (depends_on barely works).
Hence I tried using the dockerize tool to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: together with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are relics of first-generation Docker networking and aren't necessary; and overwriting the image's code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node directly on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
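If you want to see the cache reuse for yourself, compare image IDs after a build; the names below assume Compose's default <project>_<service> naming, where the project defaults to the directory name:
docker-compose build
docker images
# myproject_serverless-app and myproject_test should report the same IMAGE ID,
# confirming the second build came entirely from the cache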

NestJS Docker: Cannot find module 'dist/main'

I am building a NestJS app using docker-compose.
The problem is that when I run docker-compose up prod, it shows Error: Cannot find module '/usr/src/app/dist/main'.
So I explored the files in the prod image, and I could find the dist folder. I also ran dist/main directly and it works. However, docker-compose up prod still shows the above error.
Moreover, docker-compose up dev works perfectly, creating a dist folder on the host machine. The main difference between dev and prod is the command: dev uses npm run start:dev, while prod uses npm run start:prod.
This is my Dockerfile
FROM node:12.19.0-alpine3.9 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 AS production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
This is my docker-compose.yaml
services:
  proxy:
    image: nginx:latest # use the latest Nginx
    container_name: proxy
    ports:
      - '80:80' # map host port 80 to container port 80
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file
    restart: 'unless-stopped' # restart if the container dies from an internal error
    depends_on:
      - prod
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: npm run start:dev # node dist/src/main
    ports:
      - 3001:3000
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    # ports:
    #   - 3000:3000
    #   - 9229:9229
    expose:
      - '3000' # expose port 3000 to the other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
networks:
  nestjs-network:
OK... I found the solution.
In docker-compose.yaml, .:/usr/src/app should be removed from the volumes of the prod service. Since the dist folder does not exist on the local machine, mounting the current local directory over /usr/src/app hides the dist folder that was built into the image, which produces the "not found" error. I guess I should study volumes much more deeply.
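For reference, a prod service with the offending mount removed might look like this (a sketch of the file above; once nothing is mounted over /usr/src/app, the anonymous node_modules volume isn't needed either):
prod:
  container_name: nestjs_api_prod
  image: nestjs-api-prod:1.0.0
  build:
    context: .
    target: production
    dockerfile: ./Dockerfile
  command: npm run start:prod
  expose:
    - '3000'
  networks:
    - nestjs-network
  restart: unless-stopped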

Install all needed node modules inside Docker container, not just copying them

The problem is the following. We have a project with several services running with docker compose.
When one of us adds a new module with npm install <module name> --save, the package*.json files get updated and the module is installed in the node_modules folder.
For that person, running docker-compose up --build works fine.
Nevertheless, when someone else pulls the updated versions of the package*.json files and tries to run docker-compose up --build, Docker reports that the module cannot be found.
It seems like the local node_modules folder is copied directly into the Docker container.
The question is: how can we make sure that all modules listed in the package*.json files are installed inside the container, not just copied from the host?
Here is one Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run devStart
as well as the docker-compose.yaml file:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
We do have already a .dockerignore file:
node_modules
npm-debug.log
Update
@DazWilkin recognized that we mount our build context into the container. This overrides the image's content with local content. We do this because we use nodemon; that way we can see changes to the code on the fly. Is there a way to exclude the node_modules directory from this?
I think that your issue is that Docker is caching layers and not detecting the changed package*.json.
What happens if everyone runs docker-compose build --no-cache and then docker-compose up?
NOTE: because you're running npm install during the container image build, you don't need (and probably don't want) to run npm install on the host beforehand.
NOTE: since you're using containers, you may want to use a container registry too. That way, once the first updated images have been built, everyone else can pull them from the registry rather than rebuilding them locally.
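A sketch of that flow, assuming each service gets an image: key pointing at your registry (registry.example.com is a placeholder):
# in docker-compose.yml:  image: registry.example.com/list-service:1.0.0
docker-compose build # build all service images locally
docker-compose push # push every service that has an image: key
docker-compose pull # teammates fetch prebuilt images instead of rebuilding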
Update
I just noticed that you're mounting your build context into the container; you don't want to do that. It will override the work you did when building the image:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app <<< ---- DELETE
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
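If you do want to keep the bind mounts so nodemon sees code changes (per the update in the question), the commented-out lines in the question's compose file are the usual compromise: an anonymous volume at the node_modules path masks that one directory of the bind mount, so the container keeps the modules installed during the image build. A sketch for one service:
list-service:
  build:
    context: ./list-service
    dockerfile: Dockerfile
  ports:
    - "4041"
  volumes:
    - ./list_service:/usr/src/app # live code for nodemon
    - /usr/src/app/node_modules # masks the host's node_modules (or its absence)
Note that everyone still needs to rebuild with docker-compose up --build after package*.json changes, since the anonymous volume is seeded from the image when the container is recreated.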
