I am running a Docker container that mounts my backend code as a volume:
docker-compose.yml
version: '3'
services:
  auth:
    container_name: ${AUTH_CONTAINER}
    build: ./modules/auth
    working_dir: /usr/app
    command: "npm install && npm start"
    volumes:
      - ../backend/modules/auth:/usr/app
    expose:
      - 9229
    ports:
      - 9229:9229
Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY entrypoint.sh .
ENTRYPOINT ["sh", "/entrypoint.sh"]
Entrypoint:
#!/usr/bin/env bash
exec "$@"
When I run docker-compose up, the first step is always to run yarn. I get an error saying that I should not use both npm and yarn, because I am currently using npm for the project. Is there a way to have node:alpine use only npm (I know it's installed) and then call npm install when docker-compose up is run?
EDIT: [screenshot of the console output omitted]
EDIT2: Package.json
{
  "name": "auth",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  }
}
For anyone who gets to this point and is wondering why this is happening: make sure you run docker-compose build FIRST, before running docker-compose up. That is the problem I was running into.
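In other words, with the files below, the sequence that works is simply:
docker-compose build
docker-compose up
or, equivalently, docker-compose up --build in one step.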
Dockerfile
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY entrypoint.sh /
ENTRYPOINT ["sh", "/entrypoint.sh"]
entrypoint:
exec "$@"
docker-compose:
version: '3'
services:
  auth:
    container_name: "Auth"
    build: "./auth"
    working_dir: /usr/app
    command: "npm start"
    environment:
      - NODE_ENV=local
    volumes:
      - ../backend/modules/auth:/usr/app
    expose:
      - 9229
    ports:
      - 9229:9229
Related question:
I'm trying to run the Nodemon package in a Docker network with two services, Node.js and MongoDB.
When I run the app with npm start, nodemon works: I can connect to localhost:3000 and I see changes in real time. But as soon as I run npm run docker (docker-compose up --build), I can connect to localhost:3000 but I'm not able to see real-time changes in my application or in the console.
docker-compose.yml
version: '3.7'
services:
  app:
    container_name: NodeJs
    build: .
    volumes:
      - "./app:/usr/src/app/app"
    ports:
      - 3000:3000
  mongo_db:
    container_name: MongoDB
    image: mongo
    volumes:
      - mongo_db:/data/db
    ports:
      - 27017:27017
volumes:
  mongo_db:
dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . /app
CMD ["npm", "run", "dev"]
package.json
{
  "name": "projectvintedapi",
  "version": "0.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node index.js",
    "dev": "nodemon index.js",
    "docker": "docker-compose up --build"
  },
  "license": "ISC",
  "dependencies": {
    "dotenv": "^16.0.0",
    "express": "^4.17.3",
    "mongodb": "^4.5.0",
    "nodemon": "^2.0.15"
  }
}
docker-compose.yml tells Docker to boot a container for the Node.js application. It also tells Docker to mount a host volume:
volumes:
- "./app:/usr/src/app/app"
As a result, Docker will mount the ./app directory on your laptop, which contains your code, into the container at /usr/src/app/app.
Once you’ve changed your code on your laptop/desktop, nodemon detects that change and restarts the process without rebuilding the container. To make this happen, you need to tell Docker to set the entrypoint to nodemon. Do that in the Dockerfile:
FROM node:alpine
WORKDIR /app
COPY . /app
RUN npm install -g nodemon
RUN npm install
# Give the path of your entry point
ENTRYPOINT ["nodemon", "/usr/src/app/server.js"]
CMD ["npm", "run", "dev"]
With host volumes and nodemon, your code sync is almost instantaneous.
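To verify that the bind mount is actually syncing, a quick sanity check (assuming the service is named app, as in the compose file above):
docker-compose exec app ls /usr/src/app/app
docker-compose logs -f app
Edit a file under ./app on the host; the change should show up inside the container and nodemon should log a restart.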
Fixed it: the issue was with the bind mount.
dockerfile
FROM node:16.13.2
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . ./
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3.7'
services:
  app:
    container_name: NodeJs
    build: .
    command: npm run dev
    volumes:
      - .:/app
    ports:
      - 3000:3000
  mongo_db:
    container_name: MongoDB
    image: mongo
    volumes:
      - mongo_db:/data/db
    ports:
      - 27017:27017
volumes:
  mongo_db:
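One caveat with mounting the whole project directory over /app: it also hides the node_modules that npm ci installed in the image, so the container uses whatever node_modules happens to exist on the host. A common variant (the same trick a later answer here uses) adds an anonymous volume so the image's node_modules is kept, e.g. under the app service:
volumes:
  - .:/app
  - /app/node_modules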
I'm trying to use Docker Compose to put a Node.js server and a MongoDB database into containers. I'm also trying to use yarn as my package manager. My project is coded in TypeScript, and MongoDB is used as a service for my server.
For what I have at the moment, it works just fine, but I'm not sure if my implementation is even good. When I run docker-compose up, the containers and the images of Node.js and MongoDB are created, and the containers are running on the specified ports.
The problem is that when I kill the MongoDB container, my Node.js server still has access to the database, as if the MongoDB container were still running. I would guess that it's because my MongoDB server is also running in the same container as my Node.js server.
This is my Dockerfile:
FROM node:lts-alpine
ADD package.json /app/package.json
ADD yarn.lock /app/yarn.lock
WORKDIR /app
# Installing packages
RUN yarn
ADD . /app
ENV NODE_ENV=production
# Building TypeScript files
RUN yarn build
CMD ["node", "./dist/app.js"]
These are the scripts in my package.json file:
"scripts": {
  "lint": "eslint src/**/*.ts",
  "format": "eslint src/**/*.ts --fix",
  "start": "tsc && node --unhandled-rejections=strict ./dist/app.js",
  "debug": "export DEBUG=* && npm run start",
  "test": "echo \"Error: no test specified\" && exit 1",
  "dev": "ts-node src/app.ts",
  "build": "rm -rf build && tsc -p ."
},
And this is my docker-compose.yml file:
version: "3"
services:
  app:
    container_name: app
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .env
    volumes:
      - ./src:/app/src
    environment:
      - COSMOSDB_HOST=${COSMOSDB_HOST}
      - COSMOSDB_PORT=${COSMOSDB_PORT}
      - COSMOSDB_DBNAME=${COSMOSDB_DBNAME}
      - COSMOSDB_USER=${COSMOSDB_USER}
      - COSMOSDB_PASSWORD=${COSMOSDB_PASSWORD}
    ports:
      - "4000:4000"
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ./data:/data/db
I don't seem to understand what's wrong with my files...
Can you help me? Also, do you have any recommendations for optimizing the build or anything else? Thank you!
After I run docker-compose up -d --build, I run docker images, it shows:
REPOSITORY TAG IMAGE ID CREATED SIZE
test-tets-test-server_my-web latest 2a3f05e387a7 1 minutes ago 2.81GB
But when I run docker run -it 2a3f05e387a7 sh and look at the files, it seems that they have not been updated and are still the old versions.
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: .
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server
    ports:
      - "3030:3030"
    depends_on:
      - test-db
package.json
...
"scripts": {
  "test": "npm run lint && npm run mocha",
  "lint": "eslint src/. test/. --config .eslintrc.json --fix",
  "dev": "nodemon --legacy-watch src/",
  "start": "node src/"
},
...
Since docker-compose up -d --build does not always recreate the containers, you may still be seeing the old files, or they may come from the build cache.
Run docker-compose up -d --build --force-recreate to force it to recreate the containers.
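For example (the --no-cache flag is only needed if you suspect a stale build cache):
docker-compose build --no-cache
docker-compose up -d --force-recreate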
I read many threads about this, but none of them solves anything.
Some say you have to add --legacy-watch (or -L) to the nodemon command.
Others show several different configurations, and apparently nobody really knows what you have to do to get the server to restart when a file changes in the volume inside a Docker container.
Here is my configuration so far:
Dockerfile:
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# install nodemon globally
RUN npm install nodemon -g
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . /usr/src/app
# Exports
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3.1'
services:
  node:
    build: .
    user: "node"
    volumes:
      - ./:/usr/src/app
    ports:
      - 3000:3000
    depends_on:
      - mongo
    working_dir: /usr/src/app
    environment:
      - NODE_ENV=production
    expose:
      - "3000"
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
package.json
{
  "name": "node-playground",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon -L"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "ejs": "^2.7.1",
    "express": "^4.17.1",
    "mongoose": "^5.7.1"
  },
  "devDependencies": {
    "nodemon": "^1.19.2"
  }
}
I tried many different setups as well, like not installing nodemon globally but only as a project dependency, and also running the command from docker-compose.yml, and I believe many others I don't remember right now. Nothing worked.
If someone knows anything for certain about this, please help. Thanks!
Try it! This worked for me:
Via the CLI, use either --legacy-watch or -L for short. More information here.
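For example, either directly on the command line or via the start script in package.json (a sketch, assuming index.js is the entry file):
nodemon --legacy-watch index.js
"start": "nodemon -L index.js"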
I went ahead and created an example container and repo to show how you can achieve this.
Just follow the steps below, which outline how to use nodemon inside of a Docker container.
Docker Container: at DockerHub
Source Code: at GitHub
package.json:
{
  "name": "nodemon-docker-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start:express": "node ./index.js",
    "start": "nodemon"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "nodemon": "^1.19.2"
  }
}
Dockerfile:
FROM node:slim
WORKDIR /app
COPY package*.json ./
RUN apt-get update
RUN npm install
COPY . /app
# -or-
# COPY . .
EXPOSE 1337
CMD ["npm", "start"]
docker-compose.yml: (if you are using it)
version: "3"
services:
nodemon-test:
image: oze4/nodemon-docker-test
ports:
- "1337:1337"
How to reproduce:
Step 1 USING DOCKER RUN (skip this if you are using docker-compose; go to the docker-compose Step 1 below instead): pull down the example docker container
docker run -d --name "nodemon-test" -p 1337:1337 oze4/nodemon-docker-test
Step 1 USING DOCKER-COMPOSE:
See the docker-compose.yml file above for configuration
cd /path/to/dir/that/has/your/compose/file
docker-compose up -d
Step 2: verify the app works
http://localhost:1337
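Or, to check from a terminal instead of a browser:
curl http://localhost:1337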
Step 3: check the container logs, to get a baseline
docker logs nodemon-test
Step 4: I have included a bash script to make editing a file as simple as possible. We need to pop a shell on the container, and run the bash script (change.sh)
docker exec -it nodemon-test /bin/bash
bash change.sh
exit
Step 5: check the logs again to verify changes were made and that nodemon restarted
docker logs nodemon-test
As you can see from the logs, nodemon successfully restarted after the changes were made!
All right, thanks a lot to MattOestreich for your answer.
Now I got it working. I don't know exactly what it was; I did follow your setup, but of course I'm using docker-compose, and I also stripped some things out of it. I'm also not using the mongo image anymore, since I set up the database in a MongoDB Atlas cluster.
My current config:
Dockerfile:
FROM node:12.10
WORKDIR /app
COPY package*.json ./
RUN apt-get update
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3.1'
services:
  node:
    build: .
    volumes:
      - ./:/app
    ports:
      - 3000:3000
    working_dir: /app
    expose:
      - "3000"
package.json
{
"name": "node-playground",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "nodemon"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"body-parser": "^1.19.0",
"dotenv": "^8.1.0",
"ejs": "^2.7.1",
"express": "^4.17.1",
"mongoose": "^5.7.1"
},
"devDependencies": {
"nodemon": "^1.19.2"
}
}
Thanks again, Matt, and I hope this thread helps people in need like me.
Nodemon depends on chokidar for file watching, and a potential solution is to make it use polling by setting the CHOKIDAR_USEPOLLING environment variable to true.
For example you can do this in docker-compose.yml:
services:
  api1:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - /app/node_modules
      - ${PWD}:/app
    ports:
      - 80:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
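If you would rather bake the setting into the image instead of the compose file, the same variable can be set in the Dockerfile (a sketch):
ENV CHOKIDAR_USEPOLLING=true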
Change this in the Dockerfile:
CMD ["npm", "start"]
Change the start script:
"start": "nodemon -L server.js"
Build command:
docker build . -t <containername>
Use this command to run the Docker container:
docker run -v $(pwd):/app -p 8080:8080 -it <containername>
-v = volumes, the preferred mechanism for persisting data generated by and used by Docker containers; here it bind-mounts the host directory into the container.
/app = the mount target inside the container, matching WORKDIR /app
$(pwd) = expands to the present working directory on the host, so the current project directory is what gets mounted at /app
I have a "Cannot find module" error using Docker. I'm not sure what is going on. I've tried deleting 'volumes' in the docker-compose file. I have also tried removing the image (docker rmi) and running docker-compose up again. I'm really at a loss as to what is happening here. Any help would be appreciated.
docker-compose
version: '2'
services:
  nginx:
    build: "./nginx"
    links: ["node1", "node2"]
    ports: ["80:80"]
  node1:
    build:
      context: "./node"
      args:
        http_proxy: "${http_proxy}"
        https_proxy: "${https_proxy}"
    environment:
      http_proxy: "${http_proxy}"
      https_proxy: "${https_proxy}"
      NODE_PATH: "lib"
      NODE_ENV: "production"
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "password"
    links: ["postgres", "mongo"]
    ports: ["5000:5000"]
  node2:
    build:
      context: "./node"
      args:
        http_proxy: "${http_proxy}"
        https_proxy: "${https_proxy}"
    environment:
      http_proxy: "${http_proxy}"
      https_proxy: "${https_proxy}"
      NODE_PATH: "lib"
      NODE_ENV: "production"
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "password"
    links: ["postgres", "mongo"]
    ports: [5000]
  postgres:
    image: "postgres"
    environment:
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "password"
    ports: ["5432:5432"]
  mongo:
    image: mongo
    ports: ['27017:27017']
Dockerfile
FROM node
# Set up environment
RUN npm config set proxy $http_proxy
RUN npm config set https-proxy $https_proxy
# Install app
ENV INSTALL_PATH="/opt/node"
RUN ["mkdir", "-p", "$INSTALL_PATH"]
ADD package.json $INSTALL_PATH/package.json
ADD index.js $INSTALL_PATH/index.js
# Define working directory
WORKDIR $INSTALL_PATH
# Install dependencies
RUN npm install -g nodemon
RUN npm install
# Expose port
EXPOSE 5000
# Run app
ENTRYPOINT npm start
index.js
require('babel-core/register')()
require('babel-polyfill')
require('./bin/server.js')
package.json
{
  "name": "no-commerce",
  "version": "0.0.1",
  "description": "API for No-Commerce",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "dev": "./node_modules/.bin/nodemon index.js",
    "test": "NODE_ENV=test ./node_modules/.bin/mocha --compilers js:babel-register --require babel-polyfill",
    "lint": "eslint src/**/*.js",
    "docs": "./node_modules/.bin/apidoc -i src/ -o docs"
  },
Error: Cannot find module './bin/server.js'
File Structure:
- Root
  - docker-compose
  - node
    - package.json
    - bin
      - server.js
    - index.js
    - Dockerfile
  - nginx
-nginx
For starters, you are only adding these files to the container in your Dockerfile:
ADD package.json $INSTALL_PATH/package.json
ADD index.js $INSTALL_PATH/index.js
You also need to add server.js to $INSTALL_PATH/bin.
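For example, sticking with the ADD instructions already used in that Dockerfile (a sketch; copying the whole bin directory keeps the require('./bin/server.js') path intact):
ADD bin $INSTALL_PATH/bin
or simply add the whole build context in one go:
ADD . $INSTALL_PATH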