I am very new to Docker and trying to get a working environment for local development. My issue is that I can't get nodemon to trigger a restart when there are code changes. The nodemon config works outside of Docker, so I know that isn't the issue.
Here is what I have in my node app folder.
# Dockerfile.local
FROM node:16
WORKDIR /app
COPY package*.json .
RUN yarn install
COPY . .
RUN yarn build
CMD ["yarn", "watch"]
# docker-compose.yml
version: "3"
services:
db:
image: postgres:12.3
restart: always
volumes:
- db_data:/var/lib/postgresql/data
environment:
# ... my config
actions:
build:
context: ./action-handlers
dockerfile: .docker/${DOCKERFILE}
depends_on:
- "hasura"
volumes:
- actions:/./app
environment:
# ...my confg
hasura:
ports:
- 8080:8080
- 9691:9691
build:
context: ./hasura
dockerfile: .docker/${DOCKERFILE}
depends_on:
- "db"
environment:
# ...my config
volumes:
db_data:
actions:
The Docker image works perfectly when I run docker-compose build to build it and then docker-compose up to run it. Nodemon runs, and the other two services run as expected, so there aren't any issues there. It just doesn't restart when I make code changes. I have a feeling I am not using volumes correctly.
I have a feeling that - "./actions:/app/action-handers" is incorrect.
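For comparison, this is roughly what I think a bind mount would look like for the actions service (a sketch only; it assumes ./action-handlers is the host source folder next to docker-compose.yml and that /app is the WORKDIR in the image):
actions:
  build:
    context: ./action-handlers
    dockerfile: .docker/${DOCKERFILE}
  depends_on:
    - "hasura"
  volumes:
    - ./action-handlers:/app      # bind mount so nodemon sees edits made on the host
    - /app/node_modules           # anonymous volume so the image's node_modules isn't hidden
  environment:
    # ... my config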
Related
I've built a small Node.js app with a Postgres database. I wanted to Dockerize it, and everything worked fine when I was using docker-compose. Now I'm trying to put it on AWS EC2 using docker run, but it doesn't work.
Docker Run command:
docker run -d -p 80:4000 --name appname hubname/appname
The container starts and stops immediately.
I'm fairly new to Docker, so I guess I'm missing something.
Please find my Dockerfile:
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
RUN npm start
My docker-compose:
version: '3.7'
services:
appname-db:
image: postgres
container_name: appname-db
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
ports:
- '5432:5432'
volumes:
- appname-db:/var/lib/postgresql/data
ts-node-docker:
build:
context: .
dockerfile: Dockerfile
target: production
volumes:
- ./src:/home/node/app/src
- ./nodemon.json:/home/node/app/nodemon.json
container_name: ts-node-docker
environment:
DB_HOST: ${DB_HOST}
DB_SCHEMA: ${DB_SCHEMA}
DB_USER: ${DB_USER}
DB_PASSWORD: ${DB_PASSWORD}
depends_on:
- appname-db:
condition: service_healthy
expose:
- '4000'
ports:
- '80:4000'
stdin_open: true
command: npm run start
volumes:
appname-db:
When I do docker ps -a, I see that the container started and exited right after.
I can't find any logs whatsoever, which is very frustrating.
Any help or pointers would be really appreciated.
Edit 1 - Some additional information:
When I'm using docker-compose, I can see both containers running. With docker run, however, the app container exits right after starting.
You should use CMD and not RUN when you execute the npm run start command. CMD runs every time a container is started, while RUN is just executed once, at build time.
This means the container starts, but nothing else is started inside of it, so it exits because there is nothing left to execute. RUN npm start was already executed the last time you built the image.
Use RUN for npm install, otherwise it would try to reinstall the node_modules you already have every time the container starts. But for the start command, do this instead: CMD ["npm", "run", "start"]
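Applied to the Dockerfile above, the production stage would end with something like this (a sketch; only the last instruction changes):
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
# CMD, not RUN, so the app starts every time a container is created from the image
CMD ["npm", "run", "start"]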
In case it helps, I'll show you my working docker-compose.yml and Dockerfile for an Express API with a Postgres DB.
This is tested and works both on an Ubuntu server and on a dev machine (with docker-compose up for both environments). Note that there are two Dockerfiles (dev/prod).
You don't need to put user/password values directly in your docker-compose file; you can just include your env_file and docker-compose substitutes everything correctly for you.
version: "3.8"
services:
# D A T A B A S E #
db:
env_file:
- ./.env
image: postgres
container_name: "db"
restart: always
ports:
- "${POSTGRES_PORT}:${POSTGRES_PORT}"
volumes:
- ./db-data/:/var/lib/postgresql/data/
# A P I #
api:
container_name: "api"
env_file:
- ./.env
build:
context: .
dockerfile: "DOCKERFILE.${NODE_ENV}"
ports:
- ${API_PORT}:${API_PORT}
depends_on:
- db
restart: always
volumes:
- /app/node_modules
- .:/usr/src/service
working_dir: /usr/src/service
# A D M I N E R #
adminer:
env_file:
- ./.env
container_name: "adminer"
image: adminer:latest
restart: always
ports:
- ${ADMINER_PORT}:${ADMINER_PORT}
depends_on:
- db
Dockerfile.dev:
FROM node:14.17.3
ENV NODE_ENV=dev
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]
Dockerfile.prod is pretty much the same, just change the environment to prod.
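A sketch of what Dockerfile.prod might look like, then, assuming the only intended difference is the environment (adjust the start script if production uses a different one):
FROM node:14.17.3
ENV NODE_ENV=prod
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]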
One thing I see is that you include the full path from your home directory when copying; try it without that, and just copy from the location relative to your Dockerfile's working directory (here /).
I have a docker-compose setup with three containers: a MySQL db, a serverless app, and a test container (to test the serverless app with Mocha). When I run docker-compose up --build, the startup order seems to be wrong. Both the MySQL db and the serverless app need to be up and working before test can run correctly (depends_on alone doesn't help much, since it only controls start order).
Hence I tried using dockerize to wait on TCP ports 3306 and 4200, with a timeout, before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless#2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
mysql-coredb:
image: mysql:5.6
container_name: mysql-coredb
expose:
- "3306"
ports:
- "3306:3306"
serverless-app:
build: .
image: serverless-app
container_name: serverless-app
depends_on:
- mysql-coredb
command: sls offline --stage local --host 0.0.0.0
expose:
- "4200"
ports:
- "4200:4200"
links:
- mysql-coredb
volumes:
- /app/node_modules
- .:/app
test:
image: node:12
working_dir: /app
container_name: test
volumes:
- /app/node_modules
- .:/app
command: dockerize
-wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
bash -c "npm test"
depends_on:
- mysql-coredb
- serverless-app
links:
- serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: together with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are related to first-generation Docker networking and aren't necessary; and overwriting the image's code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node directly on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
mysql-coredb:
image: mysql:5.6
ports:
- "3306:3306"
serverless-app:
build: .
depends_on:
- mysql-coredb
ports:
- "4200:4200"
test:
build: .
command: >-
dockerize
-wait tcp://serverless-app:4200
-wait tcp://mysql-coredb:3306
-timeout 15s
npm test
depends_on:
- mysql-coredb
- serverless-app
The problem is the following: we have a project with several services running with docker-compose.
When one of us adds a new module with npm install <module name> --save, the package*.json files are updated and the module is installed in the node_modules folder.
For that person, running docker-compose up --build works fine.
Nevertheless, when someone else pulls the updated package*.json files and tries to run docker-compose up --build, Docker outputs an error saying the module is not found.
It seems like the local node_modules folder is copied directly into the Docker container.
The question is: how can we make sure that all modules listed in the package*.json files are installed inside the container rather than just copied from the host?
Here is one Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run devStart
as well as the docker-compose.yaml file:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
We do already have a .dockerignore file:
node_modules
npm-debug.log
Update
@DazWilkin recognized that we mount our build context into the container, which overrides the image's content with the local content. We do this because we use nodemon, so we can see code changes on the fly. Is there a way to exclude the node_modules directory from this?
I think that your issue is that Docker is caching layers and not detecting the changed package*.json.
What happens if everyone runs docker-compose build --no-cache and then docker-compose up?
NOTE: because npm install runs during the image build, you don't need (and probably don't want) to run npm install on the host beforehand.
NOTE: since you're using containers, you may want to use a container registry too. That way, once the updated images have been built once, everyone else can pull them from the registry rather than rebuilding them themselves.
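One way to wire that up with Compose (the registry and image names here are only placeholders) is to give each service an image: name that points at your registry:
list-service:
  build:
    context: ./list-service
    dockerfile: Dockerfile
  image: registry.example.com/myteam/list-service:latest
Whoever builds runs docker-compose build followed by docker-compose push; everyone else can then run docker-compose pull && docker-compose up.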
Update
I just noticed that you're mounting your context into the container; you don't want to do that, because it overrides the work you've done building the image:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app <<< ---- DELETE
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
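If you do want to keep the bind mounts so that nodemon picks up changes, a common pattern (it's what the commented-out lines above hint at) is to add an anonymous volume for node_modules, so the modules installed in the image aren't hidden by the host directory, e.g. for one service:
list-service:
  build:
    context: ./list-service
    dockerfile: Dockerfile
  ports:
    - "4041"
  volumes:
    - ./list_service:/usr/src/app
    - /usr/src/app/node_modules   # anonymous volume keeps the image's node_modules visible
This is a sketch only; with this approach everyone still needs to rebuild the image whenever package*.json changes.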
I'm using Docker for the first time, so this could be a stupid question.
I configured a Dockerfile and a docker-compose file.
Every time I change my code (Node.js code), I need to run the command:
docker-compose up --build
if I want to see the changes. Is there a way to update my code without having to run that command every time?
Thanks.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV NODE_PATH=/app/node_modules
EXPOSE 1337
CMD ["npm", "start"]
docker-compose:
version: '3'
services:
node_api:
container_name: app
restart: always
build: .
ports:
- "1337:1337"
links:
- mongo
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=admin
- MONGO_INITDB_DATABASE=test
ports:
- "27017:27017"
restart: always
You can mount the directory containing your source code into the container and use a tool such as nodemon, which will watch files and restart the application on changes.
See the article Docker Tips: Development With Nodemon for details.
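Applied to the compose file above, that could look roughly like this (a sketch; it assumes nodemon is installed as a dev dependency and that a script such as npm run dev starts the app with nodemon):
node_api:
  container_name: app
  restart: always
  build: .
  command: npm run dev            # assumed script, e.g. "dev": "nodemon app.js"
  volumes:
    - .:/usr/src/app              # bind-mount the source so edits show up in the container
    - /usr/src/app/node_modules   # keep the node_modules installed in the image
  ports:
    - "1337:1337"
  links:
    - mongo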
I'm relatively new to Docker and I just created a Node.js application that should connect to other services also running on Docker.
So I have the source code, a Dockerfile to build the image, and a docker-compose file to orchestrate the environment.
I had a few problems in the beginning, so I updated my source code and found out that it doesn't get picked up by the next docker-compose build.
For example, I commented out all the lines that connect to Redis and MongoDB. I run the application locally and it's fine, but when I build and run it in a container again, I get "Connection refused..." errors.
I tried many things and this is what I have at the moment:
Dockerfile
FROM node:9
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 8090
docker-compose.yml
version: '3'
services:
app:
build: .
ports:
- "8090:8090"
container_name: app
redis:
image: redis:latest
ports:
- "6379:6379"
container_name: redis
mongodb:
image: mongo:latest
container_name: "mongodb"
volumes:
- ./data/db:/data/db
ports:
- 27017:27017
up.sh
sudo docker stop app
sudo docker rm app
docker-compose build --no-cache app
sudo docker-compose up --force-recreate
Any ideas on what the problem could be? Why doesn't it use the current source code? Is it using some sort of cache?