Avoid restarting docker when code is changed - node.js

I'm using Docker for the first time, so this may be a stupid question.
I configured a Dockerfile and a docker-compose file.
Every time I change my code (Node.js code), I need to run:
docker-compose up --build
to see the changes. I want to know if there is a way to update my code without having to run that command every time.
Thanks.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV NODE_PATH=/app/node_modules
EXPOSE 1337
CMD ["npm", "start"]
docker-compose.yml:
version: '3'
services:
  node_api:
    container_name: app
    restart: always
    build: .
    ports:
      - "1337:1337"
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:latest
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
      - MONGO_INITDB_DATABASE=test
    ports:
      - "27017:27017"
    restart: always

You can mount the directory containing your source code into the container and use a tool such as nodemon, which watches files and restarts the application on changes.
See the article Docker Tips: Development With Nodemon for details.
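Applied to the compose file in the question, that could look something like the sketch below. The entry file name (app.js) and mount paths are assumptions; adjust them to your project layout:

```yaml
services:
  node_api:
    build: .
    volumes:
      # Bind-mount the host source so edits show up inside the container
      - .:/usr/src/app
      # Anonymous volume so the image's node_modules isn't shadowed
      # by whatever is (or isn't) on the host
      - /usr/src/app/node_modules
    # Run the app under nodemon instead of plain `npm start`
    # (app.js is an assumed entry point)
    command: npx nodemon app.js
```

With this in place, docker-compose up --build is only needed when dependencies or the Dockerfile change; code edits are picked up by nodemon.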

Related

Docker run exits with code 0 - Node.js & Postgres

I've built a small Node.js app with a Postgres database. I wanted to Dockerize it, and everything worked fine when I was using docker-compose; now I'm trying to put it on AWS EC2 using docker run, but it doesn't work.
Docker run command:
docker run -d -p 80:4000 --name appname hubname/appname
The container starts and stops immediately.
I'm fairly new to Docker, so I guess I'm missing something.
Please find my Dockerfile:
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
RUN npm start
My docker-compose:
version: '3.7'
services:
  appname-db:
    image: postgres
    container_name: appname-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - '5432:5432'
    volumes:
      - appname-db:/var/lib/postgresql/data
  ts-node-docker:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    volumes:
      - ./src:/home/node/app/src
      - ./nodemon.json:/home/node/app/nodemon.json
    container_name: ts-node-docker
    environment:
      DB_HOST: ${DB_HOST}
      DB_SCHEMA: ${DB_SCHEMA}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      appname-db:
        condition: service_healthy
    expose:
      - '4000'
    ports:
      - '80:4000'
    stdin_open: true
    command: npm run start
volumes:
  appname-db:
When I do docker ps -a, I see that the container started and exited right after.
I can't find any logs whatsoever; it's very frustrating.
Any help/pointer would be really appreciated.
Edit 1 - Some additional information:
When I'm using docker-compose I can see 2 containers, but with docker run I only get the one.
You should use CMD, not RUN, to execute the npm run start command. CMD runs every time a container is started, while RUN is executed just once, at build time.
This means you start the container, but nothing else is started inside of it, so it exits because there's nothing more to execute; RUN was already executed the last time you built the image.
Keep RUN for npm install, otherwise it would try to reinstall the node_modules you already have on every start. But for the start command, do this instead: CMD ["npm", "run", "start"]
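Applied to the Dockerfile from the question, the production stage would look something like this (a sketch; only the last instruction changes):

```dockerfile
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .

FROM base as production
ENV NODE_PATH=./build
# CMD (not RUN), so the server starts each time the container starts
CMD ["npm", "run", "start"]
```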
In case it helps, here are my working docker-compose.yml and Dockerfile for an Express API with a Postgres DB.
This is tested and works both on an Ubuntu server and on a dev machine (with docker-compose up for both environments). Note that there are two Dockerfiles (dev/prod).
You don't need to add password/user stuff in your docker-compose; you can just include your env_file and Docker/docker-compose sets everything up correctly for you.
version: "3.8"
services:
# D A T A B A S E #
db:
env_file:
- ./.env
image: postgres
container_name: "db"
restart: always
ports:
- "${POSTGRES_PORT}:${POSTGRES_PORT}"
volumes:
- ./db-data/:/var/lib/postgresql/data/
# A P I #
api:
container_name: "api"
env_file:
- ./.env
build:
context: .
dockerfile: "DOCKERFILE.${NODE_ENV}"
ports:
- ${API_PORT}:${API_PORT}
depends_on:
- db
restart: always
volumes:
- /app/node_modules
- .:/usr/src/service
working_dir: /usr/src/service
# A D M I N E R #
adminer:
env_file:
- ./.env
container_name: "adminer"
image: adminer:latest
restart: always
ports:
- ${ADMINER_PORT}:${ADMINER_PORT}
depends_on:
- db
Dockerfile.dev:
FROM node:14.17.3
ENV NODE_ENV=dev
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]
Dockerfile.prod is pretty much the same; just change the environment to prod.
One more thing I see is that you include the full path from your home directory when copying; try without that, and just copy relative to the location of your Dockerfile.

Nodemon not restarting in docker image

I am very new to docker and trying to get a working environment for local development. My issue is I can't seem to get nodemon to trigger correctly when there are changes. The nodemon config works outside of docker so I know that isn't the issue.
Here is what I have in my node app folder.
# Dockerfile.local
FROM node:16
WORKDIR /app
COPY package*.json .
RUN yarn install
COPY . .
RUN yarn build
CMD ["yarn", "watch"]
# docker-compose.yml
version: "3"
services:
db:
image: postgres:12.3
restart: always
volumes:
- db_data:/var/lib/postgresql/data
environment:
# ... my config
actions:
build:
context: ./action-handlers
dockerfile: .docker/${DOCKERFILE}
depends_on:
- "hasura"
volumes:
- actions:/./app
environment:
# ...my confg
hasura:
ports:
- 8080:8080
- 9691:9691
build:
context: ./hasura
dockerfile: .docker/${DOCKERFILE}
depends_on:
- "db"
environment:
# ...my config
volumes:
db_data:
actions:
The Docker image works perfectly when I run docker-compose build to build it and then docker-compose up to run it. Nodemon runs, and the other two services run as expected, so there aren't any issues there. It just doesn't restart when I make code changes. I have a feeling I am not using volumes correctly.
I have a feeling that - "./actions:/app/action-handers" is incorrect.
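For what it's worth, actions:/./app in the compose file above mounts a named volume, and a named volume does not track files on the host, so nodemon never sees host edits. A bind mount of the source directory is the usual fix. A sketch, assuming the service code lives in ./action-handlers and the container's WORKDIR is /app:

```yaml
services:
  actions:
    build:
      context: ./action-handlers
      dockerfile: .docker/${DOCKERFILE}
    volumes:
      # Bind-mount the host source so file changes reach nodemon
      - ./action-handlers:/app
      # Anonymous volume so the image's installed node_modules is kept
      - /app/node_modules
```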

Docker react express app not running after one process runs

Hi, I Dockerized a React.js and Express.js project, and everything worked well when I had written separate docker-compose files.
But now I've written one compose file,
docker-compose-all-dev.yml:
version: '3.7'
services:
  client:
    container_name: react-dev
    build:
      context: ./client
      dockerfile: Dockerfile.react-dev
    ports:
      - 3000:3000
  server:
    container_name: server-dev
    build:
      context: ./server
      dockerfile: Dockerfile.server-dev
    ports:
      - 5000:5000
Now it's only running the client server; why isn't the backend server running?
But it works when I run them from two different files like this.
docker-compose-client.yml file:
version: '3.7'
services:
  client:
    container_name: react-dev
    build:
      context: ./client
      dockerfile: Dockerfile.react-dev
    ports:
      - 3000:3000
and docker-compose-server.yml file
version: '3.7'
services:
  server:
    container_name: server-dev
    build:
      context: ./server
      dockerfile: Dockerfile.server-dev
    ports:
      - 5000:5000
Can anyone tell me what the possible issue is when I run both apps from one compose file, and how I can solve it?
For your reference,
my Dockerfile-server-dev file:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
and my Dockerfile.react-dev file
FROM node:14.1-alpine as build
WORKDIR /app
COPY . /app
ENV PATH /app/node_modules/.bin:$PATH
RUN yarn config delete proxy
RUN npm config rm proxy
RUN npm config rm https-proxy
RUN npm install
RUN npm start
I don't know what the issue is with running two development servers in one docker-compose file.
There are at least a few problems. Your Dockerfile.react-dev is missing its ENTRYPOINT/CMD part, and you should not start your server with a RUN instruction; instead, use ENTRYPOINT, and possibly CMD, to start it. Another problem is that you expose a different port in Dockerfile-server-dev than the one mapped in your compose file.
This solved the issue.
I just added the line CMD ["npm", "start"] and removed RUN npm start; now it is working.
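Putting the fix together, the end of Dockerfile.react-dev would look something like this sketch (the earlier instructions are unchanged from the question):

```dockerfile
FROM node:14.1-alpine as build
WORKDIR /app
COPY . /app
ENV PATH /app/node_modules/.bin:$PATH
RUN yarn config delete proxy
RUN npm config rm proxy
RUN npm config rm https-proxy
RUN npm install
# CMD instead of RUN npm start: the dev server now launches when the
# container starts, not during the image build
CMD ["npm", "start"]
```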

Install all needed node modules inside Docker container, not just copying them

The problem is the following. We have a project with several services running with Docker Compose.
When one of us adds a new module with npm install <module name> --save, the package*.json files are updated and the module is installed in the node_modules folder.
For that person, running docker-compose up --build works fine.
Nevertheless, when someone else pulls the updated versions of the package*.json files and tries to run docker-compose up --build, Docker outputs the error that the module is not found.
It seems like the local node_modules folder is copied directly into the Docker container.
The question is: how can we make sure that all modules listed in the package*.json files are installed inside the container, not just copied?
Here is one Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run devStart
as well as the docker-compose.yaml file:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
We do have already a .dockerignore file:
node_modules
npm-debug.log
Update
@DazWilkin recognized that we mount our context into the container, which overrides the image's content with local content. We do this because we use nodemon, so we can see changes to the code on the fly. Is there a possibility to exclude the node_modules directory from this?
I think that your issue is that Docker is caching layers and not detecting the changed package*.json.
What happens if everyone rebuilds with docker-compose build --no-cache (and then runs docker-compose up)?
NOTE Because you're running npm install during the container image build, you don't need (and probably don't want) to npm install on the host beforehand.
NOTE Since you're using containers, you may want to use a container registry too. That way, once the first updated Docker images have been built, everyone else can pull them from the registry rather than rebuild them themselves.
Update
I just noticed that you're mounting your context into the container; you don't want to do that, because it overrides the work you've done building the image:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app <<< ---- DELETE
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
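Regarding the update question (keeping the bind mounts for nodemon while excluding node_modules): the commented-out lines in the compose file are in fact the usual trick. An anonymous volume layered over the bind mount masks the host's node_modules with the one installed during the image build. A sketch for one service:

```yaml
services:
  list-service:
    build:
      context: ./list-service
      dockerfile: Dockerfile
    volumes:
      # Host source mounted so nodemon sees code changes
      - ./list_service:/usr/src/app
      # Anonymous volume: keeps the node_modules from `npm install`
      # in the image instead of whatever is on the host
      - /usr/src/app/node_modules
```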

Environment variables not loading inside docker container

I have two services which are Dockerized. The first service is the api service, which depends on the mongo service. However, for some reason the api service can't connect to the mongo service because the environment variables are not being passed into the container correctly. I have two variables in my .env, namely:
JWT_SECRET=somesecret
DB_URL_LOCAL=mongodb://mongo:27017/apii
I tried setting the environment variables in the docker-compose file; when I run docker-compose config I see the variables and their values displayed, but my application can't detect them. MongoDB keeps throwing the error UnhandledPromiseRejectionWarning: MongoParseError: URI malformed, cannot be parsed.
I also tried to set the environment variables in the main Dockerfile, but it still didn't work. Please, I need some assistance. Here is a copy of my Dockerfile and docker-compose.yml file.
This is the Dockerfile
FROM node:12-alpine AS base
WORKDIR /usr/src/app
FROM base AS build
COPY package*.json .babelrc ./
RUN npm install
COPY ./src ./src
RUN npm run build
RUN npm prune --production
FROM base AS release
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
USER node
EXPOSE 4000
CMD [ "node", "./dist/server/server.js" ]
This is the docker-compose.yml file
version: '3.8'
services:
  api:
    build: .
    image: api
    working_dir: /usr/src/app
    expose:
      - 4000
    ports:
      - '4000:4000'
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    env_file:
      - .env
    environment:
      - DB_URL_LOCAL=$DB_URL_LOCAL
      - JWT_SECRET=$JWT_SECRET
    image: mongo
    volumes:
      - './data:/data/db'
    ports:
      - 27017:27017
    restart: always
Thank you.
In docker-compose.yml, the environment variables should be declared on the api service, not the mongo service; the mongo service only needs its own initialization variables, e.g. something like MONGO_INITDB_DATABASE=${DATABASE}.
Use something like the following; env_file is an alternative (see the Compose documentation for more):
version: '3.8'
services:
  api:
    build: .
    image: api
    working_dir: /usr/src/app
    expose:
      - 4000
    ports:
      - '4000:4000'
    env_file:
      - ./secret.env
    environment:
      - DB_URL_LOCAL=${DB_URL_LOCAL}
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - './data:/data/db'
    ports:
      - 27017:27017
    restart: always
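On the application side, it can also help to fail fast with a clear message when the variable is missing, instead of letting the Mongo driver throw "URI malformed". A small sketch; the function name getDbUrl is made up for illustration:

```javascript
// DB_URL_LOCAL is the variable name from the question's .env file.
function getDbUrl(env) {
  const url = env.DB_URL_LOCAL;
  if (!url) {
    // Fail fast with an actionable message instead of Mongo's
    // cryptic "MongoParseError: URI malformed"
    throw new Error('DB_URL_LOCAL is not set - check the api service environment');
  }
  return url;
}

// Example: with the variable present, the URL is returned unchanged
console.log(getDbUrl({ DB_URL_LOCAL: 'mongodb://mongo:27017/apii' }));
```

In the real app you would call getDbUrl(process.env) before connecting, which makes a missing compose-level variable obvious in the container logs.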
