Docker Compose - Node.js cannot find the Mongo service when using a bind mount

Here is my docker-compose.yml. When I comment out the volumes section of "khaothi-manager", my services work correctly. But when I uncomment it, my Node service throws an error saying it cannot connect to Mongo.
version: "3.8"
services:
mongo:
image: mongo
restart: always
env_file: ./.env
ports:
- $MONGO_LOCAL_PORT:$DB_PORT
volumes:
- ./data:/data/db
networks:
- hm_khaothi
khaothi-manager:
container_name: khaothi-manager
image: khaothi-manager
restart: always
volumes:
- ./admin:/app
build: ./admin
env_file: ./.env
links:
- mongo
- khaothi-resource
ports:
- $MANAGER_PORT:$MANAGER_PORT
environment:
- MANAGER_HOST=$MANAGER_HOST
- MANAGER_PORT=$MANAGER_PORT
- RESOURCE_HOST=khaothi-resource
- RESOURCE_PORT:$RESOURCE_PORT
- DB_HOST=mongo
- DB_NAME=$DB_NAME
- DB_PORT=$DB_PORT
networks:
- hm_khaothi
My Dockerfile
# syntax=docker/dockerfile:1
FROM node:14-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
This is the error
(node:37) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: connection timed out
at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:807:32)
at /app/node_modules/mongoose/lib/index.js:342:10
...

It worked correctly when I added another volume, /app/node_modules:
khaothi-manager:
  container_name: khaothi-manager
  image: khaothi-manager
  restart: always
  volumes:
    - ./admin:/app
    - /app/node_modules
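The usual reason this pattern helps: the bind mount ./admin:/app overlays the image's /app directory, including the node_modules created by RUN npm install, so the container ends up running against whatever happens to be in the host folder. The extra anonymous volume keeps the container's own node_modules in place. A commented sketch of the idea, assuming the service layout above (the comments are an interpretation, not part of the original post):

services:
  khaothi-manager:
    build: ./admin
    volumes:
      # bind mount: host sources overlay /app for live edits,
      # hiding everything the image put there at build time
      - ./admin:/app
      # anonymous volume: re-masks /app/node_modules so the modules
      # installed during the image build remain visible to the app
      - /app/node_modules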

Related

ModuleNotFoundError: No module named 'api_yamdb.wsgi'

When deploying the project, the web container crashes with a No module named 'api_yamdb.wsgi' error. I have already spent a lot of time on this but could not figure out what exactly needs to be done to solve the problem. Please help!
My Dockerfile:
FROM python:3.7-slim
WORKDIR /app
COPY ./api_yamdb/requirements.txt .
RUN pip3 install -r requirements.txt --no-cache-dir
COPY . .
CMD ["gunicorn", "api_yamdb.wsgi:application", "--bind", "0:8000" ]
My docker-compose:
version: '3.8'
services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - /var/lib/postgresql/data/
    env_file:
      - ./.env
  web:
    image: porter4ch/yamdb_final:latest
    restart: always
    volumes:
      - static_value:/app/static/
      - media_value:/app/media/
    depends_on:
      - db
    env_file:
      - ./.env
  nginx:
    image: nginx:1.21.3-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - static_value:/var/html/static/
      - media_value:/var/html/media/
    depends_on:
      - web
volumes:
  static_value:
  media_value:

Changing the source code does not live-update when using docker-compose and volumes on a MERN stack

I would like to implement hot reloading in my development environment, so that when I change anything in the source code, the change is reflected inside the Docker container through the mounted volume and I can see it live on localhost.
Below is my docker-compose file
version: '3.9'
services:
  server:
    restart: always
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      # don't overwrite this folder in container with the local one
      - ./app/node_modules
      # map current local directory to the /app inside the container
      # This is a must for development in order to update our container whenever a change
      # to the source code is made. Without this, you would have to rebuild the image each
      # time you make a change to source code.
      - ./server:/app
    # ports:
    #   - 3001:3001
    depends_on:
      - mongodb
    environment:
      NODE_ENV: ${NODE_ENV}
      MONGO_URI: mongodb://${MONGO_ROOT_USERNAME}:${MONGO_ROOT_PASSWORD}@mongodb
    networks:
      - anfel-network
  client:
    stdin_open: true
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./app/node_modules
      - ./client:/app
    # ports:
    #   - 3000:3000
    depends_on:
      - server
    networks:
      - anfel-network
  mongodb:
    image: mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
    volumes:
      # for persistence storage
      - mongodb-data:/data/db
    networks:
      - anfel-network
  # mongo express used during development
  mongo-express:
    image: mongo-express
    depends_on:
      - mongodb
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
      ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
      ME_CONFIG_MONGODB_PORT: 27017
      ME_CONFIG_MONGODB_SERVER: mongodb
      ME_CONFIG_BASICAUTH_USERNAME: root
      ME_CONFIG_BASICAUTH_PASSWORD: root
    volumes:
      - mongodb-data
    networks:
      - anfel-network
  nginx:
    restart: always
    depends_on:
      - server
      - client
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '8080:80'
    networks:
      - anfel-network
    # volumes:
    #   - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
networks:
  anfel-network:
    driver: bridge
volumes:
  mongodb-data:
    driver: local
Any suggestions would be appreciated.
You have to create a bind mount; this can help you.
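For reference, a minimal sketch of what that bind mount could look like for the server service above. It assumes the app lives in /app inside the image and that the dev command runs a file watcher such as nodemon (both are assumptions, adjust to your setup); note that the node_modules volume should use the absolute container path, not ./app/node_modules:

services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    volumes:
      # bind mount: host sources are mirrored into the container,
      # so edits on the host are seen by the process inside
      - ./server:/app
      # anonymous volume (absolute container path) so the image's
      # node_modules are not hidden by the bind mount above
      - /app/node_modules
    # hot reload also needs a watcher running inside the container,
    # e.g. nodemon; a plain "node index.js" will not pick up changes
    command: npm run dev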

Docker-Compose File Invalid

I am using Docker and Docker Compose for the first time and I am getting this error: ERROR: The Compose file is invalid because: Service volumes have neither an image nor a build context specified. At least one must be provided. However, my docker-compose file does have a build context specified. I tried moving the build context around to different places but still made no headway. Here is my docker-compose file:
version: '3.4'
services:
  express:
    container_name: express
    image: node:12-alpine
    volumes:
      - type: bind
        source: ./
        target: /app
      - type: volume
        source: nodemodules
        target: /app/node_modules
        volume:
          nocopy: true
    build: .
    working_dir: /app
    command: npm run dev
    ports:
      - '4000:4000'
    environment:
      - NODE_ENV=development
      - PORT=4000
  volumes:
  nodemodules:
  links:
    - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - '27017:27017'
      - '27018:27018'
      - '27019:27019'
What exactly could I be doing wrong? Thank you.
As @DavidMaze wrote:
The lines called volumes: and nodemodules: are wrongly indented. They are at the same level as the other services, but are empty. You probably want to delete these lines.
Take a look: docker-compose-file.
I was able to fix it after some days of googling. Apparently, all I had to do was to put
volumes:
  nodemodules:
exactly like that on the last lines of the file and remove them from the services mapping. So my file became like this at the end of the day:
version: '3.4'
services:
  express:
    container_name: express
    image: node:12-alpine
    volumes:
      - type: bind
        source: ./
        target: /app
      - type: volume
        source: nodemodules
        target: /app/node_modules
        volume:
          nocopy: true
    build: .
    working_dir: /app
    command: npm run dev
    ports:
      - '4000:4000'
    environment:
      - NODE_ENV=development
      - PORT=4000
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - '27017:27017'
      - '27018:27018'
      - '27019:27019'
volumes:
  nodemodules:

Can't get docker-compose networking to work

I'm trying to make requests to my server container from my client app in another container. The Docker Compose docs state that the network is set up automatically, so shouldn't all ports be accessible from all containers? When I make a curl request to port 4000 from outside the containers (in a fresh terminal), it works. However, when I enter the client container (selektor-client) and try the same request, it fails.
curl --request POST http://localhost:4000/api/music
What am I doing wrong?
docker-compose.yaml:
version: "3"
services:
client:
container_name: selektor-client
restart: always
build: ./client
ports:
- "3000:3000"
volumes:
- ./client/:/client/
- /client/node_modules/
command: ["yarn", "start"]
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
mongo:
container_name: selektor-mongo
command: mongod --noauth
build: .
restart: always
volumes:
# - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- data-volume:/data/db
ports:
- "27017:27017"
volumes:
data-volume:
When containers of the same stack call each other, the hostname is, by default, the service name, and the port is the internal (container) port: <servicename>:<internal_port>. So, based on this part of your example:
version: "3"
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
The URL your client has to use to reach the server is http://server:4000.
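To make the distinction concrete, here is the same ports mapping with explanatory comments added (the comments are an interpretation, not part of the original answer). Inside the client container, localhost refers to the client container itself, which is why the curl to localhost:4000 fails there:

services:
  server:
    ports:
      # "host:container" - the left-hand 4000 only matters for requests
      # coming from the host machine (e.g. curl in a fresh terminal);
      # other containers on the same compose network bypass this mapping and
      # reach the service at http://server:4000 (service name + container port)
      - "4000:4000"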
I had the same problem. My solution was declaring a network, giving the network a subnet mask, and assigning a static IP to each container:
version: "3"
networks:
my_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
services:
mongodb:
image: mongo:4.2.1
container_name: mongo
command: mongod --auth
hostname: mongo
networks:
my_network:
ipv4_address: 172.20.0.5
volumes:
- /data/db/mongo:/data/db
ports:
- "27017:27017"
rabitmq:
hostname: rabbitmq
container_name: rabbitmq
image: rabbitmq:latest
networks:
my_network:
ipv4_address: 172.20.0.3
volumes:
- /var/lib/rabbitmq:/data/db
ports:
- "5672:5672"
- "15672:15672"
restart: always
You can then access the containers through their IP addresses.
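As a usage sketch, another service on the same network could then reach Mongo at its fixed address (the my-api service, its address and the database name here are purely illustrative, not from the original answer; the service name mongodb would also work as the hostname):

services:
  my-api:
    image: node:14-alpine
    networks:
      my_network:
        ipv4_address: 172.20.0.10
    environment:
      # reach the mongodb service at the static address assigned above
      - MONGO_URI=mongodb://172.20.0.5:27017/mydb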

Docker Compose error connecting a Node API to a Postgres database

I am getting a connection error with docker-compose when I try to access a Postgres database through an API in Node.
I'm using Sequelize as the ORM to access the database, but I don't know what is happening.
docker-compose.yml:
version: '3.5'
services:
  api-service:
    build:
      context: .
      dockerfile: ./api-docker.dockerfile
    image: api-service
    container_name: api-service
    restart: always
    env_file: .env
    environment:
      - NODE_ENV=$NODE_ENV
    ports:
      - ${PORT}:3000
    volumes:
      - .:/home/node/api
      - node_modules:/home/node/api/node_modules
    depends_on:
      - postgres-db
    networks:
      - api-network
    command: npm run start:dev
  postgres-db:
    expose:
      - ${PORT_SERVICE}
    ports:
      - ${PORT_SERVICE}:5432
    restart: always
    env_file: .env
    volumes:
      - pgReportData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${USER_SERVICE}
      POSTGRES_PASSWORD: ${PASSWORD_SERVICE}
      POSTGRES_DB: ${DATABASE_SERVICE}
    networks:
      - api-network
    container_name: postgres-db
    image: postgres:10
networks:
  api-network:
    driver: bridge
volumes:
  pgReportData:
    driver: local
  node_modules:
.env:
NODE_ENV=development
PORT=30780
HOST_SERVICE=postgres-db
DATABASE_SERVICE=base
USER_SERVICE=user
PASSWORD_SERVICE=password
DIALECT=postgres
PORT_SERVICE=5444
api-docker.dockerfile:
FROM node:12
WORKDIR /src
COPY . .
COPY --chown=node:node . .
USER node
RUN npm install
EXPOSE $PORT
ENTRYPOINT ["npm", "run", "start:dev"]
And when I run:
docker-compose up
I'm getting this error:
Any idea? Can someone help me?
If Node is the only app that connects to postgres-db, you can remove the networks and expose the Postgres port (5432). To connect to the db you can simply use the container name as the host.
Connection String: "postgres://YourUserName:YourPassword@postgres-db:5432/YourDatabase";
version: '3.5'
services:
  api-service:
    build:
      context: .
      dockerfile: ./api-docker.dockerfile
    image: api-service
    container_name: api-service
    restart: always
    env_file: .env
    environment:
      - NODE_ENV=$NODE_ENV
    ports:
      - ${PORT}:3000
    volumes:
      - .:/home/node/api
      - node_modules:/home/node/api/node_modules
    depends_on:
      - postgres-db
    command: npm run start:dev
  postgres-db:
    expose:
      - 5432
    restart: always
    env_file: .env
    volumes:
      - pgReportData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${USER_SERVICE}
      POSTGRES_PASSWORD: ${PASSWORD_SERVICE}
      POSTGRES_DB: ${DATABASE_SERVICE}
    container_name: postgres-db
    image: postgres:10
volumes:
  pgReportData:
    driver: local
  node_modules:
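If the API builds its connection settings from the .env shown in the question, note that inside the Compose network the database listens on its container port, not on the published one. A minimal sketch of the values api-service would then need, assuming the app really reads HOST_SERVICE and PORT_SERVICE (that is an assumption; adjust to your Sequelize config):

services:
  api-service:
    environment:
      # hostname = the Postgres service/container name on the shared network
      - HOST_SERVICE=postgres-db
      # container port of Postgres (5432), not the published host port (5444)
      - PORT_SERVICE=5432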

Resources