I am having an issue with docker-compose containers. There is certainly an answer somewhere but it seems I don't have the vocabulary necessary to find it, sorry.
Here is my docker-compose file:
version: '2'
services:
  gateway-linkcs:
    image: node:7
    ports:
      - 3000:3000
    volumes:
      - ../Gateway:/usr/src/app
    working_dir: /usr/src/app
    command: yarn startdev
    networks:
      - back-linkcs
  postgres-user-linkcs:
    build:
      context: ../User
      dockerfile: docker/Dockerfile
    networks:
      - back-linkcs
  dev-user-linkcs:
    image: node:7
    ports:
      - 3001:3001
    volumes_from:
      - postgres-user-linkcs
    volumes:
      - ../User:/usr/src/app
    working_dir: /usr/src/app
    command: /usr/src/app/docker/scripts/start-node.sh
    environment:
      - NODE_ENV=development # change to 'production' for prod
      - GATEWAY_ADDRESS=gateway-linkcs:3000
    networks:
      - back-linkcs
networks:
  back-linkcs:
My Node application in dev-user is supposed to use gateway as a proxy to connect to an API on the internet. dev-user actually finds gateway and is able to connect to it (using the hostname 'gateway-linkcs').
However, my gateway app cannot connect to the external API:
Error: getaddrinfo ENOTFOUND api.xxx api.xxx:80
at errnoException (dns.js:28:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:73:26)
I have tested starting the gateway outside of Docker (node index.js), and it works perfectly well.
I think I am missing something with docker-compose networks. Can someone help?
Thank you very much, Giltho.
Related
I am building a Node.js web application. I am using PostgreSQL and Sequelize for the database.
I have a docker-compose.yaml with the following content.
version: '3.8'
services:
  web:
    container_name: web-server
    build: .
    ports:
      - 4000:4000
    restart: always
    depends_on:
      - postgres
  postgres:
    container_name: app-db
    image: postgres:11-alpine
    ports:
      - '5431:5431'
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=babe_manager
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      dockerfile: ./pgconfig/Dockerfile
volumes:
  app-db-data:
I am using the following credentials to connect to the database.
DB_DIALECT=postgres
DB_HOST=postgres
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
When the application tries to connect to the database, I am getting the following error.
getaddrinfo ENOTFOUND postgres
I tried using this too.
DB_DIALECT=postgres
DB_HOST=localhost
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
This time I am getting the following error.
connect ECONNREFUSED 127.0.0.1:5431
My app server is not within the docker-compose.yaml. I just start it using "nodemon server.js". How can I fix it?
You need to make sure your Postgres container and your Node app are on the same network. Once they share a common network, you can use Docker's DNS to resolve the database from the Node application; in your case either postgres (the service name) or app-db (the container name) should work as the hostname. More on container communication over Docker networks.
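For example, a minimal sketch of running the app inside the same compose file, reusing the web service and the variable names from the question (whether the app actually reads these variables from the environment is an assumption on my part):
services:
  web:
    build: .
    ports:
      - 4000:4000
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres   # the service name, resolved by Docker's embedded DNS
      - DB_PORT=5432       # the port Postgres listens on inside the container, not the host mapping
      - DB_NAME=babe_manager
      - DB_USERNAME=docker
      - DB_PASSWORD=secret
  postgres:
    container_name: app-db
    image: postgres:11-alpine
If the app has to keep running on the host with nodemon instead, the alternative is to publish the container port (for example '5431:5432') and connect to localhost on the published host port.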
I'm building a React/Node app and I'm using docker-compose. My docker-compose definition looks like this:
version: "3"
services:
frontend:
stdin_open: true
container_name: firestore_manager
build:
context: ./client/firestore-app
dockerfile: DockerFile
image: rasilvap/firestore_manager
ports:
- "3000:3000"
volumes:
- ./client/firestore-app:/app
environment:
- BACKEND_HOST=backend
- BACKEND_PORT=8081
depends_on:
- backend
backend:
container_name: firestore_manager_server
build:
context: ./server
dockerfile: Dockerfile
image: rasilvap/firestore_manager_server
ports:
- "8081:8081"
volumes:
- ./server:/app
environment:
- BACKEND_HOST=backend
- BACKEND_PORT=8081
I'm trying to access the Node.js backend endpoints using the backend hostname defined in the docker-compose file, but I'm getting Error: getaddrinfo ENOTFOUND firestore_manager_server; the same happens when using the container name firestore_manager_server.
As you can see in the following URLs:
firestore_manager_server:8081/firestore?collection=test&field=nombre&value=xxxx
backend:8081/firestore?collection=test&field=nombre&value=xxxx
I don't have any problem using localhost.
(The output of the docker ps command was attached as a screenshot and is omitted here.)
Any ideas?
I believe your URL should be
backend:8081/firestore?collection=test&field=nombre&value=xxxx
instead of
firestore_manager_server:8081/firestore?collection=test&field=nombre&value=xxxx
I'm running a NodeJS app and its related services (Redis, Postgres) through docker-compose. My NodeJS app can reach Redis just fine using its name & port from my docker-compose file, but for some reason I can't seem to reach Postgres:
Error: getaddrinfo EAI_AGAIN postgres
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
My docker-compose file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:11.1
    ports:
      - "5432:5432"
    expose:
      - "5432"
    hostname: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test
    restart: on-failure
    networks:
      - integration-tests
  redis:
    image: 'docker.io/bitnami/redis:6.0-debian-10'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    hostname: redis
    volumes:
      - 'redis_data:/bitnami/redis/data'
I've tried both the normal lts and lts-alpine base images for my Node.js app. I'm using knex, which delegates connecting to the pg library... Does anybody have any idea why it won't even connect? I've tried running it both directly through docker-compose and through Tilt.
By adding:
  networks:
    - integration-tests
only to postgres, you create a separate network just for postgres.
By default, docker-compose creates a network for all the containers declared in the same file, named <project-name>_default. That is why, when using docker-compose, all the containers in the same file can communicate using their service names.
By specifying a network for postgres only, you ask docker-compose not to use the default network for it.
You have two solutions:
- Remove the networks instruction so postgres falls back to the default network
- Add the networks instruction to all the other containers in your project, or at least to those that need to reach postgres (a sketch follows below)
Note: By default, docker-compose prefixes all your objects (containers, networks, volumes) with the project name. The default project name is the name of the current directory.
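A sketch of the second option, reusing the names from the question; the per-service settings are abbreviated, and the top-level networks: block is included in case it is not already declared elsewhere in the file:
services:
  api:
    # ... build, ports, depends_on as in the question
    networks:
      - integration-tests
  postgres:
    # ... image, environment, restart as in the question
    networks:
      - integration-tests
  redis:
    # ... image, environment, volumes as in the question
    networks:
      - integration-tests
networks:
  integration-tests: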
-- Dockerfile
FROM node:4.4.2
EXPOSE 3000
ENV NODE_ENV test
WORKDIR /app
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --silent
COPY . /app
CMD node server.js
-- DOCKER-COMPOSE
version: "2"
services:
postgresql:
image: postgres:9.5.21
container_name: postgresql
volumes:
- '$HOME/DOCKER_VOLUME/db-data1:$HOME/DOCKER_VOLUME/var/lib/postgres'
environment:
POSTGRES_PASSWORD: <PWD>
POSTGRES_DB: <DB>
POSTGRES_USER: <UERS>
ports:
- '5432:5432'
redis:
image: redis
container_name: redis
ports:
- '6379:6379'
broker:
image: eclipse-mosquitto
container_name: mosquitto
volumes:
- $HOME/DOCKER_VOLUME/etc/mosquitto:$HOME/DOCKER_VOLUME/etc/mosquitto:ro
- $HOME/DOCKER_VOLUME/var/log/mosquitto:$HOME/DOCKER_VOLUME/var/log/mosquitto:rw
ports:
- '1883:1883'
app:
container_name: ms_user_manager
environment:
NODE_ENV: test
build: .
volumes:
- .:/app
- /app/node_modules
depends_on:
- postgresql
- redis
- broker
ports:
- '3000:3000'
I am able to access Postgres and Redis from outside the containers, but the Node.js app is unable to connect to either and gives these errors:
Redis - connection to 0.0.0.0:6379 failed - connect ECONNREFUSED 0.0.0.0:6379
Postgres - Error: connect ECONNREFUSED 0.0.0.0:5432
Any idea what I am doing wrong or what I missed?
Your Node.js app should try to connect to redis:6379. Change this in your application configuration. At the moment it looks like it is trying to connect to localhost, which probably works when you run your server locally on the host.
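A minimal sketch that passes the hostnames in through the compose file rather than hard-coding them; the variable names (REDIS_HOST, POSTGRES_HOST, and so on) are hypothetical and only help if the application is written to read them:
services:
  app:
    container_name: ms_user_manager
    build: .
    depends_on:
      - postgresql
      - redis
      - broker
    environment:
      NODE_ENV: test
      REDIS_HOST: redis          # hypothetical variable; 'redis' is the service name
      REDIS_PORT: 6379           # the port Redis listens on inside its container
      POSTGRES_HOST: postgresql  # hypothetical variable; 'postgresql' is the service name
      POSTGRES_PORT: 5432
  # postgresql, redis and broker stay exactly as in the compose file above
The same applies to Postgres: from inside the compose network the app should connect to postgresql:5432, not to 0.0.0.0 or localhost.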
I am working with 3 Services:
api-knotain [main api service]
api-mongo [mongo db for api service]
api-redis [redis for api service]
The Dockerfile for api-knotain looks as follows:
FROM node:latest
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "npm", "start" ]
My docker-compose file looks like this:
version: '3.3'
services:
  api-knotain:
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
Running:
docker-compose build
docker-compose up
output:
api-knotain | connecting mongo ...: mongodb://api-mongo:27017/notify
api-knotain | Redis error: Error: Redis connection to api-redis:32770 failed - connect ECONNREFUSED 172.21.0.2:32770
api-knotain | mongo error:MongoNetworkError: failed to connect to server [api-mongo:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017]
api-knotain | Example app listening on port 7777!
Neither Mongo nor Redis can be connected to.
I tried the following things:
use localhost instead of container name
use different ports
use expose vs port
Always with the same result.
Note: I can connect without issue to both Mongo and Redis from a local CLI at 'localhost:port'.
What am I missing?
Probably the redis and mongo containers start later than your application, and therefore your app will not see them. To counter this you must wait for those services to be ready.
Also, links is a legacy feature of Docker. You should use depends_on to control startup order, and user-defined networks if you want to isolate your database and Redis containers from the external network.
version: '3.3'
services:
  api-knotain:
    depends_on:
      - api-mongo
      - api-redis
    container_name: api-knotain
    restart: always
    build: ../notify.apiV2/src
    ports:
      - "7777:7777"
    links:
      - api-mongo
      - api-redis
    environment:
      - REDIS_URI=api-redis
      - REDIS_PORT=32770
      - MONGO_URI=api-mongo
      - MONGO_PORT=27017
      - RESEED=true
      - NODE_TLS_REJECT_UNAUTHORIZED=0
  api-mongo:
    container_name: api-mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
  api-redis:
    container_name: api-redis
    image: "redis:alpine"
    ports:
      - "32770:32770"
It looks like depends_on did not work properly with version 3.3 of the compose file format. After updating the version to 3.7, everything works perfectly without any other changes to the compose file.
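Note that depends_on by itself only waits for the dependency containers to start, not for the services inside them to be ready. On Compose versions that support it (compose file format 2.1+ and the newer Compose Specification used by the docker compose v2 CLI), a healthcheck combined with condition: service_healthy can express real readiness. A hedged sketch for the Redis dependency, assuming redis-cli is available in the image:
services:
  api-knotain:
    build: ../notify.apiV2/src
    depends_on:
      api-redis:
        condition: service_healthy   # wait for the healthcheck to pass, not just for the container to start
  api-redis:
    image: "redis:alpine"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5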