Node can't reach postgres server in docker compose - node.js

I'm running a NodeJS app and its related services (Redis, Postgres) through docker-compose. My NodeJS app can reach Redis just fine using its name & port from my docker-compose file, but for some reason I can't seem to reach Postgres:
Error: getaddrinfo EAI_AGAIN postgres
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
My docker-compose file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:11.1
    ports:
      - "5432:5432"
    expose:
      - "5432"
    hostname: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test
    restart: on-failure
    networks:
      - integration-tests
  redis:
    image: 'docker.io/bitnami/redis:6.0-debian-10'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    hostname: redis
    volumes:
      - 'redis_data:/bitnami/redis/data'
I've tried both the normal lts and lts-alpine base images for my NodeJS app. I'm using knex, which delegates connecting to the pg library. Does anybody have any idea why it won't even connect? I've tried running both directly through docker-compose and through Tilt.

By adding:
networks:
  - integration-tests
only to postgres, you attach postgres to a separate network of its own.
By default, docker-compose creates one network for all the containers in the same file, named <project-name>_default. That's why, when using docker-compose, all the containers in the same file can normally reach each other by service name.
By specifying a network for postgres, you tell docker-compose not to attach it to that default network, so api can no longer resolve the name postgres.
You have 2 solutions:
- Remove the networks instruction to fall back to the default network
- Add the networks instruction to all other containers in your project, or only to those that need it (see the sketch after the note below)
Note: By default, docker-compose prefixes all your objects (containers, networks, volumes) with the project name. The default project name is the name of the current directory.
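For the second option, a minimal sketch using the service names from the compose file above (the custom network must also be declared in a top-level networks: block):
services:
  api:
    # ...build, ports, depends_on as above...
    networks:
      - integration-tests
  postgres:
    # ...as above...
    networks:
      - integration-tests
  redis:
    # ...as above...
    networks:
      - integration-tests
networks:
  integration-tests: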

Related

Unable to connect to the Postgres database in Docker Compose

I am building a Node JS Web application. I am using PostgreSQL and Sequelize for the database.
I have the docker-compose.yaml with the following content.
version: '3.8'
services:
  web:
    container_name: web-server
    build: .
    ports:
      - 4000:4000
    restart: always
    depends_on:
      - postgres
  postgres:
    container_name: app-db
    image: postgres:11-alpine
    ports:
      - '5431:5431'
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=babe_manager
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      dockerfile: ./pgconfig/Dockerfile
volumes:
  app-db-data:
I am using the following credentials to connect to the database.
DB_DIALECT=postgres
DB_HOST=postgres
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
When the application tries to connect to the database, I am getting the following error.
getaddrinfo ENOTFOUND postgres
I tried using this too.
DB_DIALECT=postgres
DB_HOST=localhost
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
This time I am getting the following error.
connect ECONNREFUSED 127.0.0.1:5431
My app server is not within the docker-compose.yaml. I just start it using "nodemon server.js". How can I fix it?
You need to make sure your Postgres container and your Node app are on the same network. Once they share a common network, you can use Docker's DNS to resolve the database from the Node application; in your case either postgres (the service name) or app-db (the container name) should work as the hostname. More on container communication over docker networks.
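As a sketch of what the in-network connection then looks like, assuming Sequelize v6 and the env vars from the question (note that inside the compose network you connect to Postgres's container port, 5432, not the 5431 host mapping):
// Hedged sketch: connecting from a container on the same compose network,
// where the service name "postgres" resolves via Docker's internal DNS.
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize(
  process.env.DB_NAME,     // babe_manager
  process.env.DB_USERNAME, // docker
  process.env.DB_PASSWORD, // secret
  {
    host: process.env.DB_HOST, // "postgres" inside the network
    port: 5432,                // the container port, not the host mapping
    dialect: 'postgres',
  }
);

// Quick connectivity check.
sequelize.authenticate()
  .then(() => console.log('Connected'))
  .catch((err) => console.error('Unable to connect:', err));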

Can't access MongoDB container from NodeJS App

I'm running an instance of a web application in my Docker container and am also running a MongoDB container so when I launch the web app I can easily connect to the DB on the app's connection page.
The issue is that I'm not sure how to reach the Mongo container from my web app and am not sure if my host/port connection info is correct.
My Docker Setup
Both the mongo and web app containers are up and running without errors.
I build the two through docker-compose.yml
version: "3.3"
services:
web:
image: grafana-asw-v3
container_name: grafana-asw-v3
restart: always
build: .
ports:
- "13000:3000"
volumes:
- grafana-storage:/var/lib/grafana
stdin_open: true
tty: true
db:
container_name: mongo
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- grafana-mongo-db:/var/lib/mongo
ports:
- "27018:27017"
volumes:
grafana-mongo-db: {}
grafana-storage: {}
Issue
With everything up and running I'm attempting to connect through the web app, but I seem to be using the wrong connection info...
I assumed I should use "hostMachine:port" (roxane:27018), but it's not connecting. Is there something I overlooked here?
There were two changes I had to make to fix this issue:
Modify the bind_ip in mongod.conf by making this change to my docker-compose file:
db:
  container_name: mongo
  image: mongo
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
  volumes:
    - grafana-mongo-db:/var/lib/mongo
  ports:
    - "27018:27017"
  command: mongod --bind_ip 0.0.0.0
I needed to refer to the IP address instead of the hostname in the CLI in my web application. (Thanks to this answer for help with this one)
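If you need the container's IP rather than a hostname, one way to look it up (assuming the container_name mongo from the compose file above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mongo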
Short answer
The db service is on the same network as the web service, not on the host network.
Since you named your services via container_name, you should be able to use the connection string mongodb://mongo:27017
Explanation
By default, docker containers run on a bridge network, which lets them communicate with each other without being exposed on your host network.
When you use ports in a compose file, you map an internal container port to a host port:
"27018:27017" => expose container port 27017 as host port 27018.
As a result, you could expose your web frontend without exposing your mongo service:
version: "3.3"
services:
web:
image: grafana-asw-v3
container_name: grafana-asw-v3
restart: always
build: .
ports:
- "13000:3000"
volumes:
- grafana-storage:/var/lib/grafana
stdin_open: true
tty: true
db:
container_name: mongo
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- grafana-mongo-db:/var/lib/mongo
volumes:
grafana-mongo-db: {}
grafana-storage: {}
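To make both sides of the port mapping concrete: from the host you connect to localhost:27018, while from another container on the same compose network you use the container name and the internal port. A minimal sketch with the official mongodb Node driver (v4+ API assumed; the credentials match the MONGO_INITDB_ROOT_* values above):
// Hedged sketch: connecting from a container on the same compose network.
const { MongoClient } = require('mongodb');

async function main() {
  // "mongo" is the container name; inside the network use port 27017.
  // From the host it would be mongodb://root:example@localhost:27018 instead.
  const client = new MongoClient('mongodb://root:example@mongo:27017');
  await client.connect();
  const { databases } = await client.db().admin().listDatabases();
  console.log(databases.map((d) => d.name));
  await client.close();
}

main().catch(console.error);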

Connecting to a neo4j driver (in a Docker container) from another Docker container

Normally I'd have an instance of neo4j running in Docker; then in a script I access the driver like so:
self.driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("username", "password"))
I'm now putting this script itself into a Docker container, but I now get the error message:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address)
What uri (or other parameter) needs changing to access a neo4j Docker instance from another docker container?
Within my docker-compose.yml, I have:
version: '3'
services:
  neo4j:
    container_name: neo4j
    image: neo4j:3.5
    restart: always
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - dbms_connector_bolt_tls__level=OPTIONAL
      - NEO4J_dbms_memory_heap_max__size=3500M
      - NEO4J_AUTH=neo4j/start
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    ports:
      - 7474:7474
      - 7687:7687
  appgui:
    container_name: appgui
    image: python:3.7.3-slim
    build:
      context: ./APPGUI/
    volumes:
      - ./APPGUI/:/usr/src/app/
    restart: always
    environment:
      PORT: 5000
      FLASK_DEBUG: 1
    ports:
      - 80:80
    depends_on:
      - neo4j
I also can't access my web app (http://localhost:5000)
Your service can't connect to Neo4j on localhost, because it runs inside a docker container, where localhost points to the container itself instead of your local machine.
In this case, it is best to run both containers with docker-compose and set depends_on in the other docker container. Here is an example docker-compose.yml file from my project.
version: '3.7'
services:
  neo4j:
    image: neo4j:4.1.2
    restart: always
    hostname: neo4jngs
    container_name: neo4jngs
    ports:
      - 7474:7474
      - 7687:7687
  api:
    build:
      context: ./API
    hostname: api
    restart: always
    container_name: api
    ports:
      - 3000:3000
    depends_on:
      - neo4j
As you can see, the api container is a service that will connect to Neo4j. Now you can change the driver settings to:
self.driver = GraphDatabase.driver(uri="bolt://neo4j:7687", auth=("username", "password"))
And you are good to go.
I solved it and it was actually a dumb mistake, but one that could happen to others I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'
However, Tomaž Bratanič provided me with some helpful tips that got me to a resolution.

Redis docker-compose with Nodejs - connect ENOENT

I am trying to containerize my whole developer workflow using docker-compose. The problem I am facing is with the redis container. My worker container is unable to connect to redis. I tried different solutions from stackoverflow, like:
Setting up node with redis using docker-compose
Access redis locally on docker - docker compose
nodejs, redis, npm and docker-compose setup
Docker-compose , anyway to specify a redis.conf file?
How to connect to a Redis container using Docker Compose?
And many others, also from GitHub; I tried different examples but no luck.
Here is my docker-compose file:
version: "3.6"
services:
redis_db:
# image: bitnami/redis:latest # tried different image also
image: redis
container_name: rme_redis
network_mode: host # tried networks also
# command: ['redis-server', '/redis.conf']
# volumes:
# - ./docker/redis.conf:/redis.conf
# hostname: redis_db
# networks:
# - rme
# environment:
# - ALLOW_EMPTY_PASSWORD=yes
# - REDIS_PASSWORD="redis"
ports:
- 6379:6379
# networks:
# node_net:
# ipv4_address: 172.28.1.4
worker:
build: worker
image: worker
container_name: rme_worker
command: node worker.js
# networks:
# - rme
# network_mode: host
environment:
- REDIS_URL="redis://redis_db:6379" # localhost when netowk_mode is host
volumes:
- ./worker:/app
# links:
# - redis_db
depends_on:
- redis_db
The error I am getting is:
Error: Redis connection to "redis://redis_db:6379" failed - connect ENOENT "redis://redis_db:6379"
    at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'connect',
  address: '"redis://redis_db:6379"'
}
Operating system: macOS
Docker: Docker Desktop
Edit: Found a solution using host and port in the js code:
const redis = require('redis')

const redisClient = redis.createClient({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT)
})
# docker-compose.yml
version: "3.6"
services:
  redis_db:
    image: redis
    # only needed to directly access Redis from host
    ports:
      - 6379:6379
  worker:
    build: worker
    environment:
      - REDIS_HOST=redis_db
      - REDIS_PORT=6379
    depends_on:
      - redis_db
If you find a solution using the redis connection string, do share.
Thanks for your help
Setting network_mode: host disables Docker networking for a specific container. It's only necessary in some unusual situations where a container doesn't listen on a fixed port number or where you're trying to use a container to manage the host's network setup.
In your case, since the Redis database is disabling Docker networking, the application container can't reach it by name. Removing all of the network_mode: host options should address this.
Networking in Compose describes the overall network setup. Of note, Compose will create a default network for you, and containers will be reachable using their Compose service names. You shouldn't normally need to specify custom networks: options, explicitly provide a container_name:, set a hostname:, provide archaic links:, or set network_mode:.
You should be able to successfully trim the docker-compose.yml file you show down to:
version: "3.6"
services:
redis_db:
image: redis
# only needed to directly access Redis from host
ports:
- 6379:6379
worker:
build: worker
environment:
- REDIS_URL="redis://redis_db:6379"
# `docker-compose up worker` also starts Redis
# Not strictly needed for networking
depends_on:
- redis_db
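For the connection-string variant the asker requested: a minimal sketch, assuming node-redis v4, whose createClient accepts a url option (in v3 the URL string could be passed directly):
// Hedged sketch: node-redis v4, reading REDIS_URL from the compose file above.
const { createClient } = require('redis');

async function main() {
  const client = createClient({ url: process.env.REDIS_URL });
  client.on('error', (err) => console.error('Redis error:', err));
  await client.connect();
  await client.set('hello', 'world');
  console.log(await client.get('hello')); // world
  await client.quit();
}

main().catch(console.error);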

Connect nodejs App in one docker container to mongodb in another on Docker Swarm

I have set up a docker stack (see docker-compose.yml below).
It has 2 services:
- A nodejs (loopback) service running in one (or more) containers.
- A mongodb running in another container.
What HOST do I use in my node application to allow it to connect to the mongodb in another container (currently on the same node, but possibly on a different node if I set up a swarm)?
I have tried bridge, webnet, the host's IP, etc., but no luck.
Note: I am passing the host into the nodejs app with environment variable "MONGODB_SERVICE_SERVICE_HOST".
Thanks in advance!!
version: "3"
services:
web:
image: myaccount/loopback-app:latest
deploy:
replicas: 2
restart_policy:
condition: on-failure
ports:
- "8060:3000"
environment:
MONGODB_SERVICE_SERVICE_HOST: "webnet"
depends_on:
- mongo
networks:
- webnet
mongo-database:
image: mongo
ports:
- "27017:27017"
volumes:
- "/Users/jason/workspace/mongodb/db:/data/db"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
webnet is the network name, not a hostname. The hostname here is the service name, mongo-database.
So change webnet to mongo-database:
ENV MONGO_URL "mongodb://mongo-database:27017/dbName"
To check the communication with mongo-database, open a shell in the nodejs container and try to ping it:
ping mongo-database
If it works, you know that your server can communicate with your mongo instance.
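Applied to the stack file above, the corrected web service would look something like this (a sketch only; depends_on must also reference the real service name, and note that swarm mode ignores depends_on anyway):
web:
  image: myaccount/loopback-app:latest
  deploy:
    replicas: 2
    restart_policy:
      condition: on-failure
  ports:
    - "8060:3000"
  environment:
    # the service name, resolved by Docker DNS on the webnet network
    MONGODB_SERVICE_SERVICE_HOST: "mongo-database"
  depends_on:
    - mongo-database
  networks:
    - webnet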
