Redis docker-compose with Node.js - connect ENOENT - node.js

I am trying to containerize my whole developer workflow using docker-compose. The problem I am facing is with the redis container. My worker container is unable to connect to redis. I tried different solutions from Stack Overflow like:
Setting up node with redis using docker-compose
Access redis locally on docker - docker compose
nodejs, redis, npm and docker-compose setup
Docker-compose , anyway to specify a redis.conf file?
How to connect to a Redis container using Docker Compose?
And many others, also from GitHub; I tried different examples but no luck.
Here is my docker-compose file:
version: "3.6"
services:
  redis_db:
    # image: bitnami/redis:latest # tried different image also
    image: redis
    container_name: rme_redis
    network_mode: host # tried networks also
    # command: ['redis-server', '/redis.conf']
    # volumes:
    #   - ./docker/redis.conf:/redis.conf
    # hostname: redis_db
    # networks:
    #   - rme
    # environment:
    #   - ALLOW_EMPTY_PASSWORD=yes
    #   - REDIS_PASSWORD="redis"
    ports:
      - 6379:6379
    # networks:
    #   node_net:
    #     ipv4_address: 172.28.1.4
  worker:
    build: worker
    image: worker
    container_name: rme_worker
    command: node worker.js
    # networks:
    #   - rme
    # network_mode: host
    environment:
      - REDIS_URL="redis://redis_db:6379" # localhost when network_mode is host
    volumes:
      - ./worker:/app
    # links:
    #   - redis_db
    depends_on:
      - redis_db
Error I am getting is:
Error: Redis connection to "redis://redis_db:6379" failed - connect ENOENT "redis://redis_db:6379"
    at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'connect',
  address: '"redis://redis_db:6379"'
}
Operating system: macOS
Docker: Docker Desktop
Edit: Found a solution using host and port in the JS code:
const redisClient = redis.createClient({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT)
})
# docker-compose.yml
version: "3.6"
services:
  redis_db:
    image: redis
    # only needed to directly access Redis from host
    ports:
      - 6379:6379
  worker:
    build: worker
    environment:
      - REDIS_HOST=redis_db
      - REDIS_PORT=6379
    depends_on:
      - redis_db
If you find a solution using the Redis connection string, do share.
Thanks for your help.

Setting network_mode: host disables Docker networking for a specific container. It's only necessary in some unusual situations where a container doesn't listen on a fixed port number or where you're trying to use a container to manage the host's network setup.
In your case, since the Redis container disables Docker networking, the application container can't reach it by name. Removing all of the network_mode: host options should address this.
Networking in Compose describes the overall network setup. Of note, Compose will create a default network for you, and containers will be reachable using their Compose service names. You shouldn't normally need to specify custom networks: options, explicitly provide a container_name:, set a hostname:, provide archaic links:, or set network_mode:.
You should be able to successfully trim the docker-compose.yml file you show down to:
version: "3.6"
services:
  redis_db:
    image: redis
    # only needed to directly access Redis from host
    ports:
      - 6379:6379
  worker:
    build: worker
    environment:
      # no surrounding quotes: Compose would pass them through as part of the value,
      # which is exactly what the quoted address in the error message shows
      - REDIS_URL=redis://redis_db:6379
    # `docker-compose up worker` also starts Redis
    # Not strictly needed for networking
    depends_on:
      - redis_db
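A worker-side sketch that consumes that variable (assuming node-redis v3, which the PipeConnectWrap stack trace above suggests; the file name and fallback URL are illustrative):

// worker.js (sketch)
const redis = require('redis')

// node-redis v3 accepts a redis:// URL directly
const client = redis.createClient(process.env.REDIS_URL || 'redis://localhost:6379')

client.on('connect', () => console.log('Redis connected'))
client.on('error', (err) => console.error('Redis error', err))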

Related

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I literally spent the entire day trying to get past it, trying everything I could find online. But I can't find anything that works for me.
I am very new to Docker and using it for my NodeJS + Express + Postgresql + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=admin
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
  cache:
    image: redis:6.2-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning
    volumes:
      - cache:/data
  api:
    container_name: api
    build:
      context: .
      # target: production
    # image: api
    depends_on:
      - db
      - cache
    ports:
      - 3000:3000
    environment:
      NODE_ENV: production
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: postgres
      DB_PASSWORD: admin
      DB_NAME: postgres
      REDIS_HOST: cache
      REDIS_PORT: 6379
    links:
      - db
      - cache
    volumes:
      - ./:/src
volumes:
  db:
    driver: local
  cache:
    driver: local
Here is the upper part of my app.js:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')

const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})
client.connect()
client.on('connect', () => {
  log('Redis connected')
})

app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve is nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, Docker networking:
Containers are exposed to your localhost, so as a "server" you can access each of them directly through the browser or the command line. But take for granted that you can only access the containers because they are attached to a default network that is reachable from the host system, which you can inspect, by the way.
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through their ports or host names -- the host name being the container_name.
So you need to do two things:
Add a container_name to the redis service in the compose file, just like you did for the API.
Create a network and bind all the services to it. One way of doing that:
version: "3.8"
networks:
  my-network:
    name: my-network
services:
  ....
  cache:
    container_name: cache
    image: redis:6.2-alpine
    restart: always
    ports:
      - "6379:6379"
    command: redis-server --save 20 1 --loglevel warning
    volumes:
      - cache:/data
    networks: # add it in all containers that communicate together
      - my-network
Then, and only then, can you use the redis container name as the host, since the Docker network creates a host name for the service from the container name.
When you later deploy the whole compose file, the containers will be created and joined to the network by default on startup, and that allows your API app to communicate with the Redis container using the container name as the host name.
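A minimal client-side sketch (not the asker's exact code; note that the legacyMode flag in the question implies node-redis v4, where host and port go under socket:):

const { createClient } = require('redis')

// 'cache' is the container name, resolvable on the shared compose network
const client = createClient({
  socket: { host: 'cache', port: 6379 }
})

client.on('error', (err) => console.error('Redis error', err))
client.connect().then(() => console.log('Redis connected'))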
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
A side, unrelated note:
I personally used redis from npm for some testing projects, but I found that ioredis was much better with TypeScript projects and more predictable in its behavior.
To avoid problems with Redis, make sure to create a password and use it to connect; sometimes Redis randomly considers the client a read-only client and fails to find a read replica, and adding the password solved that for me.
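A short ioredis sketch along those lines (the host is the service/container name from the compose file above; the password value is a hypothetical placeholder):

const Redis = require('ioredis')

// host: compose service/container name; password: whatever the server is configured with
const redis = new Redis({
  host: 'cache',
  port: 6379,
  password: 'change-me', // hypothetical placeholder
})

redis.on('ready', () => console.log('ioredis connected'))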

Can't access MongoDB container from NodeJS App

I'm running an instance of a web application in my Docker container and am also running a MongoDB container so when I launch the web app I can easily connect to the DB on the app's connection page.
The issue is that I'm not sure how to reach the Mongo container from my web app and am not sure if my host/port connection info is correct.
My Docker Setup
As you can see, the containers are up and running, with both the mongo and web app services running without errors.
I build the two through this docker-compose.yml:
version: "3.3"
services:
  web:
    image: grafana-asw-v3
    container_name: grafana-asw-v3
    restart: always
    build: .
    ports:
      - "13000:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
    stdin_open: true
    tty: true
  db:
    container_name: mongo
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - grafana-mongo-db:/var/lib/mongo
    ports:
      - "27018:27017"
volumes:
  grafana-mongo-db: {}
  grafana-storage: {}
Issue
With everything up and running, I'm attempting to connect through the web app, but I seem to be using the wrong connection info...
I assumed I should use "hostMachine:port" (roxane:27018), but it's not connecting. Is there something I overlooked here?
There were two changes I had to make to fix this issue:
Modify the bind_ip in mongod.conf, by making this change to my docker-compose file:
db:
  container_name: mongo
  image: mongo
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
  volumes:
    - grafana-mongo-db:/var/lib/mongo
  ports:
    - "27018:27017"
  command: mongod --bind_ip 0.0.0.0
Refer to the IP address instead of the hostname in the CLI in my web application. (Thanks to this answer for help with this one.)
Short answer
The db service is on the same network as the web service, not on the host network.
Since you named your services via container_name, you should be able to use the connection string mongodb://mongo:27017.
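A sketch of that connection from Node (assuming the official mongodb driver; the credentials come from the MONGO_INITDB_ROOT_* variables above, and the root user authenticates against the admin database):

const { MongoClient } = require('mongodb')

// 'mongo' is the container_name; credentials match the compose environment
const uri = 'mongodb://root:example@mongo:27017/?authSource=admin'

async function main() {
  const client = new MongoClient(uri)
  await client.connect()
  console.log('connected to MongoDB')
  await client.close()
}

main().catch(console.error)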
Explanation
By default, docker containers run on a bridge network, which allows them to communicate without seeing your host network.
When you use ports in a compose file, you declare that you want to map an internal port of the container to a host port:
"27018:27017" => I want to expose container port 27017 as host port 27018.
As a result, you could expose your web frontend without exposing your mongo service:
version: "3.3"
services:
  web:
    image: grafana-asw-v3
    container_name: grafana-asw-v3
    restart: always
    build: .
    ports:
      - "13000:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
    stdin_open: true
    tty: true
  db:
    container_name: mongo
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - grafana-mongo-db:/var/lib/mongo
volumes:
  grafana-mongo-db: {}
  grafana-storage: {}

Node can't reach postgres server in docker compose

I'm running a NodeJS app and its related services (Redis, Postgres) through docker-compose. My NodeJS app can reach Redis just fine using its name & port from my docker-compose file, but for some reason I can't seem to reach Postgres:
Error: getaddrinfo EAI_AGAIN postgres
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
My docker-compose file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:11.1
    ports:
      - "5432:5432"
    expose:
      - "5432"
    hostname: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test
    restart: on-failure
    networks:
      - integration-tests
  redis:
    image: 'docker.io/bitnami/redis:6.0-debian-10'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    hostname: redis
    volumes:
      - 'redis_data:/bitnami/redis/data'
I've tried both normal lts and lts-alpine base images for my NodeJS app. I'm using knex, which delegates connecting to the pg library... Anybody have any idea why it won't even connect? I've tried both running directly through docker-compose and through tilt.
By adding:
networks:
  - integration-tests
only to postgres, you create a separate network that only postgres joins.
By default, docker-compose creates one network for all the containers in the same file, named <project-name>_default. That is why, when using docker-compose, all the containers in the same file can communicate using their service names.
By specifying a network for postgres, you ask docker-compose not to attach it to the default network.
You have 2 solutions:
- Remove the networks: instruction to fall back to the default network
- Add the networks: instruction to all other containers in your project, or only to those that need it
Note: by default, docker-compose prefixes all your objects (containers, networks, volumes) with the project name. The default project name is the name of the current directory.
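Once the services share a network, a knex config along these lines should resolve the service name (a sketch only; the host is the compose service name and the credentials come from the compose file above):

const knex = require('knex')({
  client: 'pg',
  connection: {
    host: 'postgres', // the compose service name
    port: 5432,
    user: 'root',
    password: 'root',
    database: 'test',
  },
})

// quick smoke test
knex.raw('select 1').then(() => console.log('postgres reachable'))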

Docker Compose - Redis container refusing connection to Node container

No matter what I try I can't seem to get my node app to connect to redis between containers within the same docker-compose yml config. I've seen a lot of similar questions but none of the answers seem to work.
I'm using official images in both cases, not building my own
I am putting "redis" as my host and setting it as hostname in my docker compose YML config
const client = redis.createClient({ host: "redis" });
in my redis.conf I am using bind 0.0.0.0
This what the console is printing out:
Redis connection to redis:6379 failed - getaddrinfo ENOTFOUND redis redis:6379
Error: connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14) {
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379
}
This is my docker-compose.yml
version: '3'
services:
  server:
    image: "node:10-alpine"
    working_dir: /usr/src/app
    user: "node"
    command: "npm start"
    volumes:
      - .:/usr/src/app
    ports:
      - '3000:3000'
      - '3001:3001'
      - '9229:9229' # Node debugging port
    environment:
      - IS_DOCKER=1
      - NODE_ENV=DEVELOPMENT
    depends_on:
      - db
  db:
    image: "redis:5.0-alpine"
    expose:
      - '6379'
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    command:
      - redis-server
      - /usr/local/etc/redis/redis.conf
    hostname: redis
volumes:
  redis-data:
UPDATE
Here's my redis.conf, it's not much.
bind 0.0.0.0
appendonly yes
appendfilename "my_app.aof"
appendfsync always
UPDATE 2
Things I've noticed and tried:
In my original setup, when I run docker inspect I can see they are both joined to the same network. When I exec ... /bin/bash into the redis container I can successfully ping the server container, but from inside the server container I cannot ping the redis one.
network_mode: bridge - adding that to both containers does not work.
I did get one baby step closer by trying out this:
server:
  network_mode: host
redis:
  network_mode: service:host
I'm on a Mac, and in order to get host mode to work you need to do that. It does work in the sense that my server successfully connects to the redis container. However, hitting localhost:3000 does not work even though I'm forwarding the ports.
version: '3'
services:
  server:
    image: "node:10-alpine"
    #network_mode: bridge
    #links is necessary if you use network_mode: bridge
    #links: [redis]
    networks:
      - default
    working_dir: /usr/src/app
    user: "node"
    command: "npm start"
    volumes:
      - .:/usr/src/app
    ports:
      - '3000:3000'
      - '3001:3001'
      - '9229:9229' # Node debugging port
    environment:
      - IS_DOCKER=1
      - NODE_ENV=DEVELOPMENT
    depends_on:
      - redis
  redis:
    image: "redis:5.0-alpine"
    #container_name: redis
    #network_mode: bridge
    networks:
      - default
    expose:
      - '6379'
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    command:
      - redis-server
      - /usr/local/etc/redis/redis.conf
volumes:
  redis-data:
networks:
  default:
Rename the service to the hostname you want to use: redis in your case, instead of db. To make it reachable over the Docker network, either put both containers on the same network as above, or use network_mode: bridge and links: [redis] instead.
Try this to test your network:
docker ps to get the container id or name of the running server container
docker exec -it <id or name> /bin/sh
Now you have a shell inside server and should be able to resolve redis via:
ping redis or nc -zv redis 6379
For those who are still getting this error:
I found that in newer versions of node-redis (4 and up) you need to configure the client like this:
const { createClient } = require('redis')

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST, // e.g. the compose service name
    port: process.env.REDIS_PORT,
  },
})
It solved my problem. Then, in the docker compose file, you don't need to specify ports:
version: "3.4"
services:
  redis-server:
    image: "redis:latest"
    restart: always
  api:
    depends_on:
      - redis-server
    restart: always
    build: .
    ports:
      - "5000:5000"

Can't connect to postgres database inside docker container

My problem is that I have a script that should scrape data and put it inside a postgres database; however, it has a problem reaching the postgres container.
When I run my docker-compose, here is the result:
Name                 Command                         State   Ports
------------------------------------------------------------------------------------------
orcsearch_dev-db_1   docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
orcsearch_flask_1    gunicorn wsgi:application ...   Up      0.0.0.0:80->80/tcp, 8000/tcp
We can clearly see that postgres is on port 5432.
This is my python script's database setting (of course I removed the password for obvious reasons):
class App():
    settings = {
        'db_host': 'db',
        'db_user': 'postgres',
        'db_pass': '',
        'db_db': 'orc',
    }
    db = None
    proxies = None
And this is my docker-compose.yml:
version: '2'
services:
  flask:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    volumes:
      - ./backend:/app
      - pip-cache:/root/.cache
    ports:
      - "80:80"
    links:
      - "dev-db:db"
    environment:
      - DATABASE_URL=postgresql://postgres@db:5432/postgres
    stdin_open: true
    command: gunicorn wsgi:application -w 1 --bind 0.0.0.0:80 --log-level debug --reload
    networks:
      app:
        aliases:
          - flask
  dev-db:
    image: postgres:9.5
    ports:
      - "5432:5432"
    networks:
      app:
        aliases:
          - dev-db
volumes:
  pip-cache:
    driver: local
networks:
  app:
When going into exec flask bash (inside the flask container) and running the script command, I get this error:
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Obviously postgres is running on this port, and I can't figure out what I'm doing wrong. Any help would be nice!
Probably you are using a DSN instead of a URI, and PostgreSQL thinks that "db" is not a host, because it's hard to tell whether "db" is a host or a path to a socket. To fix it, use a URI instead of a DSN if you are on PostgreSQL >= 9.2.
Example of a URI:
postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
With the settings above that would be, for example, postgresql://postgres@db:5432/orc.
https://www.postgresql.org/docs/9.2/static/libpq-connect.html#LIBPQ-CONNSTRING
In your App class it should be 'db_host': 'dev-db'. It seems that hostname is the one exposed, not db.
I think the problem is related to the fact that you're using the network and the link together. Try removing the link and changing the postgres address to dev-db, or change the aliases to:
networks:
  app:
    aliases:
      - dev-db
      - db
