ECONNREFUSED RabbitMQ between Docker containers - Node.js

I am trying to set up a simple docker-compose file which includes a rabbitmq container and a simple express server which connects to rabbitmq. I'm getting the following error when trying to connect to rabbitmq from my express application:
Error: connect ECONNREFUSED 172.19.0.2:5672
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '172.19.0.2',
  port: 5672
}
I checked the IP address on the docker network manually to verify that 172.19.0.2 is indeed the rabbitmq container, which it is.
Here is my docker-compose:
version: '3'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=pass
    ports:
      - 5672:5672
      - 15672:15672
  producerexpress:
    build: ./service1
    container_name: producerexpress
    ports:
      - 3000:3000
    environment:
      - PORT=3000
    depends_on:
      - rabbitmq
and the express app and its Dockerfile:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
const amqp = require('amqplib');
const amqpUrl = process.env.AMQP_URL || 'amqp://admin:pass@172.19.0.2:5672';

let channel;
let connection;

connect();

async function connect() {
  try {
    connection = await amqp.connect(amqpUrl);
    channel = await connection.createChannel();
    await channel.assertQueue('chatExchange', { durable: false });
  } catch (err) {
    console.log(err);
  }
}

function sendRabbitMessage(msg) {
  channel.sendToQueue('chatExchange', Buffer.from(msg));
}

app.get('/', (req, res) => {
  let msg = 'Triggered by get request';
  sendRabbitMessage(msg);
  res.send('Sent rabbitmq message!');
});

app.listen(port, () => {
  console.log(`Server started on port ${port}`);
});
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT=3000
ENV AMQP_URL=amqp://admin:pass@172.19.0.2:5672
EXPOSE 3000
CMD ["npm", "start"]
This is my first time using docker compose and all the fixes I found on here seem to suggest I did everything correctly. What am I missing?

TL;DR
depends_on guarantees the order in which the services start up, but it guarantees nothing about the processes they initiate.
In these cases, you should expand the depends_on statement to take the health status of the process of interest into account.
Firstly, since you are using docker compose, you should avoid making the communication of containers depend on their IP addresses and instead rely on their service names.
Meaning, instead of amqp://admin:pass@172.19.0.2:5672
you should use amqp://admin:pass@rabbitmq:5672
Moving on to the core issue: your producerexpress relies on rabbitmq in order to function.
As a result, you added the depends_on statement to producerexpress to resolve this. But this is not enough; quoting from https://docs.docker.com/compose/startup-order/:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, ....
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running.
As a result, you need to add a health check in order to guarantee that the rabbitmq process has started successfully, not just the container.
In order to achieve that, you should alter your compose file:
version: '3'
services:
  rabbitmq:
    build: ./rabbitmq
    container_name: 'rabbitmq'
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=pass
    ports:
      - 5672:5672
      - 15672:15672
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 30s
      timeout: 10s
      retries: 5
  producerexpress:
    build: ./service1
    restart: on-failure
    container_name: producerexpress
    ports:
      - 3000:3000
    environment:
      - PORT=3000
    depends_on:
      rabbitmq:
        condition: service_healthy
To run the healthcheck, we need the curl package in the rabbitmq image, so add the following Dockerfile:
FROM rabbitmq:3-management-alpine
RUN apk update
RUN apk add curl
EXPOSE 5672 15672
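As a side note, the rabbitmq images also ship with rabbitmq-diagnostics, so a healthcheck along the lines of the sketch below may work without installing curl or building a custom image (an assumption, not part of the original answer):
    healthcheck:
      # rabbitmq-diagnostics ping checks that the broker process responds
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 30s
      timeout: 10s
      retries: 5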
Finally, for this change to work, create the following directory structure:
./docker-compose.yml
./rabbitmq/
--- Dockerfile
./service1/
--- Dockerfile
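Independently of the healthcheck, an application-side retry loop makes producerexpress resilient to rabbitmq restarting later on. A minimal sketch, assuming the amqplib client and the service-name URL from above (the retry count and delay are arbitrary):

const amqp = require('amqplib');

const amqpUrl = process.env.AMQP_URL || 'amqp://admin:pass@rabbitmq:5672';

// Try to connect repeatedly, waiting between attempts instead of
// crashing on the first ECONNREFUSED.
async function connectWithRetry(retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await amqp.connect(amqpUrl);
    } catch (err) {
      console.log(`AMQP connect attempt ${attempt} failed: ${err.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Could not connect to RabbitMQ after retrying');
}

Remember to also point the AMQP_URL baked into the service1 Dockerfile at the service name (amqp://admin:pass@rabbitmq:5672) rather than the container IP.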

Related

Redis error "EHOSTUNREACH" when trying to connect from Node JS in docker compose

I am trying to connect to redis from node js in docker, but I am facing an error. The redis container starts successfully and I can ping it with redis-cli, so I think the problem might be somewhere else. The code and configuration below work on Ubuntu 20.04, but produce this error on Arch Linux (Docker version 20.10.17):
Redis Client Error Error: connect EHOSTUNREACH 172.18.0.2:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1237:16) {
  errno: -113,
  code: 'EHOSTUNREACH',
  syscall: 'connect',
  address: '172.18.0.2',
  port: 6379
}
Here is my docker compose file:
version: "3.7"
services:
app:
image: node:18.4-alpine
command: sh -c "yarn run dev"
ports:
- "4000:4000"
working_dir: /app
volumes:
- ./:/app
env_file:
- ./.env
redis:
image: redislabs/rejson:latest
ports:
- "6379:6379"
Here, I am trying to connect to redis from nodejs (node-redis 4.1.1):
const socket = {
  host: "redis",
  port: 6379
}

export const redisClient = createClient({
  socket,
  database: 0,
  legacyMode: true
})

redisClient.on("error", (err) => {
  console.log("Redis Client Error", err)
});
This is probably a race condition. Can you add depends_on: redis to your app service? This will at least make sure that the redis service is running before the application is instantiated.
Also, it is good practice to add a retry/wait loop with a timeout around external connections, for better resilience against such race condition problems. They are very common; see the sketch below.
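For example, node-redis v4 accepts a reconnectStrategy under socket, so the client itself backs off and retries instead of giving up on the first refused connection. A minimal sketch, reusing the service name from the compose file above (the backoff values are arbitrary):

import { createClient } from "redis";

const redisClient = createClient({
  socket: {
    host: "redis",
    port: 6379,
    // Wait a little longer after each failed attempt, capped at 3 seconds.
    reconnectStrategy: (retries) => Math.min(retries * 100, 3000)
  }
});

redisClient.on("error", (err) => console.log("Redis Client Error", err));

await redisClient.connect();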

Docker - Redis connect ECONNREFUSED 127.0.0.1:6379

I know this is a common error, but I literally spent the entire day trying to get past it, trying everything I could find online. But I can't find anything that works for me.
I am very new to Docker and using it for my NodeJS + Express + Postgresql + Redis application.
Here is what I have for my docker-compose file:
version: "3.8"
services:
db:
image: postgres:14.1-alpine
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=admin
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
- ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
cache:
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
api:
container_name: api
build:
context: .
# target: production
# image: api
depends_on:
- db
- cache
ports:
- 3000:3000
environment:
NODE_ENV: production
DB_HOST: db
DB_PORT: 5432
DB_USER: postgres
DB_PASSWORD: admin
DB_NAME: postgres
REDIS_HOST: cache
REDIS_PORT: 6379
links:
- db
- cache
volumes:
- ./:/src
volumes:
db:
driver: local
cache:
driver: local
Here is the upper part of my app.js:
const express = require('express')
const app = express()
const cors = require('cors')
const redis = require('redis')

const client = redis.createClient({
  host: 'cache',
  port: 6379,
  legacyMode: true // Also tried without this line, same behavior
})

client.connect()

client.on('connect', () => {
  log('Redis connected')
})

app.use(cors())
app.use(express.json())
And my Dockerfile:
FROM node:16.15-alpine3.14
WORKDIR ./
COPY package.json ./
RUN npm install
COPY ./ ./
EXPOSE 3000 6379
CMD [ "npm", "run", "serve" ]
npm run serve is nodemon ./app.js.
I also already tried to prune the system and network.
What am I missing? Help!
There are two things to keep in mind here.
First of all, the Docker network:
Containers are exposed to your localhost system, so as a "server" you can access each of them directly through the browser or the command line. But
you can only access the containers because they are exposed on a default network that is accessible from the host system - which you can inspect, by the way.
The deployed containers are not exposed to each other by default, so you need to define a virtual network and attach them to it so they can talk to each other through their ports or host names -- the host name being the container_name.
So you need to do two things:
Add a container name to the redis service in the compose file, just like you did on the API.
Create a network and bind all the services to it; one way of doing that would be:
version: "3.8"
Network:
my-network:
name: my-network
services:
....
cache:
container_name: cache
image: redis:6.2-alpine
restart: always
ports:
- "6379:6379"
command: redis-server --save 20 1 --loglevel warning
volumes:
- cache:/data
networks: # add it in all containers that communicate together
- my-network
Then and only then can you use the redis container name as the host, since the docker network creates a host name for the service from the container name.
When you deploy the whole compose file later, the containers will be created and joined to the network on startup, which allows your API app to communicate with the Redis container via the container name as the host name.
Refer to these resources for more details:
Networking on Docker Compose
Docker Network Overview
An unrelated side note:
I personally used redis from npm for some testing projects, but I found that ioredis was much better with TypeScript projects and more predictable in its behavior.
To avoid problems with Redis, make sure to create a password and use it to connect; sometimes Redis randomly considers the client a read-only client and fails to find a read replica, and adding the password solved it for me.
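As an illustration (an assumption, not from the original answer), connecting with a password in node-redis v4 could look like the sketch below; note that in v4 the host and port belong under socket, and the password here is read from a hypothetical environment variable:

const redis = require('redis')

const client = redis.createClient({
  socket: {
    host: 'cache', // the service/container name from docker-compose
    port: 6379
  },
  password: process.env.REDIS_PASSWORD // hypothetical variable, set it yourself
})

client.on('error', (err) => console.log('Redis Client Error', err))

client.connect()

The matching server side would be a redis-server started with the --requirepass flag and the same password.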

Problems connecting Node.js and MongoDB containers using environment variables in Docker Compose

I am creating a simple backend service. I am trying to connect my Node.js container with MongoDB using an env file.
This is my docker-compose file
version: '3.7'
services:
  web-admin:
    build:
      context: .
      dockerfile: dockerfile
    image: node:14.16.0-alpine3.10
    container_name: admin-service
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_HOSTNAME=productDB
      - MONGO_PORT=27017
      - MONGO_DB=product
    volumes:
      - /home/dd/experiment/adminservice:/app
      - /home/dd/experiment/adminservice/node_modules:/app/node_modules
    depends_on:
      - productDB
    networks:
      - admin-network
  productDB:
    image: mongo
    ports:
      - "27017:27017"
    container_name: productDB
    restart: unless-stopped
    volumes:
      - /productdata:/data/db
    networks:
      - admin-network
volumes:
  productdata:
  node_modules:
networks:
  admin-network:
    driver: bridge
and this is my .env file
MONGO_DB=product
MONGO_PORT=27017
and this is the code block from my index.js file.
Also, the retry logic is working perfectly, no error in it.
const {
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env

// db retry logic
retries = 10;
while (retries) {
  try {
    mongoose.connect('mongodb://${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}',
    {
      useNewUrlParser: true,
      keepAlive: true,
    }).then(function () {
      console.log('mongoDB is connected')
    })
    break;
  }
  catch (err) {
    console.log(err)
    retries -= 1;
    console.log('retries left: ${retries}');
  }
}
Now, when I bring the stack up using the docker-compose up -d command, my MongoDB container sets up nicely, but the Node.js container keeps restarting every second. So I know that there is some error which is stopping the container, but I'm not able to trace the mistake.
Please help me.
This is not how you use env vars in a connection string with nodejs: template literals need backticks, but 'mongodb://${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}' is wrapped in single quotes, so the placeholders are passed to mongoose literally, without substitution.
This is also not how you do retry logic with promises: mongoose.connect() returns a promise, so a rejection is not caught by the surrounding try/catch; you need to catch it with .catch(), or await the promise inside the try block.
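A corrected sketch of the retry block (assuming mongoose is required at the top of index.js; the 5-second delay is arbitrary):

const mongoose = require('mongoose')

const { MONGO_HOSTNAME, MONGO_PORT, MONGO_DB } = process.env

// db retry logic: await the connection so a failure lands in the catch block
async function connectWithRetry(retries = 10) {
  while (retries) {
    try {
      // backticks, so the env vars are actually interpolated
      await mongoose.connect(`mongodb://${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}`, {
        useNewUrlParser: true,
        keepAlive: true,
      })
      console.log('mongoDB is connected')
      return
    } catch (err) {
      console.log(err)
      retries -= 1
      console.log(`retries left: ${retries}`)
      // wait 5 seconds before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 5000))
    }
  }
}

connectWithRetry()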
"docker logs [container id]" will help you see the error, so you might figure out whats wrong.
MONGO_HOSTNAME - try setting it to host.docker.internal instead of localhost if you are using a Windows host machine.

Redis works in development, it doesn't work in production with Docker and Heroku

I am trying to develop an express api. It works on my local machine as expected. I am using Docker, but in production with Docker and Heroku, Redis is not working.
Dockerfile
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","start"]
docker-compose.yml file
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
  redis:
    container_name: redis
    image: redis
  app:
    container_name: password-manager-docker
    image: app
    restart: always
    build: .
    ports:
      - '80:5000'
    links:
      - mongo
      - redis
    environment:
      MONGODB_URI: ${MONGODB_URI}
      REDIS_URL: ${REDIS_URL}
      clientID: ${clientID}
      clientSecret: ${clientSecret}
      PORT: ${PORT}
      REDIS_HOST: ${REDIS_HOST}
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
      JWT_EXPIRE: ${JWT_EXPIRE}
      REFRESH_TOKEN: ${REFRESH_TOKEN}
      JWT_REFRESH_SECRET_KEY: ${JWT_REFRESH_SECRET_KEY}
      JWT_REFRESH_EXPIRE: ${JWT_REFRESH_EXPIRE}
      JWT_COOKIE: ${JWT_COOKIE}
      SMTP_HOST: ${SMTP_HOST}
      SMTP_PORT: ${SMTP_PORT}
      SMTP_USER: ${SMTP_USER}
      SMTP_PASS: ${SMTP_PASS}
The redis file:
const asyncRedis = require('async-redis');

// process.env.REDIS_HOST's value is redis
const redisClient = asyncRedis.createClient({ port: 6379, host: process.env.REDIS_HOST || "127.0.0.1" });

redisClient.on("connect", () => {
  console.log(`Redis: ${host}:${port}`);
})

redisClient.on('error', function (err) {
  // eslint-disable-next-line no-console
  console.log(`[Redis] Error ${err}`);
});
The error on heroku is "[Redis] Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379". It worked on heroku without docker, but now it is not working. Thanks for your help.
I finally figured it out. I changed the redis addon in heroku from Heroku Redis to Redis To Go and changed this line
const redisClient = asyncRedis.createClient({port:6379,host:process.env.REDIS_HOST || "127.0.0.1"});
to
const redisClient = asyncRedis.createClient(process.env.REDISTOGO_URL);
Install redis:
sudo apt-get install redis-server
Run this command to check if everything is fine:
sudo service redis-server status
And if you get the message redis-server is running, then it'll resolve.
The environment variable REDIS_HOST isn't set - you have the answer already in your question; the error is: ECONNREFUSED 127.0.0.1:6379
The host address is coming from your if statement: {port:6379,host:process.env.REDIS_HOST || "127.0.0.1"}
If process.env.REDIS_HOST is not set, then 127.0.0.1 is used as the host address. You have to parse a config file, set it via -e KEY=VAL when you run docker-compose, or just edit the line REDIS_HOST: ${REDIS_HOST} and replace ${REDIS_HOST} with the service name (redis) from the docker-compose.yml file, or... there are a lot of possibilities. If you want to know the real IP of the redis container, just check with docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER_ID/NAME>
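For example, hard-coding the service name in the compose file from the question would look like this (only the relevant lines shown):

  app:
    environment:
      # use the redis service name so Docker's internal DNS resolves it
      REDIS_HOST: redis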

Why is my docker node container exiting

I'm trying to run a node container with docker-compose -
services:
  node:
    build:
      context: nodejs/
    ports:
      - "3000:3000"
    volumes:
      - ../nodejs:/usr/src/app
    working_dir: '/usr/src/app'
My Dockerfile
FROM node:6.10
EXPOSE 3000
The problem is it exits immediately -
$ docker-compose up
Starting docker_node_1
Attaching to docker_node_1
docker_node_1 exited with code 0
And there's nothing in the logs - docker logs docker_node_1 returns nothing.
There's a package.json referencing the main script -
{
  ...
  "main": "server.js",
  ...
}
And my main script is just a simple express server -
const express = require('express');
const app = express();
const port = 3000;

app.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err);
  }
  console.log(`server is listening on ${port}`);
});
I guess I'm missing something obvious but I can't see what it is...
The docker command is missing. That is the key concept that represents the container: a sort of isolated process (the process being the command, i.e. your program).
You can do it in the Dockerfile:
CMD npm start
Or in docker-compose.yml:
services:
  node:
    command: npm start
    build:
      context: nodejs/
    ports:
      - "3000:3000"
    volumes:
      - ../nodejs:/usr/src/app
    working_dir: '/usr/src/app'
Both approaches are equivalent, but adjust the command to your needs (npm run, npm run build, etc.).
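One assumption worth checking: npm start runs the scripts.start entry of package.json, and only falls back to node server.js when no start script is defined (the question shows only the main field, which npm start ignores). Declaring the script explicitly avoids relying on that fallback:

{
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}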
