I'm trying to connect my app to Redis, but I get:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379
When I run:
docker exec -it ed02b7e19810 ping test_redis_1
I receive all packets.
The Redis container also logs:
* Running mode=standalone, port=6379
* Ready to accept connections
(I get these warnings, but I don't think they're related:
Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128)
This is my docker-compose.yaml:
version: '3'
services:
  test-service:
    build: .
    volumes:
      - ./:/usr/test-service/
    ports:
      - 5001:3000
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
Dockerfile
FROM node:8.11.2-alpine
WORKDIR /usr/test-service/
COPY . /usr/test-service/
RUN yarn install
EXPOSE 3000
CMD ["yarn", "run", "start"]
app.js
const Redis = require('ioredis');
const redis = new Redis();
redis.set('foo', 'bar');
redis.get('foo').then(function (result) {
  console.log(result);
});
I've also tried the redis package, but I still can't connect:
var redis = require("redis"),
  client = redis.createClient();

client.on("error", function (err) {
  console.log("Error " + err);
});
Getting:
Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
With that specific docker-compose.yml there is no Redis on 127.0.0.1. You should use redis as the host, since services on the same Docker network can find each other by their service names via DNS.
const Redis = require('ioredis');
const redis = new Redis({ host: 'redis' });
Furthermore, depends_on does not wait for the redis container to be ready before starting your service; it only launches it first. It is your job to wait before starting app.js, or to handle the retry inside app.js.
ioredis comes with a reconnection strategy, so you may want to try that first.
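For example, something like this (a minimal sketch, assuming the redis service name from the compose file above; retryStrategy is ioredis's built-in hook that controls the delay between reconnection attempts):
const Redis = require('ioredis');

// host is the Compose service name, resolved by Docker's internal DNS
const redis = new Redis({
  host: 'redis',
  port: 6379,
  retryStrategy(times) {
    return Math.min(times * 200, 2000); // back off up to 2 seconds, keep retrying
  },
});

redis.on('error', (err) => console.log('[ioredis]', err.message));

redis.set('foo', 'bar')
  .then(() => redis.get('foo'))
  .then((result) => console.log(result));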
You can see my answer here regarding that issue:
Wait node.js until Logstash is ready using containers
I am trying to connect to Redis from Node.js in Docker, but I am facing an error. The Redis container starts successfully and I can ping it with redis-cli, so I think the problem might be somewhere else. The code and configuration below work on Ubuntu 20.04 but show this error on Arch Linux (Docker version 20.10.17):
Redis Client Error Error: connect EHOSTUNREACH 172.18.0.2:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1237:16) {
errno: -113,
code: 'EHOSTUNREACH',
syscall: 'connect',
address: '172.18.0.2',
port: 6379
}
Here is my docker compose file:
version: "3.7"
services:
app:
image: node:18.4-alpine
command: sh -c "yarn run dev"
ports:
- "4000:4000"
working_dir: /app
volumes:
- ./:/app
env_file:
- ./.env
redis:
image: redislabs/rejson:latest
ports:
- "6379:6379"
Here, I am trying to connect to Redis from Node.js (node-redis 4.1.1):
import { createClient } from "redis";

const socket = {
  host: "redis",
  port: 6379
}

export const redisClient = createClient({
  socket,
  database: 0,
  legacyMode: true
})

redisClient.on("error", (err) => {
  console.log("Redis Client Error", err)
});
This is probably a race condition. Can you add depends_on: redis to your app service? This will at least make sure that the redis service is running before the application is instantiated.
Also, it is good practice to add a retry/wait loop with a timeout for external connectivity in your application, for better resilience against such race conditions. They are very common.
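For instance, with node-redis 4.x (as in the question), a capped backoff can be configured via socket.reconnectStrategy; a rough sketch:
const { createClient } = require("redis");

// reconnectStrategy returns the delay (ms) before the next attempt,
// so a refused connection at startup is retried instead of crashing the app
const redisClient = createClient({
  socket: {
    host: "redis",   // the Compose service name
    port: 6379,
    reconnectStrategy: (retries) => Math.min(retries * 100, 3000),
  },
  database: 0,
});

redisClient.on("error", (err) => console.log("Redis Client Error", err));

(async () => {
  await redisClient.connect();
  console.log("Connected to redis:6379");
})();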
I am trying to develop an Express API. It works on my local machine as expected. I am using Docker, but in production with Docker and Heroku, Redis is not working.
Dockerfile
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["npm","start"]
docker-compose.yml file
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
  redis:
    container_name: redis
    image: redis
  app:
    container_name: password-manager-docker
    image: app
    restart: always
    build: .
    ports:
      - '80:5000'
    links:
      - mongo
      - redis
    environment:
      MONGODB_URI: ${MONGODB_URI}
      REDIS_URL: ${REDIS_URL}
      clientID: ${clientID}
      clientSecret: ${clientSecret}
      PORT: ${PORT}
      REDIS_HOST: ${REDIS_HOST}
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
      JWT_EXPIRE: ${JWT_EXPIRE}
      REFRESH_TOKEN: ${REFRESH_TOKEN}
      JWT_REFRESH_SECRET_KEY: ${JWT_REFRESH_SECRET_KEY}
      JWT_REFRESH_EXPIRE: ${JWT_REFRESH_EXPIRE}
      JWT_COOKIE: ${JWT_COOKIE}
      SMTP_HOST: ${SMTP_HOST}
      SMTP_PORT: ${SMTP_PORT}
      SMTP_USER: ${SMTP_USER}
      SMTP_PASS: ${SMTP_PASS}
The Redis file:
const asyncRedis = require('async-redis');

// process.env.REDIS_HOST's value is redis
const redisClient = asyncRedis.createClient({port:6379,host:process.env.REDIS_HOST || "127.0.0.1"});

redisClient.on("connect", () => {
  console.log(`Redis: ${process.env.REDIS_HOST || "127.0.0.1"}:6379`);
});

redisClient.on('error', function(err) {
  // eslint-disable-next-line no-console
  console.log(`[Redis] Error ${err}`);
});
The error on Heroku is "[Redis] Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379". It worked on Heroku without Docker, but now it is not working. Thanks for your help.
I finally figured it out. I changed the Redis add-on in Heroku from Heroku Redis to Redis To Go and changed this line
const redisClient = asyncRedis.createClient({port:6379,host:process.env.REDIS_HOST || "127.0.0.1"});
to
const redisClient = asyncRedis.createClient(process.env.REDISTOGO_URL);
Install Redis:
sudo apt-get install redis-server
Run this command to check if everything is fine:
sudo service redis-server status
If you get the message redis-server is running, then it'll resolve.
The environment variable REDIS_HOST isn't set; you already have the answer in your question, since the error is ECONNREFUSED 127.0.0.1:6379.
The host address comes from your fallback expression: {port:6379,host:process.env.REDIS_HOST || "127.0.0.1"}
If process.env.REDIS_HOST is not set, 127.0.0.1 is used as the host address. You can pass a config file, set the variable via -e KEY=VAL when you run docker-compose, or simply edit the line REDIS_HOST: ${REDIS_HOST} and replace ${REDIS_HOST} with the service name (redis) from the docker-compose.yml file, or..., there are a lot of possibilities. If you want to know the real IP of the redis container, check with docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER_ID/NAME>
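A small sketch of that fix applied in the application code instead: default to the service name rather than 127.0.0.1 (REDIS_PORT here is a hypothetical extra variable, not something from the question):
const asyncRedis = require('async-redis');

// fall back to the Compose service name instead of 127.0.0.1
const host = process.env.REDIS_HOST || 'redis';
const port = Number(process.env.REDIS_PORT) || 6379; // REDIS_PORT is hypothetical

const redisClient = asyncRedis.createClient({ port, host });

redisClient.on('connect', () => console.log(`Redis: ${host}:${port}`));
redisClient.on('error', (err) => console.log(`[Redis] Error ${err}`));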
I'm trying to use Docker Compose to connect a Node.js container to a Postgres container. I can run the Node server fine, and I can also connect to the Postgres container from my local machine since I've mapped the ports, but I'm unable to get the Node container to connect to the database.
Here's the compose YAML file:
version: "3"
services:
api_dev:
build: ./api
command: sh -c "sleep 5; npm run dev" # I added the sleep in to wait for the DB to load but it doesn't work either way with or without this sleep part
container_name: pc-api-dev
depends_on:
- dba
links:
- dba
ports:
- 8001:3001
volumes:
- ./api:/home/app/api
- /home/app/api/node_modules
working_dir: /home/app/api
restart: on-failure
dba:
container_name: dba
image: postgres
expose:
- 5432
ports:
- '5431:5432'
env_file:
- ./api/db.env
In my Node container, I'm waiting for the Node server to spin up and attempting to connect to the database in the other container like so:
const { Client } = require('pg')

const server = app.listen(app.get('port'), async () => {
  console.log('App running...');

  const client = new Client({
    user: 'db-user',
    host: 'dba', // host is set to the service name of the DB in the compose file
    database: 'db-name',
    password: 'db-pass',
    port: 5431,
  })

  try {
    await client.connect()
    console.log(client) // x - can't see this

    client.query('SELECT NOW()', (err, res) => {
      console.log(err, res) // x - can't see this
      client.end()
    })

    console.log('test') // x - can't see this
  } catch (e) {
    console.log(e) // x - also can't see this
  }
});
After reading up on it today in depth, I've seen that the DB host in the connection code above can't be localhost, as that refers to the container which is currently running, so it must be set to the service name of the container we're connecting to (dba in this case). I've also mapped the ports, and I can see the DB is ready and accepting connections well before my Node server starts.
However, not only can I not connect to the database from Node, I'm also unable to see any success or error console logs from the try/catch. It's as if the connection never resolves and never times out, but I'm not sure.
I've also seen that "listen_addresses" needs to be updated so other containers can connect to the Postgres container, but I'm struggling to find out how to do this and to test it, since I can't debug the actual issue due to the lack of logs.
Any direction would be appreciated, thanks.
You are setting the container name, and you can reference the container by it. For example:
db:
  container_name: container_db
And the host:port:
DB_URL: container_db:5432
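Applied to the question's code, that would look roughly like this (a sketch, assuming the dba service name from the compose file above; the key point is using the container's internal port 5432, since the 5431 mapping only applies from the host machine):
const { Client } = require('pg');

const client = new Client({
  user: 'db-user',
  host: 'dba',   // the Compose service name, resolved by Docker's DNS
  database: 'db-name',
  password: 'db-pass',
  port: 5432,    // the container's own port, not the host-mapped 5431
});

client.connect()
  .then(() => client.query('SELECT NOW()'))
  .then((res) => {
    console.log(res.rows[0]);
    return client.end();
  })
  .catch((err) => console.error('Postgres connection failed', err));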
I'm trying to dockerize a Node.js application which connects to MongoDB using mongoose. It succeeds any time I run node index.js from the shell when the connection URL is mongodb://localhost:27017/deposit.
If I restart my computer and then try to run the dockerized project (with mongo instead of localhost in the URL) with the command docker-compose up, it fails to connect to MongoDB. But when I run the same command again, it succeeds.
So my question is: why can't Node connect to MongoDB on the first try after the computer is restarted?
PS: Docker is running when I'm trying this.
connection.js
const mongoose = require('mongoose');
const connection = "mongodb://mongo:27017/deposit";
const connectDb = () => {
  mongoose.connect(connection, {useNewUrlParser: true, useUnifiedTopology: true})
    .then(res => console.log("Connected to DB"))
    .catch(err => console.log('>> Failed to connect to MongoDB, retrying...'));
};
module.exports = connectDb;
Dockerfile
FROM node:latest
RUN mkdir -p /app
WORKDIR /app
#/usr/src/app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 7500
# ENTRYPOINT ["node"]
CMD ["node", "src/index.js"]
docker-compose.yml
version: "3"
services:
deposit:
container_name: deposit
image: test/deposit
restart: always
build: .
network_mode: host
ports:
- "7500:7500"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- /data:/data/db
network_mode: host
ports:
- '27017:27017'
In your case, the Node application starts before Mongo is ready. There are two approaches to tackle this problem: handle it in your docker-compose setup, or handle it in your application.
You can use wait-for-it.sh or write a wrapper script (both described here) to make sure that your Node application starts after the DB is ready.
But as quoted from the Docker documentation, it is better to handle this in your application:
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code,
both at startup and whenever a connection is lost for any reason
You can implement the Mongo retry as below (described in this answer):
var connectWithRetry = function() {
  return mongoose.connect(mongoUrl, function(err) {
    if (err) {
      console.error('Failed to connect to mongo on startup - retrying in 5 sec', err);
      setTimeout(connectWithRetry, 5000);
    }
  });
};

connectWithRetry();
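The same retry can also be written against the promise API (a sketch; note that recent Mongoose versions no longer accept a callback in connect()):
const mongoose = require('mongoose');

const mongoUrl = 'mongodb://mongo:27017/deposit'; // the Compose service name as host

function connectWithRetry() {
  return mongoose.connect(mongoUrl)
    .then(() => console.log('Connected to DB'))
    .catch((err) => {
      console.error('Failed to connect to mongo on startup - retrying in 5 sec', err);
      setTimeout(connectWithRetry, 5000);
    });
}

connectWithRetry();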
As mentioned in the comment, there is the possibility that the DB is not yet ready to accept connections, so one way is to add retry logic; the other option is serverSelectionTimeoutMS:
serverSelectionTimeoutMS -
With useUnifiedTopology, the MongoDB driver will try to find a server
to send any given operation to, and keep retrying for
serverSelectionTimeoutMS milliseconds. If not set, the MongoDB driver
defaults to using 30000 (30 seconds).
So try with the option below:
const mongoose = require('mongoose');

const uri = 'mongodb://mongo:27017/deposit?retryWrites=true&w=majority';

mongoose.connect(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  serverSelectionTimeoutMS: 50000
}).catch(err => console.log(err.reason));
But again, if your DB init script gets bigger it will take more time, so go with retry logic if this does not work. In the above script it will wait for 50 seconds.
I can't figure out how to connect to my redis service from my app service. I'm using Docker for Mac (Docker version 18.03.1-ce, build 9ee9f40).
I've tried connecting the various ways I've found on similar questions:
const client = redis.createClient({ host: 'localhost', port: 6379});
const client = redis.createClient({ host: 'redis', port: 6379});
const client = redis.createClient('redis://redis:6379');
const client = redis.createClient('redis', 6379); // and reversed args
I always get some form of:
Error: Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
Error: Redis connection to redis:6379 failed - connect ECONNREFUSED 172.20.0.2:6379
Docker containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0fd798d58561 app_app "pm2-runtime start e…" 2 seconds ago Up 7 seconds app
65d148e498f7 app_redis "docker-entrypoint.s…" About a minute ago Up 8 seconds 0.0.0.0:6379->6379/tcp redis
Redis works:
$ docker exec -it redis /bin/bash
root#65d148e498f7:/data# redis-cli ping
PONG
Redis Dockerfile (pretty simple)
FROM redis:4.0.9
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
app Dockerfile
FROM node:10.3.0-slim
RUN mkdir -p /app
COPY src/* /app/
CMD ["pm2-runtime", "start", "/app/ecosystem.config.js"]
docker-compose.yml
version: "3"
services:
redis:
build: ./redis/
container_name: redis
restart: unless-stopped
ports:
- "6379:6379"
expose:
- "6379"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- 'API_PORT=6379'
- 'NODE_ENV=production'
app:
depends_on:
- redis
build: ./app/
container_name: app
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /app/node_modules
environment:
- 'NODE_ENV=production'
It looks like your redis image is configured to listen on 127.0.0.1 rather than all interfaces. This is not an issue with the default Redis images, so either use the official image from Docker Hub, or correct your configuration to listen on 0.0.0.0.
You'll be able to verify this with netshoot:
docker run --rm --net container:app_redis nicolaka/netshoot netstat -ltn
In the Redis config, listening on all interfaces is done by commenting out the "bind" line in redis.conf.
Let me explain it in simple language. When you run docker-compose up, it runs redis and app in separate containers. Now your app needs to connect to the redis container (remember, Redis is not at your machine's localhost; it runs inside a container on the default port 6379). By default Docker keeps the app container and the redis container on the same network, and you can access a container by its service name (which in your case is redis or app). So to access Redis from the app container, all you need is the default port 6379 with the service name ("redis") as the host.
A Node application running in a container can then access Redis (also running in a container) like this:
const redis = require("redis");
const client = redis.createClient(6379, "service-name-for-redis-container");
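With the compose file above the service name is redis, so for example (a small usage sketch using the same legacy createClient(port, host) signature as in the question):
const redis = require("redis");

// "redis" is the service name from docker-compose.yml, 6379 is the default Redis port
const client = redis.createClient(6379, "redis");

client.on("ready", () => console.log("Connected to redis:6379"));
client.on("error", (err) => console.log("Redis error", err));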
I solved this problem by changing the Redis host from 'localhost' to 'redis', for example:
REDIS_HOST=redis
REDIS_PORT=6379
After the change, my Docker service started communicating with Redis.
Original forum answer: https://forums.docker.com/t/connecting-redis-from-my-network-in-docker-net-core-application/92405
In my case the problem was that I was binding a different port on redis:
redis:
  image: redis
  ports:
    - 49155:6379
And I was trying to connect to port 49155, but I needed to connect through port 6379, since the connection comes from another service on the Docker network.
localhost from the app container's perspective refers to the app container itself, so it won't reach Redis. The best bet is to use redis (the service name) or the host's IP address.
If you want to reach redis from the app container, you'll need to link them or put them on the same network. Add a networks property to both services, using the same network name; Docker will then provide valid DNS lookups for the service names.
See the official docs at https://docs.docker.com/compose/compose-file/#networks (for the service: property) and https://docs.docker.com/compose/compose-file/#network-configuration-reference (for the top-level networks property).