ioredis connection keeps resetting when connecting to local redis cluster from docker container - node.js

I have a Docker Compose containerized client/server Node app that is failing to establish a stable connection to a Redis cluster running on my local machine. The cluster has 6 nodes (3 masters, 3 replicas). Every time I start my app and attempt to connect to Redis, the connect event is spammed and I get the following error on my client:
Proxy error: Could not proxy request /check-login from localhost:3000 to http://server.
See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNRESET)
I have made sure to configure the Redis cluster with protected-mode no and bind 0.0.0.0 to allow remote access. I have confirmed that I can access the cluster locally by pinging one of the cluster nodes with redis-cli -h 127.0.0.1 -p 30001:
127.0.0.1:30001> ping
PONG
127.0.0.1:30001> exit
I am setting my REDIS_HOSTS environment variable to "host.docker.internal:30001,host.docker.internal:30005,host.docker.internal:30006". This should allow me to connect to the Redis cluster running at 127.0.0.1 on my host machine. My Node application code:
const Redis = require("ioredis");

const hosts = process.env.REDIS_HOSTS.split(",").map((connection) => {
  const [host, port] = connection.split(":");
  return {
    host,
    port,
  };
});

const client = new Redis.Cluster(hosts, {
  enableAutoPipelining: true,
  slotsRefreshTimeout: 100000,
});

client.on("error", (error) => {
  console.log("redis connection failed: ", error);
});

client.on("connect", () => {
  console.log("redis connection established");
});

module.exports = client;
My ioredis logs:
2022-02-11T23:52:09.970Z ioredis:cluster status: [empty] -> connecting
info: Listening on port 80
2022-02-11T23:52:15.449Z ioredis:cluster resolved hostname host.docker.internal to IP 192.168.65.2
2022-02-11T23:52:15.476Z ioredis:cluster:connectionPool Reset with [
{ host: '192.168.65.2', port: 30001 },
{ host: '192.168.65.2', port: 30002 },
{ host: '192.168.65.2', port: 30003 }
]
2022-02-11T23:52:15.482Z ioredis:cluster:connectionPool Connecting to 192.168.65.2:30001 as master
2022-02-11T23:52:15.504Z ioredis:redis status[192.168.65.2:30001]: [empty] -> wait
2022-02-11T23:52:15.511Z ioredis:cluster:connectionPool Connecting to 192.168.65.2:30002 as master
2022-02-11T23:52:15.517Z ioredis:redis status[192.168.65.2:30002]: [empty] -> wait
2022-02-11T23:52:15.519Z ioredis:cluster:connectionPool Connecting to 192.168.65.2:30003 as master
2022-02-11T23:52:15.521Z ioredis:redis status[192.168.65.2:30003]: [empty] -> wait
2022-02-11T23:52:15.530Z ioredis:cluster getting slot cache from 192.168.65.2:30002
2022-02-11T23:52:15.541Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: [empty] -> wait
2022-02-11T23:52:15.590Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: wait -> connecting
2022-02-11T23:52:15.603Z ioredis:redis queue command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> cluster([ 'slots' ])
2022-02-11T23:52:15.614Z ioredis:cluster:subscriber selected a subscriber 192.168.65.2:30001
2022-02-11T23:52:15.621Z ioredis:redis status[192.168.65.2:30001 (ioredis-cluster(subscriber))]: [empty] -> wait
2022-02-11T23:52:15.622Z ioredis:cluster:subscriber started
2022-02-11T23:52:15.734Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: connecting -> connect
2022-02-11T23:52:15.737Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: connect -> ready
2022-02-11T23:52:15.739Z ioredis:connection set the connection name [ioredis-cluster(refresher)]
2022-02-11T23:52:15.742Z ioredis:redis write command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> client([ 'setname', 'ioredis-cluster(refresher)' ])
2022-02-11T23:52:15.749Z ioredis:connection send 1 commands in offline queue
2022-02-11T23:52:15.750Z ioredis:redis write command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> cluster([ 'slots' ])
2022-02-11T23:52:15.781Z ioredis:cluster cluster slots result count: 3
2022-02-11T23:52:15.783Z ioredis:cluster cluster slots result [0]: slots 0~5460 served by [ '127.0.0.1:30001', '127.0.0.1:30004' ]
2022-02-11T23:52:15.788Z ioredis:cluster cluster slots result [1]: slots 5461~10922 served by [ '127.0.0.1:30002', '127.0.0.1:30005' ]
2022-02-11T23:52:15.792Z ioredis:cluster cluster slots result [2]: slots 10923~16383 served by [ '127.0.0.1:30003', '127.0.0.1:30006' ]
2022-02-11T23:52:15.849Z ioredis:cluster:connectionPool Reset with [
{ host: '127.0.0.1', port: 30001, readOnly: false },
{ host: '127.0.0.1', port: 30004, readOnly: true },
{ host: '127.0.0.1', port: 30002, readOnly: false },
{ host: '127.0.0.1', port: 30005, readOnly: true },
{ host: '127.0.0.1', port: 30003, readOnly: false },
{ host: '127.0.0.1', port: 30006, readOnly: true }
]
2022-02-11T23:52:15.850Z ioredis:cluster:connectionPool Disconnect 192.168.65.2:30001 because the node does not hold any slot
2022-02-11T23:52:15.851Z ioredis:redis status[192.168.65.2:30001]: wait -> close
2022-02-11T23:52:15.851Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.852Z ioredis:redis status[192.168.65.2:30001]: close -> end
2022-02-11T23:52:15.857Z ioredis:cluster:connectionPool Remove 192.168.65.2:30001 from the pool
2022-02-11T23:52:15.858Z ioredis:cluster:connectionPool Disconnect 192.168.65.2:30002 because the node does not hold any slot
2022-02-11T23:52:15.858Z ioredis:redis status[192.168.65.2:30002]: wait -> close
2022-02-11T23:52:15.859Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.859Z ioredis:redis status[192.168.65.2:30002]: close -> end
2022-02-11T23:52:15.861Z ioredis:cluster:connectionPool Remove 192.168.65.2:30002 from the pool
2022-02-11T23:52:15.861Z ioredis:cluster:connectionPool Disconnect 192.168.65.2:30003 because the node does not hold any slot
2022-02-11T23:52:15.861Z ioredis:redis status[192.168.65.2:30003]: wait -> close
2022-02-11T23:52:15.865Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.866Z ioredis:redis status[192.168.65.2:30003]: close -> end
2022-02-11T23:52:15.866Z ioredis:cluster:connectionPool Remove 192.168.65.2:30003 from the pool
2022-02-11T23:52:15.867Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30001 as master
2022-02-11T23:52:15.869Z ioredis:redis status[127.0.0.1:30001]: [empty] -> wait
2022-02-11T23:52:15.871Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30004 as slave
2022-02-11T23:52:15.873Z ioredis:redis status[127.0.0.1:30004]: [empty] -> wait
2022-02-11T23:52:15.874Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30002 as master
2022-02-11T23:52:15.877Z ioredis:redis status[127.0.0.1:30002]: [empty] -> wait
2022-02-11T23:52:15.877Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30005 as slave
2022-02-11T23:52:15.882Z ioredis:redis status[127.0.0.1:30005]: [empty] -> wait
2022-02-11T23:52:15.883Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30003 as master
2022-02-11T23:52:15.885Z ioredis:redis status[127.0.0.1:30003]: [empty] -> wait
2022-02-11T23:52:15.886Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30006 as slave
2022-02-11T23:52:15.887Z ioredis:redis status[127.0.0.1:30006]: [empty] -> wait
2022-02-11T23:52:15.893Z ioredis:cluster status: connecting -> connect
2022-02-11T23:52:15.904Z ioredis:redis status[127.0.0.1:30002]: wait -> connecting
2022-02-11T23:52:15.906Z ioredis:redis queue command[127.0.0.1:30002]: 0 -> cluster([ 'info' ])
2022-02-11T23:52:15.916Z ioredis:cluster:subscriber subscriber has left, selecting a new one...
2022-02-11T23:52:15.917Z ioredis:redis status[192.168.65.2:30001 (ioredis-cluster(subscriber))]: wait -> close
2022-02-11T23:52:15.918Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.918Z ioredis:redis status[192.168.65.2:30001 (ioredis-cluster(subscriber))]: close -> end
2022-02-11T23:52:15.919Z ioredis:cluster:subscriber selected a subscriber 127.0.0.1:30004
2022-02-11T23:52:15.921Z ioredis:redis status[127.0.0.1:30004 (ioredis-cluster(subscriber))]: [empty] -> wait
2022-02-11T23:52:16.000Z ioredis:connection error: { Error: connect ECONNREFUSED 127.0.0.1:30002
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 30002 }
2022-02-11T23:52:16.030Z ioredis:redis status[127.0.0.1:30002]: connecting -> close
2022-02-11T23:52:16.031Z ioredis:connection skip reconnecting because `retryStrategy` is not a function
2022-02-11T23:52:16.032Z ioredis:redis status[127.0.0.1:30002]: close -> end
2022-02-11T23:52:16.034Z ioredis:cluster:connectionPool Remove 127.0.0.1:30002 from the pool
2022-02-11T23:52:16.036Z ioredis:cluster Ready check failed (Error: Connection is closed.
at close (/usr/src/app/node_modules/ioredis/built/redis/event_handler.js:184:25)
at Socket.<anonymous> (/usr/src/app/node_modules/ioredis/built/redis/event_handler.js:155:20)
at Object.onceWrapper (events.js:520:26)
at Socket.emit (events.js:400:28)
at Socket.emit (domain.js:470:12)
at TCP.<anonymous> (net.js:675:12)). Reconnecting...
2022-02-11T23:52:16.042Z ioredis:cluster status: connect -> disconnecting
...

The clue to the solution was found in the following log snippet:
2022-02-11T23:52:15.750Z ioredis:redis write command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> cluster([ 'slots' ])
2022-02-11T23:52:15.781Z ioredis:cluster cluster slots result count: 3
2022-02-11T23:52:15.783Z ioredis:cluster cluster slots result [0]: slots 0~5460 served by [ '127.0.0.1:30001', '127.0.0.1:30004' ]
2022-02-11T23:52:15.788Z ioredis:cluster cluster slots result [1]: slots 5461~10922 served by [ '127.0.0.1:30002', '127.0.0.1:30005' ]
2022-02-11T23:52:15.792Z ioredis:cluster cluster slots result [2]: slots 10923~16383 served by [ '127.0.0.1:30003', '127.0.0.1:30006' ]
The cluster nodes advertise each other at 127.0.0.1 (visible in the CLUSTER SLOTS result above). Inside the container, 127.0.0.1 refers to the container itself, so the connect ECONNREFUSED errors appeared when the ioredis client attempted to use those advertised addresses to establish the cluster connection.
I had to use the natMap option in the ioredis client to remap the cluster's internally advertised addresses to an address reachable from the Docker container:
let natMap = {};
const localHost = "127.0.0.1";

const hosts = process.env.REDIS_HOSTS.split(",").map((connection) => {
  const [host, port] = connection.split(":");
  // Assign NAT host address mappings for accessing the
  // local redis cluster from the containerized network.
  natMap[`${localHost}:${port}`] = { host, port };
  return {
    host,
    port,
  };
});

// natMap output (for the REDIS_HOSTS value above):
// {
//   "127.0.0.1:30001": { host: "host.docker.internal", port: "30001" },
//   "127.0.0.1:30005": { host: "host.docker.internal", port: "30005" },
//   "127.0.0.1:30006": { host: "host.docker.internal", port: "30006" },
// }
// Note: every address the cluster advertises needs an entry here,
// so REDIS_HOSTS should list all six nodes (ports 30001-30006).

// create redis cluster client
const client = new Redis.Cluster(hosts, {
  enableAutoPipelining: true,
  slotsRefreshTimeout: 100000,
  natMap,
});
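As a quick smoke test that the remapping works (a minimal sketch of my own; the key name is arbitrary), you can round-trip a value once the cluster reports ready:

client.once("ready", async () => {
  // "smoke-test" is an arbitrary example key
  await client.set("smoke-test", "hello");
  console.log(await client.get("smoke-test")); // prints "hello"
});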

Related

can't connect to kafka container in the local network

I am running Zookeeper and Kafka instances from the following Docker Compose file on my Ubuntu 18.04:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test-topic:5:2"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
The containers are up and running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3d84a6b39f7 wurstmeister/kafka "start-kafka.sh" 3 minutes ago Up 3 minutes 0.0.0.0:49157->9092/tcp desktop_kafka_1
b2012f08b3f9 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 5 hours ago Up 3 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp desktop_zookeeper_1
However, the Kafka client fails to connect to Kafka:
const { Kafka, logLevel, CompressionCodecs, CompressionTypes } = require('kafkajs');

const kafka = new Kafka({
  logLevel: logLevel.DEBUG,
  brokers: ['localhost:9092'], // tried on ['192.168.1.6:9092']
  clientId: 'example-producer',
})

const topic = 'topic-test'
const producer = kafka.producer()

const getRandomNumber = () => Math.round(Math.random(10) * 1000)
const createMessage = num => ({
  key: `key-${num}`,
  value: `value-${num}-${new Date().toISOString()}`,
})

const sendMessage = () => {
  return producer
    .send({
      topic,
      compression: CompressionTypes.GZIP,
      messages: Array(getRandomNumber())
        .fill()
        .map(_ => createMessage(getRandomNumber())),
    })
    .then(console.log)
    .catch(e => console.error(`[example/producer] ${e.message}`, e))
}

const run = async () => {
  await producer.connect()
  setInterval(sendMessage, 3000)
}

run().catch(e => console.error(`[example/producer] ${e.message}`, e))
the code outputs
[example/producer] Connection error: connect ECONNREFUSED 127.0.0.1:9092 KafkaJSNonRetriableError
Caused by: KafkaJSConnectionError: Connection error: connect ECONNREFUSED 127.0.0.1:9092
at Socket.onError (/home/xsz/Desktop/node_modules/kafkajs/src/network/connection.js:152:23)
at Socket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
name: 'KafkaJSNumberOfRetriesExceeded',
retriable: false,
helpUrl: undefined,
originalError: KafkaJSConnectionError: Connection error: connect ECONNREFUSED 127.0.0.1:9092
at Socket.onError (/home/xsz/Desktop/node_modules/kafkajs/src/network/connection.js:152:23)
at Socket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
retriable: true,
helpUrl: undefined,
broker: 'localhost:9092',
code: 'ECONNREFUSED'
},
retryCount: 5,
retryTime: 10304
}
In the docker-compose.yaml, the KAFKA_ADVERTISED_HOST_NAME is configured as localhost or 192.168.1.6 (the host machine's local IP address); both show the same error as above.
Note: the ip addr show command outputs:
1: lo: <LOOPBACK,UP,LOWER_UP>
inet 127.0.0.1/8 scope host lo
3: wlx08570033e6c1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
inet 192.168.1.6/24 brd 192.168.1.255 scope global noprefixroute wlx08570033e6c1
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
5: br-c66cb3672872: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu inet 172.18.0.1/16 brd 172.18.255.255 scope global br-c66cb3672872
22: br-521b1eb41768: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-521b1eb41768
Latest try:
I made a modification to the docker-compose.yaml:
KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
and ran the code on the host machine:
const kafka = new Kafka({
  logLevel: logLevel.DEBUG,
  brokers: ['172.17.0.1:9092'], // tried on ['192.168.1.6:9092']
  clientId: 'example-producer',
})
but I am still facing the same issue. What is wrong?
Some changes need to be made. First, change the connection host to localhost:
const kafka = new Kafka({
  logLevel: logLevel.DEBUG,
  brokers: ['localhost:9092'],
  clientId: 'example-producer',
})
Then, inside the docker-compose file, change the ports and add links:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test-topic:5:2"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Last, you only use one broker in your compose file. I looked at your configuration using Kafka Tool 2.0, and it seems there is only one partition for "topic-test"; that's why you see this error. Use more brokers to avoid it:
{"level":"DEBUG","timestamp":"2020-12-15T07:54:05.315Z","logger":"kafkajs","message":"[Connection] Request Metadata(key: 3, version: 6)","broker":"localhost:9092","clientId":"example-producer","correlationId":25,"expectResponse":true,"size":47}There is no listener on the leader broker that matches the listener on which metadata request was processed KafkaJSNonRetriableError Caused by: KafkaJSProtocolError: There is no listener on the leader broker that matches the listener on which metadata request was processed.
Tested on Linux Mint 20.
output:
{"level":"DEBUG","timestamp":"2020-12-15T06:55:59.662Z","logger":"kafkajs","message":"[Connection] Response Produce(key: 0, version: 7)","broker":"localhost:9092","clientId":"example-producer","correlationId":46,"size":58,"data":{"topics":[{"topicName":"topic-test","partitions":[{"partition":0,"errorCode":0,"baseOffset":"18379","logAppendTime":"-1","logStartOffset":"0"}]}],"throttleTime":0,"clientSideThrottleTime":0}}
[
  {
    topicName: 'topic-test',
    partition: 0,
    errorCode: 0,
    baseOffset: '18379',
    logAppendTime: '-1',
    logStartOffset: '0'
  }
]
You haven't mapped the port of your Kafka instance correctly in your docker-compose file. You should change
ports:
  - "9092"
to
ports:
  - "9092:9092"
because right now your container gets assigned an arbitrary host port:
0.0.0.0:49157->9092/tcp
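As a quick way to verify that the broker is reachable on its advertised address after these changes (a minimal sketch using the kafkajs admin client, assuming the fixed compose file above):

const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  brokers: ['localhost:9092'],
  clientId: 'connectivity-check',
});

const check = async () => {
  const admin = kafka.admin();
  await admin.connect(); // fails fast if the advertised listener is unreachable
  console.log('topics:', await admin.listTopics());
  await admin.disconnect();
};

check().catch(console.error);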

Connecting to an Azure Redis Cluster using node.js ioredis not working

I've been trying to connect to a Redis three node cluster in Azure using ioredis.
When I connect using the Redis.Cluster constructor:
new Redis.Cluster(['host.redis.cache.windows.net', 6380], {
  scaleReads: 'all',
  slotsRefreshTimeout: 2000,
  redisOptions: {
    password: 'some-secret',
    tls: true as any
  },
});
The error I get is:
2020-06-04T13:05:41.787Z ioredis:cluster getting slot cache from 127.0.0.1:6380
2020-06-04T13:05:41.788Z ioredis:redis status[127.0.0.1:6380 (ioredisClusterRefresher)]: [empty] -> wait
2020-06-04T13:05:41.788Z ioredis:redis status[127.0.0.1:6380 (ioredisClusterRefresher)]: wait -> connecting
2020-06-04T13:05:41.788Z ioredis:redis queue command[127.0.0.1:6380 (ioredisClusterRefresher)]: 0 -> cluster([ 'slots' ])
2020-06-04T13:05:41.790Z ioredis:connection error: Error: connect ECONNREFUSED 127.0.0.1:6380
2020-06-04T13:05:41.791Z ioredis:redis status[127.0.0.1:6380 (ioredisClusterRefresher)]: connecting -> close
2020-06-04T13:05:41.791Z ioredis:connection skip reconnecting because `retryStrategy` is not a function
2020-06-04T13:05:41.791Z ioredis:redis status[127.0.0.1:6380 (ioredisClusterRefresher)]: close -> end
2020-06-04T13:05:41.792Z [auth-middleware] Redis error { ClusterAllFailedError: Failed to refresh slots cache.
at tryNode (/app/node_modules/ioredis/built/cluster/index.js:359:31)
at /app/node_modules/ioredis/built/cluster/index.js:376:21
at duplicatedConnection.cluster.utils_2.timeout (/app/node_modules/ioredis/built/cluster/index.js:624:24)
at run (/app/node_modules/ioredis/built/utils/index.js:156:22)
at tryCatcher (/app/node_modules/standard-as-callback/built/utils.js:11:23)
at promise.then (/app/node_modules/standard-as-callback/built/index.js:30:51)
at process._tickCallback (internal/process/next_tick.js:68:7)
lastNodeError:
Error: Connection is closed.
at close (/app/node_modules/ioredis/built/redis/event_handler.js:179:25)
at TLSSocket.<anonymous> (/app/node_modules/ioredis/built/redis/event_handler.js:150:20)
at Object.onceWrapper (events.js:277:13)
at TLSSocket.emit (events.js:194:15)
at _handle.close (net.js:600:12)
at TCP.done (_tls_wrap.js:388:7) }
When I connect using a non-cluster Redis connection:
new Redis(6380, 'host.redis.cache.windows.net', { password: 'some-secret' });
The error I get is:
2020-06-04T15:04:08.609Z ioredis:redis status[10.211.x.x:6380]: connecting -> connect
2020-06-04T15:04:08.614Z ioredis:redis write command[10.211.x.x:6380]: 0 -> auth([ 'some-secret' ])
2020-06-04T15:04:08.616Z ioredis:redis write command[10.211.x.x:6380]: 0 -> info([])
2020-06-04T15:05:16.114Z ioredis:connection error: Error: read ECONNRESET
2020-06-04T15:05:16.115Z [auth-middleware] Redis error { Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:111:27) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
As you can see, it is using TLS on port 6380. Azure provides me with one host+port combination and two different access keys (primary/secondary), which I find weird; which access key should I use? Also, I'm not sure if I should be connecting in cluster mode, but I'd prefer to in order to gain the benefits of clustering. When I do, it appears it tries to find the slots at 127.0.0.1:6380, which is probably not correct.
In Azure's quickstart they connect using node_redis with:
var redis = require("redis");

// Add your cache name and access key.
var client = redis.createClient(6380, process.env.REDISCACHEHOSTNAME, {
  auth_pass: process.env.REDISCACHEKEY,
  tls: { servername: process.env.REDISCACHEHOSTNAME }
});
I was hoping someone here would have come across the same issue and solved it.
Thanks!
Okay, I've managed to connect to the Azure Redis cluster using a non-TLS connection:
new Redis.Cluster(['host.redis.cache.windows.net', 6379], {
  scaleReads: 'all',
  slotsRefreshTimeout: 2000,
  redisOptions: {
    password: 'some-secret',
  },
})
For some reason connecting to 6380 with TLS enabled does not work.
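For reference, a TLS cluster connection can often be made to work by skipping DNS re-resolution of the addresses the cluster returns and setting the TLS servername explicitly. This is a sketch based on ioredis's documented dnsLookup and tls options, not something verified against Azure:

const Redis = require('ioredis');

const host = 'host.redis.cache.windows.net';

const client = new Redis.Cluster([{ host, port: 6380 }], {
  scaleReads: 'all',
  slotsRefreshTimeout: 2000,
  // Pass the cluster-provided hostname through unresolved so the
  // TLS certificate check still matches when connecting to other nodes.
  dnsLookup: (address, callback) => callback(null, address),
  redisOptions: {
    password: 'some-secret',
    tls: { servername: host },
  },
});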

Error! MongoNetworkError: failed to connect to server - after connect to another wifi network

I created an entire working application with a backend in Node.js and MongoDB (MEAN stack). Everything is fine, but when I connect to another Wi-Fi network there is an error. Please help me.
If you need more code from the app, let me know.
Error! MongoNetworkError: failed to connect to server [eventsdb-shard-00-00-ydx5k.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to eventsdb-shard-00-00-ydx5k.mongodb.net:27017 closed
at TLSSocket.<anonymous>
at Object.onceWrapper (events.js:300:26)
at TLSSocket.emit (events.js:210:5)
at net.js:659:12
at TCP.done (_tls_wrap.js:481:7) {
name: 'MongoNetworkError',
[Symbol(mongoErrorContextSymbol)]: {}
}]
...\App\server\node_modules\mongodb\lib\core\connection\connection.js:372:9 :
function closeHandler(conn) {
  return function(hadError) {
    if (connectionAccounting) deleteConnection(conn.id);
    if (conn.logger.isDebug()) {
      conn.logger.debug(`connection ${conn.id} with for [${conn.address}] closed`);
    }
    if (!hadError) {
      conn.emit(
        'close',
        new MongoNetworkError(`connection ${conn.id} to ${conn.address} closed`), // <------ 372
        conn
      );
    }
  };
}
I solved my problem. On the MongoDB Atlas site, where I have my database, I went to "Network Access" -> "Add IP Address", added my new IP address, and the error went away. Switching Wi-Fi networks changes your public IP address, so the new one was not yet on the cluster's access list.
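Because this failure mode only shows up after the public IP changes, it can be worth catching it explicitly. A minimal sketch using the Node MongoDB driver, with a placeholder URI:

const { MongoClient } = require('mongodb');

// Placeholder URI; substitute your own Atlas connection string.
const uri = 'mongodb+srv://user:pass@eventsdb-ydx5k.mongodb.net/test?retryWrites=true';

async function connect() {
  const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
  try {
    await client.connect();
    console.log('connected');
    return client;
  } catch (err) {
    if (err.name === 'MongoNetworkError') {
      // Atlas closed the connection; this commonly means the current
      // public IP is missing from the cluster's Network Access list.
      console.error('Network error - check the Atlas "Network Access" IP whitelist');
    }
    throw err;
  }
}

connect().catch(console.error);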

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project is taken with inspiration from Patrick Triest : https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modification in order to connect kibana, use a newer ES image and the new elasticsearch node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
cluster_name: 'es-docker-cluster',
status: 'yellow',
timed_out: false,
number_of_nodes: 1,
number_of_data_nodes: 1,
active_primary_shards: 7,
active_shards: 7,
relocating_shards: 0,
initializing_shards: 0,
unassigned_shards: 3,
delayed_unassigned_shards: 0,
number_of_pending_tasks: 0,
number_of_in_flight_fetch: 0,
task_max_waiting_in_queue_millis: 0,
active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
name: 'ConnectionError',
meta: {
body: null,
statusCode: null,
headers: null,
warnings: null,
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 3,
aborted: false
}
}
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response; there is only one console.log of "Connecting to Elasticsearch".
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
to the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
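One related note: depends_on only orders container startup; it does not wait for Elasticsearch to be ready to serve requests. Here is a sketch of the connection check with a retry delay (the two-second interval is an arbitrary choice), built on the same env-based URL:

const { Client } = require("@elastic/elasticsearch");

const esHost = process.env.ES_HOST || "localhost";
const client = new Client({ node: `http://${esHost}:9200` });

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForElasticsearch() {
  for (;;) {
    try {
      const health = await client.cluster.health({});
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES not ready yet, retrying in 2s:", err.message);
      await sleep(2000);
    }
  }
}

waitForElasticsearch();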

Error while sending query request from client : No peer available to query

I am getting the following error while sending a query request from my client:
FabricError: No peers available to query. Errors: ["Failed to connect before the deadline
URL:grpcs://localhost:12051","Failed to connect before the deadline
URL:grpcs://localhost:11051"].
Following is the relevant part of my connection-org3.json connection profile file:
"organizations": {
"Org3": {
"mspid": "Org3MSP",
"peers": [
"peer0.org3.bc4scm.de",
"peer1.org3.bc4scm.de"
],
"certificateAuthorities": [
"ca.org3.bc4scm.de"
]
}
},
"peers": {
"peer0.org3.bc4scm.de": {
"url": "grpcs://localhost:11051",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/org3.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"grpcOptions": {
"ssl-target-name-override": "peer0.org3.bc4scm.de"
}
},
"peer1.org3.bc4scm.de": {
"url": "grpcs://localhost:12051",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/supplier.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"grpcOptions": {
"ssl-target-name-override": "peer1.org3.bc4scm.de"
}
}
},
"certificateAuthorities": {
"ca.org3.bc4scm.de": {
"url": "https://localhost:9054",
"caName": "ca-supplier",
"tlsCACerts": {
"path": "crypto-config/peerOrganizations/org3.bc4scm.de/tlsca/tlsca.org3.bc4scm.de-cert.pem"
},
"httpOptions": {
"verify": false
}
}
}
And following is part of my docker-compose file:
peer0.org3.bc4scm.de:
  container_name: peer0.org3.bc4scm.de
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer0.org3.bc4scm.de
    - CORE_PEER_ADDRESS=peer0.org3.bc4scm.de:11051
    - CORE_PEER_LISTENADDRESS=0.0.0.0:11051
    - CORE_PEER_CHAINCODEADDRESS=peer0.org3.bc4scm.de:11052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:12051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:11051
    - CORE_PEER_LOCALMSPID=Org3MSP
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer0.org3.bc4scm.de/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer0.org3.bc4scm.de/tls:/etc/hyperledger/fabric/tls
    - peer0.org3.bc4scm.de:/var/hyperledger/production
  ports:
    - 11051:11051
peer1.org3.bc4scm.de:
  container_name: peer1.org3.bc4scm.de
  extends:
    file: peer-base.yaml
    service: peer-base
  environment:
    - CORE_PEER_ID=peer1.org3.bc4scm.de
    - CORE_PEER_ADDRESS=peer1.org3.bc4scm.de:12051
    - CORE_PEER_LISTENADDRESS=0.0.0.0:12051
    - CORE_PEER_CHAINCODEADDRESS=peer1.org3.bc4scm.de:12052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:12052
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:11051
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:12051
    - CORE_PEER_LOCALMSPID=Org3MSP
  volumes:
    - /var/run/:/host/var/run/
    - ../crypto-config/peerOrganizations/org3.bc4scm.de/peers/peer1.org3.bc4scm.de/msp:/etc/hyperledger/fabric/msp
    - ../crypto-config/peerOrganizations/supplier.bc4scm.de/peers/peer1.org3.bc4scm.de/tls:/etc/hyperledger/fabric/tls
    - peer1.org3.bc4scm.de:/var/hyperledger/production
  ports:
    - 12051:12051
I got this code from the Fabcar sample and tried to query from a client in Org3 instead of Org1. I created an admin user and then created a user in this organization successfully. According to my observations, the error comes from the execution of the following line:
const result = await contract.evaluateTransaction('queryAllProducts','123');
What is the possible reason for this issue? Appreciate your insights on this.
Updates:
I checked the open ports in peer0.org3.bc4scm.de:
root@e52992a76c3d:/opt/gopath/src/github.com/hyperledger/fabric/peer# netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:9443 0.0.0.0:* LISTEN 1/peer
tcp 0 0 127.0.0.11:46353 0.0.0.0:* LISTEN -
tcp6 0 0 :::11051 :::* LISTEN 1/peer
tcp6 0 0 :::6060 :::* LISTEN 1/peer
tcp6 0 0 :::11052 :::* LISTEN 1/peer
Here I can see that ports 11051 and 11052 are open and listening. Also, there is a container for the installed chaincode:
cd0b165e5186 dev-peer0.org3.bc4scm.de-scmlogic-1.0-9c7e776aa8a752e530f79d0b456f1bda28aac3f5db0af734be2f315d8d1a4f53 "/bin/sh -c 'cd /usr…" 48 seconds ago Up 47 seconds dev-peer0.org3.bc4scm.de-scmlogic-1.0
When I look at the logs of that peer (peer0.org3), I can see the following error logged continuously. It is complaining about the connection to org1:
2019-07-06 10:26:52.278 UTC [gossip.discovery] expireDeadMembers -> WARN 164 Exiting
2019-07-06 10:26:56.381 UTC [gossip.comm] func1 -> WARN 165 peer1.org1.bc4scm.de:8051, PKIid:42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a9d0441d772925882d4457a7 isn't responsive: EOF
2019-07-06 10:26:56.381 UTC [gossip.discovery] expireDeadMembers -> WARN 166 Entering [42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a9d0441d772925882d4457a7]
2019-07-06 10:26:56.381 UTC [gossip.discovery] expireDeadMembers -> WARN 167 Closing connection to Endpoint: peer1.org1.bc4scm.de:8051, InternalEndpoint: , PKI-ID: 42214b7584f3fabcdb84e5770c62e4cf0f7c00b2a
You could check whether the peer is accessible using a browser (e.g. Firefox) by requesting localhost:11051. If you see a response, your peer is accessible; if not, the port is not open, so go to the docker file, open the port, and bring the peer up again using docker-compose. Do the same for every peer you want to access.
You can also check the peer logs with:
docker logs --follow peer0.org3.bc4scm.de
Update:
You could check CORE_PEER_GOSSIP_BOOTSTRAP and CORE_PEER_GOSSIP_EXTERNALENDPOINT for both peers:
CORE_PEER_GOSSIP_BOOTSTRAP=<a list of peer endpoints within the peer's org>
CORE_PEER_GOSSIP_EXTERNALENDPOINT=<the peer endpoint, as known outside the org>
For peer0.org3.bc4scm.de:
CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org3.bc4scm.de:12051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org3.bc4scm.de:11051
For peer1.org3.bc4scm.de:
CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org3.bc4scm.de:11051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org3.bc4scm.de:12051
Set the ports in your peer definitions accordingly and bring your docker file up again.
This could be due to multiple reasons:
1. Your peers are not accessible, so first check whether these ports are open.
2. Confirm whether the chaincode is installed on these peers.
3. If neither of those is the case, check the logs inside the docker containers of the chaincode and the peers, for which you can use:
docker exec -it [container-name] bash
Do tell me if you find something there and can't resolve it.
I had this same problem and realized the issue was that I had set the "asLocalhost" property to false and was trying to access peers at http://localhost/. Below is the working line with the property set correctly. (I pulled it from an example using fabcar, which was great otherwise.)
await gateway.connect(ccpPath, { wallet, identity: 'user1', discovery: { enabled: true, asLocalhost: true } });
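For context, here is a minimal end-to-end version of that connection, as a sketch against the fabric-network 1.4 API; the wallet path, identity name, and channel name mychannel are assumptions, while scmlogic is the chaincode name visible in the container listing above:

const { FileSystemWallet, Gateway } = require('fabric-network');
const path = require('path');

async function main() {
  // Assumed locations and names; adjust to your own network artifacts.
  const ccpPath = path.resolve(__dirname, 'connection-org3.json');
  const wallet = new FileSystemWallet(path.join(__dirname, 'wallet'));

  const gateway = new Gateway();
  await gateway.connect(ccpPath, {
    wallet,
    identity: 'user1',
    // asLocalhost: true rewrites discovered peer addresses to
    // localhost:<mapped-port>, matching the port mappings above.
    discovery: { enabled: true, asLocalhost: true },
  });

  const network = await gateway.getNetwork('mychannel'); // assumed channel name
  const contract = network.getContract('scmlogic');      // chaincode from the container listing
  const result = await contract.evaluateTransaction('queryAllProducts', '123');
  console.log(result.toString());

  gateway.disconnect();
}

main().catch(console.error);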
