can't connect to kafka container in the local network - node.js

I am running a ZooKeeper and a Kafka instance from this docker-compose file on my Ubuntu 18.04 machine:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test-topic:5:2"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Both containers are up, as docker ps shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3d84a6b39f7 wurstmeister/kafka "start-kafka.sh" 3 minutes ago Up 3 minutes 0.0.0.0:49157->9092/tcp desktop_kafka_1
b2012f08b3f9 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 5 hours ago Up 3 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp desktop_zookeeper_1
However, the Kafka client fails to connect to the broker:
const { Kafka, logLevel, CompressionCodecs, CompressionTypes } = require('kafkajs');

const kafka = new Kafka({
  logLevel: logLevel.DEBUG,
  brokers: ['localhost:9092'], // tried on ['192.168.1.6:9092']
  clientId: 'example-producer',
})

const topic = 'topic-test'
const producer = kafka.producer()

const getRandomNumber = () => Math.round(Math.random() * 1000)
const createMessage = num => ({
  key: `key-${num}`,
  value: `value-${num}-${new Date().toISOString()}`,
})

const sendMessage = () => {
  return producer
    .send({
      topic,
      compression: CompressionTypes.GZIP,
      messages: Array(getRandomNumber())
        .fill()
        .map(_ => createMessage(getRandomNumber())),
    })
    .then(console.log)
    .catch(e => console.error(`[example/producer] ${e.message}`, e))
}

const run = async () => {
  await producer.connect()
  setInterval(sendMessage, 3000)
}

run().catch(e => console.error(`[example/producer] ${e.message}`, e))
The code outputs:
[example/producer] Connection error: connect ECONNREFUSED 127.0.0.1:9092 KafkaJSNonRetriableError
Caused by: KafkaJSConnectionError: Connection error: connect ECONNREFUSED 127.0.0.1:9092
    at Socket.onError (/home/xsz/Desktop/node_modules/kafkajs/src/network/connection.js:152:23)
    at Socket.emit (events.js:314:20)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  name: 'KafkaJSNumberOfRetriesExceeded',
  retriable: false,
  helpUrl: undefined,
  originalError: KafkaJSConnectionError: Connection error: connect ECONNREFUSED 127.0.0.1:9092
      at Socket.onError (/home/xsz/Desktop/node_modules/kafkajs/src/network/connection.js:152:23)
      at Socket.emit (events.js:314:20)
      at emitErrorNT (internal/streams/destroy.js:92:8)
      at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
      at processTicksAndRejections (internal/process/task_queues.js:84:21) {
    retriable: true,
    helpUrl: undefined,
    broker: 'localhost:9092',
    code: 'ECONNREFUSED'
  },
  retryCount: 5,
  retryTime: 10304
}
In the docker-compose.yaml, I configured KAFKA_ADVERTISED_HOST_NAME as localhost and as 192.168.1.6 (the host machine's local IP address); both show the same error as above.
Note: the ip addr show command outputs:
1: lo: <LOOPBACK,UP,LOWER_UP>
    inet 127.0.0.1/8 scope host lo
3: wlx08570033e6c1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
    inet 192.168.1.6/24 brd 192.168.1.255 scope global noprefixroute wlx08570033e6c1
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
5: br-c66cb3672872: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-c66cb3672872
22: br-521b1eb41768: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-521b1eb41768
Latest try: I modified the docker-compose.yaml to

KAFKA_ADVERTISED_HOST_NAME: 172.17.0.1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

and ran the code on the host machine:
const kafka = new Kafka({
  logLevel: logLevel.DEBUG,
  brokers: ['172.17.0.1:9092'], // tried on ['192.168.1.6:9092']
  clientId: 'example-producer',
})
but I am still facing the same issue. What is wrong?

Some changes need to be made. First, change the connection host to localhost:
const kafka = new Kafka({
  logLevel: logLevel.DEBUG,
  brokers: ['localhost:9092'],
  clientId: 'example-producer',
})
Then, inside the docker-compose file, change the port mapping and add a links entry:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test-topic:5:2"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Last, you only use one broker in your compose file. Looking at your configuration with Kafka Tool 2.0, there is only one partition for "topic-test", which is why you see the error below; use more brokers to avoid it:

{"level":"DEBUG","timestamp":"2020-12-15T07:54:05.315Z","logger":"kafkajs","message":"[Connection] Request Metadata(key: 3, version: 6)","broker":"localhost:9092","clientId":"example-producer","correlationId":25,"expectResponse":true,"size":47}

There is no listener on the leader broker that matches the listener on which metadata request was processed KafkaJSNonRetriableError
Caused by: KafkaJSProtocolError: There is no listener on the leader broker that matches the listener on which metadata request was processed.
Tested on Linux Mint 20. Output:
{"level":"DEBUG","timestamp":"2020-12-15T06:55:59.662Z","logger":"kafkajs","message":"[Connection] Response Produce(key: 0, version: 7)","broker":"localhost:9092","clientId":"example-producer","correlationId":46,"size":58,"data":{"topics":[{"topicName":"topic-test","partitions":[{"partition":0,"errorCode":0,"baseOffset":"18379","logAppendTime":"-1","logStartOffset":"0"}]}],"throttleTime":0,"clientSideThrottleTime":0}}
[
  {
    topicName: 'topic-test',
    partition: 0,
    errorCode: 0,
    baseOffset: '18379',
    logAppendTime: '-1',
    logStartOffset: '0'
  }
]
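Not part of the original answer, but if the "no listener on the leader broker" error persists, the wurstmeister images also support declaring separate listeners for in-network and host traffic. A sketch following the image's connectivity documentation (the INSIDE/OUTSIDE labels are arbitrary names):

version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      # containers use INSIDE (kafka:9094); the host uses OUTSIDE (localhost:9092)
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9094,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9094,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

With this layout, the advertised address always matches the listener the client actually connected through, which is exactly what that error complains about.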

You haven't mapped the port of your Kafka instance correctly in your docker-compose file. You should change
ports:
  - "9092"

to

ports:
  - "9092:9092"
because right now your container gets assigned an arbitrary host port:

0.0.0.0:49157->9092/tcp
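Once the mapping is "9092:9092", a quick way to confirm connectivity from the host is a kafkajs admin call (a minimal sketch, not part of the original answer):

const { Kafka } = require('kafkajs');

const admin = new Kafka({ clientId: 'connectivity-check', brokers: ['localhost:9092'] }).admin();

admin
  .connect()
  .then(() => admin.listTopics()) // succeeds only if the advertised listener is reachable
  .then(topics => console.log('topics:', topics))
  .catch(console.error)
  .finally(() => admin.disconnect());

If listTopics() resolves, both the port mapping and the advertised listener are correct.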

Related

ioredis connection keeps resetting when connecting to local redis cluster from docker container

I have a docker compose containerized client/server node app that is failing to create a stable connection to a redis cluster I have running on my local environment. The redis cluster has 6 nodes (3 master, 3 replica configuration) running on my local machine. Every time I start my app and attempt to connect to redis, the connect event is spammed and I get the following error on my client:
Proxy error: Could not proxy request /check-login from localhost:3000 to http://server.
See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNRESET)
I have made sure to configure the redis cluster with protected-mode set to no and bind set to 0.0.0.0 to allow remote access. I have confirmed that I can reach the cluster locally by pinging one of the cluster nodes with redis-cli -h 127.0.0.1 -p 30001:
127.0.0.1:30001> ping
PONG
127.0.0.1:30001> exit
I am setting my REDIS_HOSTS environment variable to "host.docker.internal:30001,host.docker.internal:30005,host.docker.internal:30006". This should allow me to connect to my redis cluster running at 127.0.0.1 on the host machine. My node application code:
const Redis = require("ioredis");

const hosts = process.env.REDIS_HOSTS.split(",").map((connection) => {
  const [host, port] = connection.split(":");
  return {
    host,
    port,
  };
});

const client = new Redis.Cluster(hosts, {
  enableAutoPipelining: true,
  slotsRefreshTimeout: 100000,
});

client.on("error", (error) => {
  console.log("redis connection failed: ", error);
});

client.on("connect", () => {
  console.log("redis connection established");
});

module.exports = client;
My ioredis logs:
2022-02-11T23:52:09.970Z ioredis:cluster status: [empty] -> connecting
info: Listening on port 80
2022-02-11T23:52:15.449Z ioredis:cluster resolved hostname host.docker.internal to IP 192.168.65.2
2022-02-11T23:52:15.476Z ioredis:cluster:connectionPool Reset with [
{ host: '192.168.65.2', port: 30001 },
{ host: '192.168.65.2', port: 30002 },
{ host: '192.168.65.2', port: 30003 }
]
2022-02-11T23:52:15.482Z ioredis:cluster:connectionPool Connecting to 192.168.65.2:30001 as master
2022-02-11T23:52:15.504Z ioredis:redis status[192.168.65.2:30001]: [empty] -> wait
2022-02-11T23:52:15.511Z ioredis:cluster:connectionPool Connecting to 192.168.65.2:30002 as master
2022-02-11T23:52:15.517Z ioredis:redis status[192.168.65.2:30002]: [empty] -> wait
2022-02-11T23:52:15.519Z ioredis:cluster:connectionPool Connecting to 192.168.65.2:30003 as master
2022-02-11T23:52:15.521Z ioredis:redis status[192.168.65.2:30003]: [empty] -> wait
2022-02-11T23:52:15.530Z ioredis:cluster getting slot cache from 192.168.65.2:30002
2022-02-11T23:52:15.541Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: [empty] -> wait
2022-02-11T23:52:15.590Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: wait -> connecting
2022-02-11T23:52:15.603Z ioredis:redis queue command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> cluster([ 'slots' ])
2022-02-11T23:52:15.614Z ioredis:cluster:subscriber selected a subscriber 192.168.65.2:30001
2022-02-11T23:52:15.621Z ioredis:redis status[192.168.65.2:30001 (ioredis-cluster(subscriber))]: [empty] -> wait
2022-02-11T23:52:15.622Z ioredis:cluster:subscriber started
2022-02-11T23:52:15.734Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: connecting -> connect
2022-02-11T23:52:15.737Z ioredis:redis status[192.168.65.2:30002 (ioredis-cluster(refresher))]: connect -> ready
2022-02-11T23:52:15.739Z ioredis:connection set the connection name [ioredis-cluster(refresher)]
2022-02-11T23:52:15.742Z ioredis:redis write command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> client([ 'setname', 'ioredis-cluster(refresher)' ])
2022-02-11T23:52:15.749Z ioredis:connection send 1 commands in offline queue
2022-02-11T23:52:15.750Z ioredis:redis write command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> cluster([ 'slots' ])
2022-02-11T23:52:15.781Z ioredis:cluster cluster slots result count: 3
2022-02-11T23:52:15.783Z ioredis:cluster cluster slots result [0]: slots 0~5460 served by [ '127.0.0.1:30001', '127.0.0.1:30004' ]
2022-02-11T23:52:15.788Z ioredis:cluster cluster slots result [1]: slots 5461~10922 served by [ '127.0.0.1:30002', '127.0.0.1:30005' ]
2022-02-11T23:52:15.792Z ioredis:cluster cluster slots result [2]: slots 10923~16383 served by [ '127.0.0.1:30003', '127.0.0.1:30006' ]
2022-02-11T23:52:15.849Z ioredis:cluster:connectionPool Reset with [
{ host: '127.0.0.1', port: 30001, readOnly: false },
{ host: '127.0.0.1', port: 30004, readOnly: true },
{ host: '127.0.0.1', port: 30002, readOnly: false },
{ host: '127.0.0.1', port: 30005, readOnly: true },
{ host: '127.0.0.1', port: 30003, readOnly: false },
{ host: '127.0.0.1', port: 30006, readOnly: true }
]
2022-02-11T23:52:15.850Z ioredis:cluster:connectionPool Disconnect 192.168.65.2:30001 because the node does not hold any slot
2022-02-11T23:52:15.851Z ioredis:redis status[192.168.65.2:30001]: wait -> close
2022-02-11T23:52:15.851Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.852Z ioredis:redis status[192.168.65.2:30001]: close -> end
2022-02-11T23:52:15.857Z ioredis:cluster:connectionPool Remove 192.168.65.2:30001 from the pool
2022-02-11T23:52:15.858Z ioredis:cluster:connectionPool Disconnect 192.168.65.2:30002 because the node does not hold any slot
2022-02-11T23:52:15.858Z ioredis:redis status[192.168.65.2:30002]: wait -> close
2022-02-11T23:52:15.859Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.859Z ioredis:redis status[192.168.65.2:30002]: close -> end
2022-02-11T23:52:15.861Z ioredis:cluster:connectionPool Remove 192.168.65.2:30002 from the pool
2022-02-11T23:52:15.861Z ioredis:cluster:connectionPool Disconnect 192.168.65.2:30003 because the node does not hold any slot
2022-02-11T23:52:15.861Z ioredis:redis status[192.168.65.2:30003]: wait -> close
2022-02-11T23:52:15.865Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.866Z ioredis:redis status[192.168.65.2:30003]: close -> end
2022-02-11T23:52:15.866Z ioredis:cluster:connectionPool Remove 192.168.65.2:30003 from the pool
2022-02-11T23:52:15.867Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30001 as master
2022-02-11T23:52:15.869Z ioredis:redis status[127.0.0.1:30001]: [empty] -> wait
2022-02-11T23:52:15.871Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30004 as slave
2022-02-11T23:52:15.873Z ioredis:redis status[127.0.0.1:30004]: [empty] -> wait
2022-02-11T23:52:15.874Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30002 as master
2022-02-11T23:52:15.877Z ioredis:redis status[127.0.0.1:30002]: [empty] -> wait
2022-02-11T23:52:15.877Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30005 as slave
2022-02-11T23:52:15.882Z ioredis:redis status[127.0.0.1:30005]: [empty] -> wait
2022-02-11T23:52:15.883Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30003 as master
2022-02-11T23:52:15.885Z ioredis:redis status[127.0.0.1:30003]: [empty] -> wait
2022-02-11T23:52:15.886Z ioredis:cluster:connectionPool Connecting to 127.0.0.1:30006 as slave
2022-02-11T23:52:15.887Z ioredis:redis status[127.0.0.1:30006]: [empty] -> wait
2022-02-11T23:52:15.893Z ioredis:cluster status: connecting -> connect
2022-02-11T23:52:15.904Z ioredis:redis status[127.0.0.1:30002]: wait -> connecting
2022-02-11T23:52:15.906Z ioredis:redis queue command[127.0.0.1:30002]: 0 -> cluster([ 'info' ])
2022-02-11T23:52:15.916Z ioredis:cluster:subscriber subscriber has left, selecting a new one...
2022-02-11T23:52:15.917Z ioredis:redis status[192.168.65.2:30001 (ioredis-cluster(subscriber))]: wait -> close
2022-02-11T23:52:15.918Z ioredis:connection skip reconnecting since the connection is manually closed.
2022-02-11T23:52:15.918Z ioredis:redis status[192.168.65.2:30001 (ioredis-cluster(subscriber))]: close -> end
2022-02-11T23:52:15.919Z ioredis:cluster:subscriber selected a subscriber 127.0.0.1:30004
2022-02-11T23:52:15.921Z ioredis:redis status[127.0.0.1:30004 (ioredis-cluster(subscriber))]: [empty] -> wait
2022-02-11T23:52:16.000Z ioredis:connection error: { Error: connect ECONNREFUSED 127.0.0.1:30002
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 30002 }
2022-02-11T23:52:16.030Z ioredis:redis status[127.0.0.1:30002]: connecting -> close
2022-02-11T23:52:16.031Z ioredis:connection skip reconnecting because `retryStrategy` is not a function
2022-02-11T23:52:16.032Z ioredis:redis status[127.0.0.1:30002]: close -> end
2022-02-11T23:52:16.034Z ioredis:cluster:connectionPool Remove 127.0.0.1:30002 from the pool
2022-02-11T23:52:16.036Z ioredis:cluster Ready check failed (Error: Connection is closed.
at close (/usr/src/app/node_modules/ioredis/built/redis/event_handler.js:184:25)
at Socket.<anonymous> (/usr/src/app/node_modules/ioredis/built/redis/event_handler.js:155:20)
at Object.onceWrapper (events.js:520:26)
at Socket.emit (events.js:400:28)
at Socket.emit (domain.js:470:12)
at TCP.<anonymous> (net.js:675:12)). Reconnecting...
2022-02-11T23:52:16.042Z ioredis:cluster status: connect -> disconnecting
...
The clue to the solution was found in the following log snippet:
2022-02-11T23:52:15.750Z ioredis:redis write command[192.168.65.2:30002 (ioredis-cluster(refresher))]: 0 -> cluster([ 'slots' ])
2022-02-11T23:52:15.781Z ioredis:cluster cluster slots result count: 3
2022-02-11T23:52:15.783Z ioredis:cluster cluster slots result [0]: slots 0~5460 served by [ '127.0.0.1:30001', '127.0.0.1:30004' ]
2022-02-11T23:52:15.788Z ioredis:cluster cluster slots result [1]: slots 5461~10922 served by [ '127.0.0.1:30002', '127.0.0.1:30005' ]
2022-02-11T23:52:15.792Z ioredis:cluster cluster slots result [2]: slots 10923~16383 served by [ '127.0.0.1:30003', '127.0.0.1:30006' ]
The internal redis cluster network was still communicating between nodes on network address 127.0.0.1, creating the connect ECONNREFUSED errors when the ioredis client attempted to use those network mappings to establish the cluster connection.
I had to use the natMap option in the ioredis client to remap the internal cluster network connections to the network address of the docker container:
let natMap = {};

const localHost = "127.0.0.1";
const hosts = process.env.REDIS_HOSTS.split(",").map((connection) => {
  const [host, port] = connection.split(":");
  // assign NAT host address mappings
  // when accessing the local redis cluster from the containerized network
  natMap[`${localHost}:${port}`] = { host, port };
  return {
    host,
    port,
  };
});

// natMap output:
// {
//   "127.0.0.1:30001": { host: "host.docker.internal", port: 30001 },
//   "127.0.0.1:30005": { host: "host.docker.internal", port: 30005 },
//   "127.0.0.1:30006": { host: "host.docker.internal", port: 30006 },
// }

// create the redis cluster client
const client = new Redis.Cluster(hosts, {
  enableAutoPipelining: true,
  slotsRefreshTimeout: 100000,
  natMap,
});
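As a quick sanity check (a hypothetical addition, not from the original answer), a round trip through the remapped cluster confirms the natMap is being applied:

// assumes the `client` created above with the natMap option
async function smokeTest() {
  await client.set("healthcheck", "ok"); // routed to the right master via the NAT mapping
  console.log(await client.get("healthcheck")); // "ok"
}

smokeTest().catch(console.error);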

Mongoose trying to connect to wrong IP and showing ECONNREFUSED during dockerization

My docker-compose.yml:
version: '3.5' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: frontend # specify the directory of the Dockerfile
    ports:
      - "80:80" # specify port mapping
  express: # name of the second service
    build: backend # specify the directory of the Dockerfile
    restart: always
    ports:
      - "100:3000" # specify port mapping
    links:
      - mongo # link this service to the database service
  mongo: # name of the third service
    image: mongo # specify image to build container from
    container_name: mongo
    ports:
      - "27017:27017" # specify port forwarding
When I issue docker-compose up express, the server shows an error on this line:

mongoose.connect('mongodb://mongo:27017/mydb', {useNewUrlParser: true})

The error is:
express_1 | { MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.18.0.2:27017]
express_1 | at Pool.<anonymous> (/usr/src/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
express_1 | at Pool.emit (events.js:198:13)
express_1 | at createConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:561:14)
express_1 | at connect (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:994:11)
express_1 | at makeConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:31:7)
express_1 | at Socket.err (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
express_1 | at Object.onceWrapper (events.js:286:20)
express_1 | at Socket.emit (events.js:198:13)
express_1 | at emitErrorNT (internal/streams/destroy.js:91:8)
express_1 | at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
express_1 | at process._tickCallback (internal/process/next_tick.js:63:19)
express_1 | name: 'MongoNetworkError',
express_1 | [Symbol(mongoErrorContextSymbol)]: {} }
Why is it trying to connect to 172.18.0.2:27017 instead of localhost:27017 or the docker-machine IP (192.168.99.100:27017)? Thanks in advance; I have been scratching my head for the last 2 days.

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the container up, I am able to get a response from ES with curl -XGET "localhost:9200" ("You Know, for Search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("#elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });
/*Check the elasticsearch connection */
async function health() {
let connected = false;
while (!connected) {
console.log("Connecting to Elasticsearch");
try {
const health = await client.cluster.health({});
connected = true;
console.log(health.body);
return health;
} catch (err) {
console.log("ES Connection Failed", err);
}
}
}
health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response; only one console.log of "Connecting to Elasticsearch" appears.
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
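Putting that advice together, a trimmed docker-compose.yml might look like this (a sketch keeping only the networking-relevant parts; the other settings from the original file are omitted):

version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - ES_HOST=elasticsearch   # resolves to the elasticsearch container by service name
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"             # published so host-side tools can still reach it

Because both services sit on the default compose network, the client code above reaches Elasticsearch at http://elasticsearch:9200 inside Docker and http://localhost:9200 outside it.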

How to pass the SSL config like truststore location and password?

Right now I have configured my Kafka server with a self-signed certificate.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - 2181:2181
    hostname: zookeeper
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    command: [start-kafka.sh]
    ports:
      - 9093:9093
    hostname: kafka
    environment:
      KAFKA_LISTENERS: SSL://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: SSL://alfrescokafka.leafycode.com:9093
      KAFKA_SSL_KEYSTORE_LOCATION: /home/amur42s/ssl/kafka.server.keystore.jks
      KAFKA_SSL_KEYSTORE_PASSWORD: oE4KJ9FVMjMXGpgpp0qwLzUDy0uz
      KAFKA_SSL_KEY_PASSWORD: oE4KJ9FVMjMXGpgpp0qwLzUDy0uz
      KAFKA_SSL_TRUSTSTORE_LOCATION: /home/amur42s/ssl/kafka.server.truststore.jks
      KAFKA_SSL_TRUSTSTORE_PASSWORD: 123
      KAFKA_ADVERTISED_HOST_NAME: 116.203.65.132 # docker-machine ip
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_CREATE_TOPICS: ""
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: 'SSL'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/ssl:/home/ssl
    depends_on:
      - "zookeeper"
Unfortunately, I'm unable to connect to it using kafka-node (TimeoutError: Request timed out after 30000ms). It looks like I need to set ssl.truststore.location and ssl.truststore.password. How can I do this?
export const kafkaClientOptions = {
  kafkaHost: process.env.KAFKA_PRODUCER_HOST,
  ssl: true,
  sslOptions: {
    rejectUnauthorized: false
  }
};

const client = new kafka.KafkaClient(kafkaClientOptions);
const Producer = kafka.Producer;
const producer = new Producer(client);
You shouldn't get a timeout error because of an SSL misconfiguration, but here are the configs to set up a Kafka client with SSL:
const fs = require('fs');

// inside the KafkaClient options:
ssl: true,
sslOptions: {
  key: fs.readFileSync("path/to/key"),
  cert: fs.readFileSync("path/to/cert"),
  ca: fs.readFileSync("path/to/ca"),
  passphrase: "your_passphrase"
}
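One caveat worth adding (not from the original answer): Node's TLS layer reads PEM-encoded files, not JKS keystores, so there is no direct kafka-node equivalent of ssl.truststore.location / ssl.truststore.password; the JKS stores have to be exported to PEM first (keytool can convert a JKS to PKCS12, and openssl can extract PEM from that). A fuller sketch of the client, assuming the PEM files have already been extracted (all paths are placeholders):

const fs = require('fs');
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({
  kafkaHost: 'alfrescokafka.leafycode.com:9093',
  sslOptions: {
    // PEM files exported from the JKS keystore/truststore (placeholder paths)
    key: fs.readFileSync('/path/to/client.key'),
    cert: fs.readFileSync('/path/to/client.pem'),
    ca: [fs.readFileSync('/path/to/ca.pem')],
    passphrase: 'your_passphrase',
  },
});

const producer = new kafka.Producer(client);
producer.on('ready', () => console.log('connected over SSL'));
producer.on('error', (err) => console.error('connection failed:', err));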

Docker - SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306

I'm trying to get my Node.js application up and running using a docker container. I have no clue what might be wrong. The credentials seem to be passed correctly when I debug them with the console. Also, firing up Sequel Pro and connecting directly with the same username and password works fine.
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
The application itself is loading correctly on port 3000, but no data is retrieved from the database. I have also tried adding the environment variables directly to the docker-compose file, but this doesn't seem to work either.
My project code is hosted over here: https://github.com/pietheinstrengholt/rssmonster
The following database.js configuration is used. When I add console.log(config) the correct credentials from the .env file are displayed.
require('dotenv').load();

const Sequelize = require('sequelize');
const fs = require('fs');
const path = require('path');
const env = process.env.NODE_ENV || 'development';
const config = require(path.join(__dirname + '/../config/config.js'))[env];

if (config.use_env_variable) {
  var sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  var sequelize = new Sequelize(config.database, config.username, config.password, config);
}

module.exports = sequelize;
When I do a console.log(config) inside the database.js I get the following output:
{
  username: 'rssmonster',
  password: 'password',
  database: 'rssmonster',
  host: 'localhost',
  dialect: 'mysql'
}
Following .env:
DB_HOSTNAME=localhost
DB_PORT=3306
DB_DATABASE=rssmonster
DB_USERNAME=rssmonster
DB_PASSWORD=password
And the following docker-compose.yml:
version: '2.3'
services:
  app:
    depends_on:
      mysql:
        condition: service_healthy
    build:
      context: ./
      dockerfile: app.dockerfile
    image: rssmonster/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
      PORT: 3000
      DB_USERNAME: rssmonster
      DB_PASSWORD: password
      DB_DATABASE: rssmonster
      DB_HOSTNAME: localhost
    working_dir: /usr/local/rssmonster/server
    env_file:
      - ./server/.env
    links:
      - mysql:mysql
  mysql:
    container_name: mysqldb
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
      MYSQL_DATABASE: "rssmonster"
      MYSQL_USER: "rssmonster"
      MYSQL_PASSWORD: "password"
    ports:
      - "3307:3306"
    volumes:
      - /var/lib/mysql
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 10
volumes:
  dbdata:
Error output:
{ SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
app_1 | at Promise.tap.then.catch.err (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:128:19)
app_1 | From previous event:
app_1 | at ConnectionManager.connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:125:13)
app_1 | at sequelize.runHooks.then (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:50)
app_1 | From previous event:
app_1 | at ConnectionManager._connect (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:306:8)
app_1 | at ConnectionManager.getConnection (/usr/local/rssmonster/server/node_modules/sequelize/lib/dialects/abstract/connection-manager.js:247:46)
app_1 | at Promise.try (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:564:34)
app_1 | From previous event:
app_1 | at Promise.resolve.retryParameters (/usr/local/rssmonster/server/node_modules/sequelize/lib/sequelize.js:464:64)
app_1 | at /usr/local/rssmonster/server/node_modules/retry-as-promised/index.js:60:21
app_1 | at new Promise (<anonymous>)
Instead of localhost, point to mysql, which is the service name (DNS) that Node.js will resolve to the MySQL container:

DB_HOSTNAME: mysql

And:
{
  ...
  host: 'mysql',
  ...
}
Inside the container you should reference the other container by the name you gave it in your docker-compose.yml file. In this case you should use:

DB_HOSTNAME: mysql
After searching and digging through several Googling attempts, the culprit of the problem appeared: in this context, the database server is not on the same machine. In other words, the MySQL database server address is not localhost. So why does the database configuration above point to localhost? It turns out that if no host is defined, Sequelize connects to localhost by default; see the referenced article on the Sequelize configuration pattern.
So, to solve the problem, just modify the file with the right database configuration. The following is the corrected configuration:
const sequelize = require("sequelize");

const db = new sequelize("db_master", "db_user", "password", {
  host: "10.0.2.2",
  dialect: "mysql"
});

db.sync({});

module.exports = db;
Actually, the NodeJS application is running in a virtual server: a guest machine run in VirtualBox. The MySQL database server, on the other hand, lives outside the guest machine, on the host machine where VirtualBox is running. From the guest, the host machine's IP address is 10.0.2.2, so to connect to the MySQL database server on the host, the address to use is 10.0.2.2.
Use your connection string as:

mysql://username:password@mysql:(port_running_on_container)or(exposed_port)/db_name
Answers already exist, but to provide some further explanation: you can't use 127.0.0.1 (localhost) to access other services/containers, because each container views that address as itself. When running docker-compose, all your services are placed on the same docker network, and all services on the same docker network can reach each other by service name.
Hence, as already stated in the previous answers: in your configuration, change the db hostname from localhost to mysql.
Three things to check first:
1. make sure your service name is mysql
2. configure DB_HOST as mysql as well
3. make your backend service depend on mysql in docker-compose.yml
Here is my working code:
export const db = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    port: process.env.DB_PORT,
    host: 'mysql',
    dialect: "mysql",
    logging: false,
    pool: {
      max: 5,
      min: 0,
      acquire: 30000,
      idle: 10000
    },
  }
);
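One detail that is easy to miss with the compose file above (an added note, not from the answers): the MySQL service is published as "3307:3306", so the host and port differ depending on where the client runs. A sketch of both cases:

const { Sequelize } = require('sequelize');

// from another container on the compose network: service name + container port
const fromContainer = new Sequelize('rssmonster', 'rssmonster', 'password', {
  host: 'mysql',
  port: 3306,
  dialect: 'mysql',
});

// from the host machine (e.g. Sequel Pro): loopback + the published port
const fromHost = new Sequelize('rssmonster', 'rssmonster', 'password', {
  host: '127.0.0.1',
  port: 3307,
  dialect: 'mysql',
});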
