I'm using kafka-node to send messages between the services in my application. I'm using the wurstmeister/kafka and wurstmeister/zookeeper containers to run Kafka and Zookeeper (for simplicity's sake).
Every time I try to send() a message to the broker, I get a LeaderNotAvailable error. I used kafka-web-console to inspect the broker, and it shows the two topics that I had set up, but it shows no leader (see http://cl.ly/image/1f3g0K2S1J0F).
I can't work out why it's not working (or how to fix it) from the container logs (below), from the documentation, or from searching for this problem.
Update: sending seems to work if I destroy the containers and start them up again (rather than just restarting them with docker-compose up).
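For reference, a minimal sketch of that workaround, assuming the stack is managed with docker-compose as above:
# Recreate the containers from scratch instead of restarting them
docker-compose stop
docker-compose rm -f
docker-compose up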
zookeeper_1 | JMX enabled by default
zookeeper_1 | Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
zookeeper_1 | 2015-09-01 10:44:15,108 [myid:] - INFO [main:QuorumPeerConfig#103] - Reading configuration from: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
zookeeper_1 | 2015-09-01 10:44:15,113 [myid:] - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
zookeeper_1 | 2015-09-01 10:44:15,113 [myid:] - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 1
zookeeper_1 | 2015-09-01 10:44:15,119 [myid:] - WARN [main:QuorumPeerMain#113] - Either no config or no quorum defined in config, running in standalone mode
zookeeper_1 | 2015-09-01 10:44:15,119 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask#138] - Purge task started.
zookeeper_1 | 2015-09-01 10:44:15,133 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask#144] - Purge task completed.
zookeeper_1 | 2015-09-01 10:44:15,135 [myid:] - INFO [main:QuorumPeerConfig#103] - Reading configuration from: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
zookeeper_1 | 2015-09-01 10:44:15,135 [myid:] - INFO [main:ZooKeeperServerMain#95] - Starting server
zookeeper_1 | 2015-09-01 10:44:15,143 [myid:] - INFO [main:Environment#100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
zookeeper_1 | 2015-09-01 10:44:15,143 [myid:] - INFO [main:Environment#100] - Server environment:host.name=abd919b6a5a2
zookeeper_1 | 2015-09-01 10:44:15,144 [myid:] - INFO [main:Environment#100] - Server environment:java.version=1.7.0_65
zookeeper_1 | 2015-09-01 10:44:15,144 [myid:] - INFO [main:Environment#100] - Server environment:java.vendor=Oracle Corporation
zookeeper_1 | 2015-09-01 10:44:15,144 [myid:] - INFO [main:Environment#100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
zookeeper_1 | 2015-09-01 10:44:15,144 [myid:] - INFO [main:Environment#100] - Server environment:java.class.path=/opt/zookeeper-3.4.6/bin/../build/classes:/opt/zookeeper-3.4.6/bin/../build/lib/*.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/opt/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.6/bin/../conf:
zookeeper_1 | 2015-09-01 10:44:15,144 [myid:] - INFO [main:Environment#100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper_1 | 2015-09-01 10:44:15,144 [myid:] - INFO [main:Environment#100] - Server environment:java.io.tmpdir=/tmp
zookeeper_1 | 2015-09-01 10:44:15,146 [myid:] - INFO [main:Environment#100] - Server environment:java.compiler=<NA>
zookeeper_1 | 2015-09-01 10:44:15,146 [myid:] - INFO [main:Environment#100] - Server environment:os.name=Linux
zookeeper_1 | 2015-09-01 10:44:15,146 [myid:] - INFO [main:Environment#100] - Server environment:os.arch=amd64
zookeeper_1 | 2015-09-01 10:44:15,146 [myid:] - INFO [main:Environment#100] - Server environment:os.version=3.18.11-tinycore64
zookeeper_1 | 2015-09-01 10:44:15,147 [myid:] - INFO [main:Environment#100] - Server environment:user.name=root
zookeeper_1 | 2015-09-01 10:44:15,147 [myid:] - INFO [main:Environment#100] - Server environment:user.home=/root
zookeeper_1 | 2015-09-01 10:44:15,147 [myid:] - INFO [main:Environment#100] - Server environment:user.dir=/opt/zookeeper-3.4.6
zookeeper_1 | 2015-09-01 10:44:15,148 [myid:] - INFO [main:ZooKeeperServer#755] - tickTime set to 2000
zookeeper_1 | 2015-09-01 10:44:15,148 [myid:] - INFO [main:ZooKeeperServer#764] - minSessionTimeout set to -1
zookeeper_1 | 2015-09-01 10:44:15,148 [myid:] - INFO [main:ZooKeeperServer#773] - maxSessionTimeout set to -1
zookeeper_1 | 2015-09-01 10:44:15,161 [myid:] - INFO [main:NIOServerCnxnFactory#94] - binding to port 0.0.0.0/0.0.0.0:2181
kafka_1 | [2015-09-01 10:44:16,754] INFO Verifying properties (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,782] INFO Property advertised.host.name is overridden to 192.168.59.103 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,782] INFO Property advertised.port is overridden to 32773 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,782] INFO Property broker.id is overridden to 32773 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,782] INFO Property log.cleaner.enable is overridden to false (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,783] INFO Property log.dirs is overridden to /kafka/kafka-logs-32773 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,783] INFO Property log.retention.check.interval.ms is overridden to 300000 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,783] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,783] INFO Property log.segment.bytes is overridden to 1073741824 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,783] INFO Property num.io.threads is overridden to 8 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,783] INFO Property num.network.threads is overridden to 3 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,784] INFO Property num.partitions is overridden to 1 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,784] INFO Property num.recovery.threads.per.data.dir is overridden to 1 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,785] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,785] INFO Property socket.receive.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,785] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,785] INFO Property socket.send.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,785] WARN Property version is not valid (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,786] INFO Property zookeeper.connect is overridden to 172.17.0.10:2181 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,786] INFO Property zookeeper.connection.timeout.ms is overridden to 6000 (kafka.utils.VerifiableProperties)
kafka_1 | [2015-09-01 10:44:16,817] INFO [Kafka Server 32773], starting (kafka.server.KafkaServer)
kafka_1 | [2015-09-01 10:44:16,820] INFO [Kafka Server 32773], Connecting to zookeeper on 172.17.0.10:2181 (kafka.server.KafkaServer)
kafka_1 | [2015-09-01 10:44:16,828] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka_1 | [2015-09-01 10:44:16,837] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,837] INFO Client environment:host.name=34ff07e953a2 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,837] INFO Client environment:java.version=1.6.0_34 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,837] INFO Client environment:java.vendor=Sun Microsystems Inc. (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,837] INFO Client environment:java.home=/usr/lib/jvm/java-6-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:java.class.path=:/opt/kafka_2.10-0.8.2.1/bin/../core/build/dependant-libs-2.10*/*.jar:/opt/kafka_2.10-0.8.2.1/bin/../examples/build/libs//kafka-examples*.jar:/opt/kafka_2.10-0.8.2.1/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/opt/kafka_2.10-0.8.2.1/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/opt/kafka_2.10-0.8.2.1/bin/../clients/build/libs/kafka-clients*.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/jopt-simple-3.2.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/kafka-clients-0.8.2.1.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/kafka_2.10-0.8.2.1-javadoc.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/kafka_2.10-0.8.2.1-scaladoc.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/kafka_2.10-0.8.2.1-sources.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/kafka_2.10-0.8.2.1-test.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/kafka_2.10-0.8.2.1.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/log4j-1.2.16.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/lz4-1.2.0.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/scala-library-2.10.4.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/slf4j-api-1.7.6.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/slf4j-log4j12-1.6.1.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/snappy-java-1.1.1.6.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/zkclient-0.3.jar:/opt/kafka_2.10-0.8.2.1/bin/../libs/zookeeper-3.4.6.jar:/opt/kafka_2.10-0.8.2.1/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk-amd64/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:os.version=3.18.11-tinycore64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,838] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,839] INFO Initiating client connection, connectString=172.17.0.10:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#4b704006 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2015-09-01 10:44:16,860] INFO Opening socket connection to server 172.17.0.10/172.17.0.10:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2015-09-01 10:44:16,865] INFO Socket connection established to 172.17.0.10/172.17.0.10:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2015-09-01 10:44:16,866 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /172.17.0.12:53850
zookeeper_1 | 2015-09-01 10:44:16,872 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#868] - Client attempting to establish new session at /172.17.0.12:53850
zookeeper_1 | 2015-09-01 10:44:16,873 [myid:] - INFO [SyncThread:0:FileTxnLog#199] - Creating new log file: log.52
zookeeper_1 | 2015-09-01 10:44:16,883 [myid:] - INFO [SyncThread:0:ZooKeeperServer#617] - Established session 0x14f8881d8da0000 with negotiated timeout 6000 for client /172.17.0.12:53850
kafka_1 | [2015-09-01 10:44:16,886] INFO Session establishment complete on server 172.17.0.10/172.17.0.10:2181, sessionid = 0x14f8881d8da0000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2015-09-01 10:44:16,887] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2015-09-01 10:44:17,030] INFO Log directory '/kafka/kafka-logs-32773' not found, creating it. (kafka.log.LogManager)
kafka_1 | [2015-09-01 10:44:17,037] INFO Loading logs. (kafka.log.LogManager)
kafka_1 | [2015-09-01 10:44:17,042] INFO Logs loading complete. (kafka.log.LogManager)
kafka_1 | [2015-09-01 10:44:17,043] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2015-09-01 10:44:17,047] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2015-09-01 10:44:17,074] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2015-09-01 10:44:17,075] INFO [Socket Server on Broker 32773], Started (kafka.network.SocketServer)
kafka_1 | [2015-09-01 10:44:17,129] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
kafka_1 | [2015-09-01 10:44:17,162] INFO 32773 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
zookeeper_1 | 2015-09-01 10:44:17,344 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor#645] - Got user-level KeeperException when processing sessionid:0x14f8881d8da0000 type:delete cxid:0x27 zxid:0x55 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2015-09-01 10:44:17,358] INFO New leader is 32773 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
kafka_1 | [2015-09-01 10:44:17,360] INFO Registered broker 32773 at path /brokers/ids/32773 with address 192.168.59.103:32773. (kafka.utils.ZkUtils$)
kafka_1 | [2015-09-01 10:44:17,371] INFO [Kafka Server 32773], started (kafka.server.KafkaServer)
zookeeper_1 | 2015-09-01 10:49:01,422 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.59.3:50220
zookeeper_1 | 2015-09-01 10:49:01,426 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#822] - Connection request from old client /192.168.59.3:50220; will be dropped if server is in r-o mode
zookeeper_1 | 2015-09-01 10:49:01,426 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#868] - Client attempting to establish new session at /192.168.59.3:50220
zookeeper_1 | 2015-09-01 10:49:01,428 [myid:] - INFO [SyncThread:0:ZooKeeperServer#617] - Established session 0x14f8881d8da0001 with negotiated timeout 30000 for client /192.168.59.3:50220
zookeeper_1 | 2015-09-01 10:52:20,886 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /172.17.42.1:50639
zookeeper_1 | 2015-09-01 10:52:20,890 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#822] - Connection request from old client /172.17.42.1:50639; will be dropped if server is in r-o mode
zookeeper_1 | 2015-09-01 10:52:20,890 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#868] - Client attempting to establish new session at /172.17.42.1:50639
zookeeper_1 | 2015-09-01 10:52:20,893 [myid:] - INFO [SyncThread:0:ZooKeeperServer#617] - Established session 0x14f8881d8da0002 with negotiated timeout 6000 for client /172.17.42.1:50639
This nonsense cost me more than it should have. For anyone else who comes across this: removing the image worked for me as well. Here's some copy-pasta:
# Stop all containers
docker stop $(docker ps -a -q)
# Get the id of the container you want to remove, looks like: 8e8683455c7c
docker ps -a
# Remove it
docker rm 8e8683455c7c
# Get the id of the image you want to remove, looks like: f82f62b5876b
docker images
# Delete image
docker rmi f82f62b5876b
# Pull the slot machine lever one more time
docker-compose up
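A possible shortcut, assuming your docker-compose version supports the --rmi flag on down (it removes the stack's containers and its images in one step):
# Tear down the stack and delete the images it used, then recreate everything
docker-compose down --rmi all
docker-compose up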
Also, source: https://gist.github.com/JeffBelback/5687bb02f3618965ca8f
Related
I am using a docker-compose.yml file to spin up 3 instances of RabbitMQ on a single host. I'm running Docker on a Mac. When I run docker-compose up, I see that the Erlang cookies do not match for the instances in the cluster. Let me know if you need any other information.
version: '3'
services:
rabbitmq1:
image: rabbitmq:3.8.34-management
hostname: rabbitmq1
environment:
- RABBITMQ_ERLANG_COOKIE=12345
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
- RABBITMQ_DEFAULT_VHOST=/
rabbitmq2:
image: rabbitmq:3.8.34-management
hostname: rabbitmq2
depends_on:
- rabbitmq1
environment:
- RABBITMQ_ERLANG_COOKIE=12345
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
- RABBITMQ_DEFAULT_VHOST=/
volumes:
- ./cluster-entrypoint.sh:/usr/local/bin/cluster-entrypoint.sh
entrypoint: /usr/local/bin/cluster-entrypoint.sh
rabbitmq3:
image: rabbitmq:3.8.34-management
hostname: rabbitmq3
depends_on:
- rabbitmq1
environment:
- RABBITMQ_ERLANG_COOKIE=12345
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
- RABBITMQ_DEFAULT_VHOST=/
volumes:
- ./cluster-entrypoint.sh:/usr/local/bin/cluster-entrypoint.sh
entrypoint: /usr/local/bin/cluster-entrypoint.sh
haproxy:
image: haproxy:1.7
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
depends_on:
- rabbitmq1
- rabbitmq2
- rabbitmq3
ports:
- 15672:15672
- 5672:5672
Below is my cluster-entrypoint.sh file:
#!/bin/bash
set -e
# Start RMQ from entry point.
# This will ensure that environment variables passed
# will be honored
/usr/local/bin/docker-entrypoint.sh rabbitmq-server -detached
# Do the cluster dance
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbitmq1
# Stop the entire RMQ server. This is done so that we
# can attach to it again, but without the -detached flag,
# making it run in the foreground
rabbitmqctl stop
# Wait a while for the app to really stop
sleep 2s
# Start it
rabbitmq-server
Sorry for too many logs. I am using the rabbitmq:3.8.34 image; the Erlang cookies for the instances in the cluster end up different, and rabbitmq1 starts but the other instances do not start.
Below is the log:
haproxy_1 | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -db -f /usr/local/etc/haproxy/haproxy.cfg -Ds
rabbitmq1_1 |
rabbitmq1_1 | warning: /var/lib/rabbitmq/.erlang.cookie contents do not match RABBITMQ_ERLANG_COOKIE
rabbitmq1_1 |
rabbitmq1_1 | WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from '$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq2_1 |
rabbitmq2_1 | warning: /var/lib/rabbitmq/.erlang.cookie contents do not match RABBITMQ_ERLANG_COOKIE
rabbitmq2_1 |
rabbitmq2_1 | WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from '$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq3_1 |
rabbitmq3_1 | warning: /var/lib/rabbitmq/.erlang.cookie contents do not match RABBITMQ_ERLANG_COOKIE
rabbitmq3_1 |
rabbitmq3_1 | WARNING: '/var/lib/rabbitmq/.erlang.cookie' was populated from '$RABBITMQ_ERLANG_COOKIE', which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq1_1 | WARNING: 'docker-entrypoint.sh' generated/modified the RabbitMQ configuration file, which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq1_1 |
rabbitmq1_1 | Generated end result, for reference:
rabbitmq1_1 | ------------------------------------
rabbitmq1_1 | loopback_users.guest = false
rabbitmq1_1 | listeners.tcp.default = 5672
rabbitmq1_1 | default_pass = guest
rabbitmq1_1 | default_user = guest
rabbitmq1_1 | default_vhost = /
rabbitmq1_1 | management.tcp.port = 15672
rabbitmq1_1 | ------------------------------------
rabbitmq3_1 | WARNING: 'docker-entrypoint.sh' generated/modified the RabbitMQ configuration file, which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq3_1 |
rabbitmq3_1 | Generated end result, for reference:
rabbitmq3_1 | ------------------------------------
rabbitmq3_1 | loopback_users.guest = false
rabbitmq3_1 | listeners.tcp.default = 5672
rabbitmq3_1 | default_pass = guest
rabbitmq3_1 | default_user = guest
rabbitmq3_1 | default_vhost = /
rabbitmq3_1 | management.tcp.port = 15672
rabbitmq3_1 | ------------------------------------
rabbitmq2_1 | WARNING: 'docker-entrypoint.sh' generated/modified the RabbitMQ configuration file, which will no longer happen in 3.9 and later! (https://github.com/docker-library/rabbitmq/pull/424)
rabbitmq2_1 |
rabbitmq2_1 | Generated end result, for reference:
rabbitmq2_1 | ------------------------------------
rabbitmq2_1 | loopback_users.guest = false
rabbitmq2_1 | listeners.tcp.default = 5672
rabbitmq2_1 | default_pass = guest
rabbitmq2_1 | default_user = guest
rabbitmq2_1 | default_vhost = /
rabbitmq2_1 | management.tcp.port = 15672
rabbitmq2_1 | ------------------------------------
rabbitmq3_1 | RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
rabbitmq3_1 | Stopping rabbit application on node rabbit#rabbitmq3 ...
rabbitmq3_1 | Error: unable to perform an operation on node 'rabbit#rabbitmq3'. Please see diagnostics information and suggestions below.
rabbitmq3_1 |
rabbitmq3_1 | Most common reasons for this are:
rabbitmq3_1 |
rabbitmq3_1 | * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
rabbitmq3_1 | * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
rabbitmq3_1 | * Target node is not running
rabbitmq3_1 |
rabbitmq3_1 | In addition to the diagnostics info below:
rabbitmq3_1 |
rabbitmq3_1 | * See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
rabbitmq3_1 | * Consult server logs on node rabbit#rabbitmq3
rabbitmq3_1 | * If target node is configured to use long node names, don't forget to use --longnames with CLI tools
rabbitmq3_1 |
rabbitmq3_1 | DIAGNOSTICS
rabbitmq3_1 | ===========
rabbitmq3_1 |
rabbitmq3_1 | attempted to contact: [rabbit#rabbitmq3]
rabbitmq3_1 |
rabbitmq3_1 | rabbit#rabbitmq3:
rabbitmq3_1 | * connected to epmd (port 4369) on rabbitmq3
rabbitmq3_1 | * epmd reports: node 'rabbit' not running at all
rabbitmq3_1 | no other nodes on rabbitmq3
rabbitmq3_1 | * suggestion: start the node
rabbitmq3_1 |
rabbitmq3_1 | Current node details:
rabbitmq3_1 | * node name: 'rabbitmqcli-797-rabbit#rabbitmq3'
rabbitmq3_1 | * effective user's home directory: /var/lib/rabbitmq
rabbitmq3_1 | * Erlang cookie hash: gnzLDuqKcGxMNKFokfhOew==
rabbitmq3_1 |
docker-rabbitmq-cluster_rabbitmq3_1 exited with code 69
rabbitmq2_1 | RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
rabbitmq2_1 | Stopping rabbit application on node rabbit#rabbitmq2 ...
rabbitmq2_1 | Error: unable to perform an operation on node 'rabbit#rabbitmq2'. Please see diagnostics information and suggestions below.
rabbitmq2_1 |
rabbitmq2_1 | Most common reasons for this are:
rabbitmq2_1 |
rabbitmq2_1 | * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
rabbitmq2_1 | * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
rabbitmq2_1 | * Target node is not running
rabbitmq2_1 |
rabbitmq2_1 | In addition to the diagnostics info below:
rabbitmq2_1 |
rabbitmq2_1 | * See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
rabbitmq2_1 | * Consult server logs on node rabbit#rabbitmq2
rabbitmq2_1 | * If target node is configured to use long node names, don't forget to use --longnames with CLI tools
rabbitmq2_1 |
rabbitmq2_1 | DIAGNOSTICS
rabbitmq2_1 | ===========
rabbitmq2_1 |
rabbitmq2_1 | attempted to contact: [rabbit#rabbitmq2]
rabbitmq2_1 |
rabbitmq2_1 | rabbit#rabbitmq2:
rabbitmq2_1 | * connected to epmd (port 4369) on rabbitmq2
rabbitmq2_1 | * epmd reports: node 'rabbit' not running at all
rabbitmq2_1 | no other nodes on rabbitmq2
rabbitmq2_1 | * suggestion: start the node
rabbitmq2_1 |
rabbitmq2_1 | Current node details:
rabbitmq2_1 | * node name: 'rabbitmqcli-568-rabbit#rabbitmq2'
rabbitmq2_1 | * effective user's home directory: /var/lib/rabbitmq
rabbitmq2_1 | * Erlang cookie hash: gnzLDuqKcGxMNKFokfhOew==
rabbitmq2_1 |
docker-rabbitmq-cluster_rabbitmq2_1 exited with code 69
rabbitmq1_1 | Configuring logger redirection
rabbitmq1_1 | 2022-07-05 02:43:40.659 [debug] <0.288.0> Lager installed handler error_logger_lager_h into error_logger
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.291.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.312.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.303.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.309.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.294.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.297.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.306.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.670 [debug] <0.300.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.671 [debug] <0.315.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.672 [debug] <0.318.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.673 [debug] <0.321.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.675 [debug] <0.324.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.676 [debug] <0.327.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event
rabbitmq1_1 | 2022-07-05 02:43:40.691 [info] <0.44.0> Application lager started on node rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:41.159 [debug] <0.284.0> Lager installed handler lager_backend_throttle into lager_event
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:32] : 'server rabbitmq2' : could not resolve address 'rabbitmq2'.
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:33] : 'server rabbitmq3' : could not resolve address 'rabbitmq3'.
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:43] : 'server rabbitmq2' : could not resolve address 'rabbitmq2'.
haproxy_1 | [ALERT] 185/024336 (8) : parsing [/usr/local/etc/haproxy/haproxy.cfg:44] : 'server rabbitmq3' : could not resolve address 'rabbitmq3'.
haproxy_1 | [ALERT] 185/024336 (8) : Failed to initialize server(s) addr.
haproxy_1 | <5>haproxy-systemd-wrapper: exit, haproxy RC=1
docker-rabbitmq-cluster_haproxy_1 exited with code 1
rabbitmq1_1 | 2022-07-05 02:43:43.065 [info] <0.44.0> Application mnesia started on node rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:43.066 [info] <0.273.0>
rabbitmq1_1 | Starting RabbitMQ 3.8.34 on Erlang 24.3.4.1 [emu]
rabbitmq1_1 | Copyright (c) 2007-2022 VMware, Inc. or its affiliates.
rabbitmq1_1 | Licensed under the MPL 2.0. Website: https://rabbitmq.com
rabbitmq1_1 |
rabbitmq1_1 | ## ## RabbitMQ 3.8.34
rabbitmq1_1 | ## ##
rabbitmq1_1 | ########## Copyright (c) 2007-2022 VMware, Inc. or its affiliates.
rabbitmq1_1 | ###### ##
rabbitmq1_1 | ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
rabbitmq1_1 |
rabbitmq1_1 | Erlang: 24.3.4.1 [emu]
rabbitmq1_1 | TLS Library: OpenSSL - OpenSSL 1.1.1o 3 May 2022
rabbitmq1_1 |
rabbitmq1_1 | Doc guides: https://rabbitmq.com/documentation.html
rabbitmq1_1 | Support: https://rabbitmq.com/contact.html
rabbitmq1_1 | Tutorials: https://rabbitmq.com/getstarted.html
rabbitmq1_1 | Monitoring: https://rabbitmq.com/monitoring.html
rabbitmq1_1 |
rabbitmq1_1 | Logs: <stdout>
rabbitmq1_1 |
rabbitmq1_1 | Config file(s): /etc/rabbitmq/rabbitmq.conf
rabbitmq1_1 |
rabbitmq1_1 | Starting broker...2022-07-05 02:43:43.068 [info] <0.273.0>
rabbitmq1_1 | node : rabbit#rabbitmq1
rabbitmq1_1 | home dir : /var/lib/rabbitmq
rabbitmq1_1 | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq1_1 | cookie hash : VlfoFK5J8f9Ln3G9sXDoPQ==
rabbitmq1_1 | log(s) : <stdout>
rabbitmq1_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.265 [info] <0.44.0> Application amqp_client started on node rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.584.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.612.0> Statistics database started.
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.611.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq1_1 | 2022-07-05 02:43:44.279 [info] <0.44.0> Application rabbitmq_management started on node rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.292 [info] <0.44.0> Application prometheus started on node rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.294 [info] <0.625.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq1_1 | 2022-07-05 02:43:44.294 [info] <0.525.0> Ready to start client connection listeners
rabbitmq1_1 | 2022-07-05 02:43:44.294 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit#rabbitmq1
rabbitmq1_1 | 2022-07-05 02:43:44.297 [info] <0.669.0> started TCP listener on [::]:5672
rabbitmq1_1 | 2022-07-05 02:43:45.321 [info] <0.525.0> Server startup complete; 4 plugins started.
rabbitmq1_1 | * rabbitmq_prometheus
rabbitmq1_1 | * rabbitmq_management
rabbitmq1_1 | * rabbitmq_web_dispatch
rabbitmq1_1 | * rabbitmq_management_agent
rabbitmq1_1 | completed with 4 plugins.
rabbitmq1_1 | 2022-07-05 02:43:45.322 [info] <0.525.0> Resetting node maintenance status
I am not sure what I am missing. Sorry for the amount of logs.
After analysing your files and logs, I can see a few issues.
Your setup can only start successfully once, while there are no files/configuration yet and you are doing it from scratch. The reason for this behaviour is that RabbitMQ stores its configuration in an internal DB called Mnesia, and once nodes have been added it must be present for subsequent starts. If you don't preserve it, you will observe errors about the node waiting for Mnesia to find its peer nodes.
Another issue with repeated starts is that a node that was added to the cluster (2 or 3) marks itself as a cluster member; you will see an error because the main node (1) expects to reconnect to the nodes it saw earlier, but your entrypoint has already reset nodes 2 and 3.
You cannot mix versions of RabbitMQ, because different versions may use different data structures that will not allow the nodes to sync, and you will get an error like "schema_integrity_check_failed..."; all nodes should be identical.
When I was using RabbitMQ, my configuration ensured there was a persistent location (disk) with all the data, so that repeated starts reused the already-initialised data. It is also good practice to use cluster management/peer discovery backends supported by RabbitMQ, such as etcd or Consul, so you don't need to handle it yourself.
Hope that helps.
In general, I was able to start your setup successfully on my machine (macOS) with Docker. The steps are the following:
ensure you have never started the compose stack before, so there is no data left over from previous runs
prepare everything in the folder
run docker-compose up and everything works
use docker-compose down to clean up the stack, not the '... stop' command, because otherwise the data will stay (see the commands just below)
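A minimal sketch of those steps as commands, run from the folder containing the docker-compose.yml:
# Remove any previous stack so no state from earlier runs is left behind
docker-compose down
# Start the cluster fresh
docker-compose up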
Below you can find a log of the start. Please ignore the haproxy error; there is no config file attached, so it is bound to fail.
% docker-compose up
Creating network "rabbitmq_default" with the default driver
Creating rabbitmq_rabbitmq1_1 ... done
Creating rabbitmq_rabbitmq3_1 ... done
Creating rabbitmq_rabbitmq2_1 ... done
Creating rabbitmq_haproxy_1 ... done
Attaching to rabbitmq_rabbitmq1_1, rabbitmq_rabbitmq3_1, rabbitmq_rabbitmq2_1, rabbitmq_haproxy_1
(due to limitation I've put it here)
My setup looks like this; the only change is that the RabbitMQ data is persisted and the cluster init is done manually. You can also mount the data folder from a file-system path.
version: '3'
services:
rabbitmq1:
image: rabbitmq:3.8.34-management
hostname: rabbitmq1
environment:
- RABBITMQ_ERLANG_COOKIE=12345
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
- RABBITMQ_DEFAULT_VHOST=/
volumes:
- rabbitmq-01-data:/var/lib/rabbitmq
rabbitmq2:
image: rabbitmq:3.8.34-management
hostname: rabbitmq2
depends_on:
- rabbitmq1
environment:
- RABBITMQ_ERLANG_COOKIE=12345
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
- RABBITMQ_DEFAULT_VHOST=/
volumes:
- rabbitmq-02-data:/var/lib/rabbitmq
rabbitmq3:
image: rabbitmq:3.8.34-management
hostname: rabbitmq3
depends_on:
- rabbitmq1
environment:
- RABBITMQ_ERLANG_COOKIE=12345
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
- RABBITMQ_DEFAULT_VHOST=/
volumes:
- rabbitmq-03-data:/var/lib/rabbitmq
volumes:
rabbitmq-01-data:
rabbitmq-02-data:
rabbitmq-03-data:
Run the following manually on every "follower" node:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app
Don't forget to enable mirrored (redundant) queues if applicable for your case:
rabbitmqctl set_policy ha ".*" '{"ha-mode":"all"}'
I have a complete RabbitMQ setup that forms a cluster using docker-compose here:
https://github.com/lukebakken/docker-rabbitmq-cluster
Please note that the following mirroring policy is NOT recommended. There is no need to mirror queues to all nodes:
rabbitmqctl set_policy ha ".*" '{"ha-mode":"all"}'
You should mirror to 2 nodes in your cluster, as in the example below.
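A hedged sketch of such a policy (the policy name ha-two is arbitrary; the pattern and definition follow the standard rabbitmqctl set_policy syntax):
# Mirror each matching queue to exactly two nodes and sync new mirrors automatically
rabbitmqctl set_policy ha-two ".*" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'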
BETTER YET, use the latest version of RabbitMQ and use Quorum Queues! Classic HA mirroring will be removed from RabbitMQ in version 4.0
https://www.rabbitmq.com/quorum-queues.html
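As a rough illustration (assuming the management plugin is enabled so rabbitmqadmin is available; the queue name is just an example), a quorum queue is declared with the x-queue-type argument rather than a policy:
# Declare a durable quorum queue named "orders"
rabbitmqadmin declare queue name=orders durable=true arguments='{"x-queue-type":"quorum"}'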
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I am using Docker on Windows.
My docker-compose file is shown below:
version: '3.5'
services:
postgres:
container_name: postgres_container
image: postgres
environment:
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-root}
PGDATA: /data/postgres
volumes:
- ./postgres-data:/var/
ports:
- "5432:5432"
restart: unless-stopped
When I build it, I get the following error log and the container exits:
Attaching to postgres_container
postgres_container | The files belonging to this database system will be owned by user "postgres".
postgres_container | This user must also own the server process.
postgres_container |
postgres_container | The database cluster will be initialized with locale "en_US.utf8".
postgres_container | The default database encoding has accordingly been set to "UTF8".
postgres_container | The default text search configuration will be set to "english".
postgres_container |
postgres_container | Data page checksums are disabled.
postgres_container |
postgres_container | fixing permissions on existing directory /data/postgres ... ok
postgres_container | creating subdirectories ... ok
postgres_container | selecting dynamic shared memory implementation ... posix
postgres_container | selecting default max_connections ... 100
postgres_container | selecting default shared_buffers ... 128MB
postgres_container | selecting default time zone ... Etc/UTC
postgres_container | creating configuration files ... ok
postgres_container | running bootstrap script ... ok
postgres_container | performing post-bootstrap initialization ... ok
postgres_container | syncing data to disk ... ok
postgres_container |
postgres_container |
postgres_container | Success. You can now start the database server using:
postgres_container |
postgres_container | pg_ctl -D /data/postgres -l logfile start
postgres_container |
postgres_container | initdb: warning: enabling "trust" authentication for local connections
postgres_container | You can change this by editing pg_hba.conf or using the option -A, or
postgres_container | --auth-local and --auth-host, the next time you run initdb.
postgres_container | waiting for server to start....2020-04-17 13:18:31.599 UTC [47] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_container | 2020-04-17 13:18:31.607 UTC [47] LOG: could not bind Unix address "/var/run/postgresql/.s.PGSQL.5432": Input/output error
postgres_container | 2020-04-17 13:18:31.607 UTC [47] HINT: Is another postmaster already running on port 5432? If not, remove socket file "/var/run/postgresql/.s.PGSQL.5432" and retry.
postgres_container | 2020-04-17 13:18:31.607 UTC [47] WARNING: could not create Unix-domain socket in directory "/var/run/postgresql"
postgres_container | 2020-04-17 13:18:31.607 UTC [47] FATAL: could not create any Unix-domain sockets
postgres_container | 2020-04-17 13:18:31.610 UTC [47] LOG: database system is shut down
postgres_container | stopped waiting
postgres_container | pg_ctl: could not start server
postgres_container | Examine the log output.
postgres_container |
postgres_container | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_container |
postgres_container | 2020-04-17 13:18:32.246 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_container | 2020-04-17 13:18:32.246 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_container | 2020-04-17 13:18:32.246 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_container | 2020-04-17 13:18:32.255 UTC [1] LOG: could not bind Unix address "/var/run/postgresql/.s.PGSQL.5432": Input/output error
postgres_container | 2020-04-17 13:18:32.255 UTC [1] HINT: Is another postmaster already running on port 5432? If not, remove socket file "/var/run/postgresql/.s.PGSQL.5432" and retry.
postgres_container | 2020-04-17 13:18:32.255 UTC [1] WARNING: could not create Unix-domain socket in directory "/var/run/postgresql"
postgres_container | 2020-04-17 13:18:32.255 UTC [1] FATAL: could not create any Unix-domain sockets
postgres_container | 2020-04-17 13:18:32.259 UTC [1] LOG: database system is shut down
postgres_container exited with code 1
I checked port 5432; it's open and no process is using it.
When I remove the volume from my docker-compose.yml file, it works perfectly.
The volume I am using, ./postgres-data, is a local directory on my system that I want to map into the PostgreSQL container in order to restore a database.
You are using Docker on Windows and mounting the directory where the socket will be created (/var) as a volume, but the Windows filesystem doesn't support Unix sockets.
Change the configuration so that you:
leave the Unix socket (/var/run/postgresql/...) inside the container, without mounting it as a volume
mount only the data directory as a volume (see the sketch below)
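A minimal sketch of that change, reusing the PGDATA path already set in the compose file above (only the volumes entry changes; the rest of the service stays as it is):
version: '3.5'
services:
  postgres:
    image: postgres
    environment:
      PGDATA: /data/postgres
    volumes:
      # mount only the data directory; the Unix socket in /var/run/postgresql
      # stays on the container's own filesystem
      - ./postgres-data:/data/postgres
    ports:
      - "5432:5432"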
I'm using a Raspberry Pi with Raspbian. I want to use Kafka to stream data from a camera to my phone. I downloaded this package from the Kafka website, which contains ZooKeeper and Kafka:
https://www.apache.org/dyn/closer.cgi?path=/kafka/2.4.1/kafka_2.12-2.4.1.tgz
First I started ZooKeeper with the zookeeper-server-start.sh script located in the bin directory, by running
"sudo bin/zookeeper-server-start.sh config/zookeeper.properties" in the terminal. I got back:
[2020-04-07 17:56:44,843] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,854] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,936] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,937] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,975] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-04-07 17:56:44,976] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-04-07 17:56:44,978] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-04-07 17:56:44,979] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2020-04-07 17:56:45,010] INFO Log4j found with jmx enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2020-04-07 17:56:45,263] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,264] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,268] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,269] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,271] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-04-07 17:56:45,319] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-04-07 17:56:45,483] INFO Server environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,484] INFO Server environment:host.name=Rupert (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,485] INFO Server environment:java.version=11.0.6 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,486] INFO Server environment:java.vendor=Raspbian (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,487] INFO Server environment:java.home=/usr/lib/jvm/java-11-openjdk-armhf (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,488] INFO Server environment:java.class.path=/usr/local/kafka/bin/../libs/activation-1.1.1.jar:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/usr/local/kafka/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/kafka/bin/../libs/commons-cli-1.4.jar:/usr/local/kafka/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/kafka/bin/../libs/connect-api-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-basic-auth-extension-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-file-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-json-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-client-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-runtime-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-transforms-2.4.1.jar:/usr/local/kafka/bin/../libs/guava-20.0.jar:/usr/local/kafka/bin/../libs/hk2-api-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.5.0.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-core-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/usr/local/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/usr/local/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/usr/local/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/usr/local/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/usr/local/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/usr/local/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/kafka/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/kafka/bin/../libs/jersey-client-2.28.jar:/usr/local/kafka/bin/../libs/jersey-common-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/usr/local/kafka/bin/../libs/jersey-hk2-2.28.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/usr/local/kafka/bin/../libs/jersey-server-2.28.jar:/usr/local/kafka/bin/../libs/jetty-client-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-http-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-io-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-security-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-server-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-util-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1-sources.jar:/usr/local/kafka/bin/../libs/kafka-clients-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-examples-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-strea
ms-scala_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-test-utils-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-tools-2.4.1.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/lz4-java-1.6.0.jar:/usr/local/kafka/bin/../libs/maven-artifact-3.6.1.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/paranamer-2.8.jar:/usr/local/kafka/bin/../libs/plexus-utils-3.2.0.jar:/usr/local/kafka/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/usr/local/kafka/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/usr/local/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/usr/local/kafka/bin/../libs/scala-library-2.12.10.jar:/usr/local/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/usr/local/kafka/bin/../libs/scala-reflect-2.12.10.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.28.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.28.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/usr/local/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/usr/local/kafka/bin/../libs/zookeeper-3.5.7.jar:/usr/local/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/usr/local/kafka/bin/../libs/zstd-jni-1.4.3-1.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,498] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/arm-linux-gnueabihf/jni:/lib/arm-linux-gnueabihf:/usr/lib/arm-linux-gnueabihf:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,499] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,500] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,501] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,502] INFO Server environment:os.arch=arm (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,503] INFO Server environment:os.version=4.19.66-v7+ (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,504] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,506] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,507] INFO Server environment:user.dir=/usr/local/kafka (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,508] INFO Server environment:os.memory.free=493MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,509] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,510] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,529] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,531] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,539] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,661] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2020-04-07 17:56:45,699] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-07 17:56:45,773] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-07 17:56:45,965] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
[2020-04-07 17:56:45,994] INFO Reading snapshot /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap)
[2020-04-07 17:56:46,076] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-04-07 17:56:46,377] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
I believe that this is correct, but please do bring anything to my attention.
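Before starting Kafka, a quick sanity check that ZooKeeper is actually listening can help (a sketch; ss is part of iproute2, and nc may need to be installed on Raspbian):
# Confirm something is listening on the ZooKeeper client port
ss -ltn | grep 2181
# Or simply test the TCP connection
nc -zv localhost 2181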
Next I tried to run Kafka using "sudo bin/kafka-server-start.sh config/server.properties" and after running for 10 seconds it returned:
[2020-04-07 17:49:40,577] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-07 17:49:47,215] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2020-04-07 17:49:47,222] INFO starting (kafka.server.KafkaServer)
[2020-04-07 17:49:47,233] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-04-07 17:49:47,558] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:47,679] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,681] INFO Client environment:host.name=Rupert (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,682] INFO Client environment:java.version=11.0.6 (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,682] INFO Client environment:java.vendor=Raspbian (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,683] INFO Client environment:java.home=/usr/lib/jvm/java-11-openjdk-armhf (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,684] INFO Client environment:java.class.path=/usr/local/kafka/bin/../libs/activation-1.1.1.jar:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/usr/local/kafka/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/kafka/bin/../libs/commons-cli-1.4.jar:/usr/local/kafka/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/kafka/bin/../libs/connect-api-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-basic-auth-extension-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-file-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-json-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-client-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-runtime-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-transforms-2.4.1.jar:/usr/local/kafka/bin/../libs/guava-20.0.jar:/usr/local/kafka/bin/../libs/hk2-api-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.5.0.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-core-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/usr/local/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/usr/local/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/usr/local/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/usr/local/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/usr/local/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/usr/local/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/kafka/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/kafka/bin/../libs/jersey-client-2.28.jar:/usr/local/kafka/bin/../libs/jersey-common-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/usr/local/kafka/bin/../libs/jersey-hk2-2.28.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/usr/local/kafka/bin/../libs/jersey-server-2.28.jar:/usr/local/kafka/bin/../libs/jetty-client-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-http-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-io-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-security-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-server-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-util-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1-sources.jar:/usr/local/kafka/bin/../libs/kafka-clients-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-examples-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-strea
ms-scala_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-test-utils-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-tools-2.4.1.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/lz4-java-1.6.0.jar:/usr/local/kafka/bin/../libs/maven-artifact-3.6.1.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/paranamer-2.8.jar:/usr/local/kafka/bin/../libs/plexus-utils-3.2.0.jar:/usr/local/kafka/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/usr/local/kafka/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/usr/local/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/usr/local/kafka/bin/../libs/scala-library-2.12.10.jar:/usr/local/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/usr/local/kafka/bin/../libs/scala-reflect-2.12.10.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.28.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.28.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/usr/local/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/usr/local/kafka/bin/../libs/zookeeper-3.5.7.jar:/usr/local/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/usr/local/kafka/bin/../libs/zstd-jni-1.4.3-1.jar (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,693] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/arm-linux-gnueabihf/jni:/lib/arm-linux-gnueabihf:/usr/lib/arm-linux-gnueabihf:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,695] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,696] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,697] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,698] INFO Client environment:os.arch=arm (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,699] INFO Client environment:os.version=4.19.66-v7+ (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,700] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,701] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,702] INFO Client environment:user.dir=/usr/local/kafka (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,703] INFO Client environment:os.memory.free=975MB (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,704] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,704] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,769] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#114918a (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,887] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-04-07 17:49:48,017] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-04-07 17:49:48,130] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:48,313] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:48,449] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:48,557] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:49,678] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:49,682] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:50,784] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:50,787] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:51,890] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:51,894] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:52,997] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:53,000] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:54,114] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:54,118] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:54,353] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:55,221] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:55,375] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:55,396] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:55,412] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:55,449] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:259)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:255)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:113)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1858)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:375)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:399)
at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
[2020-04-07 17:49:55,480] INFO shutting down (kafka.server.KafkaServer)
[2020-04-07 17:49:55,577] INFO shut down completed (kafka.server.KafkaServer)
[2020-04-07 17:49:55,583] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-04-07 17:49:55,612] INFO shutting down (kafka.server.KafkaServer)
My assumption is that Kafka can't reach ZooKeeper, but I really have no clue.
Is there a way I can test whether ZooKeeper is working?
Thanks to anyone who helps, and stay healthy.
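The repeated "Connection refused" lines above mean that nothing was accepting connections on localhost:2181 when the broker started, so checking ZooKeeper directly is a reasonable first step. A minimal sketch of such a check, assuming Kafka lives under /usr/local/kafka as in the log and, for the ruok probe, that the four-letter-word commands are whitelisted (4lw.commands.whitelist=ruok in zoo.cfg on ZooKeeper 3.5+):
# Is anything listening on the ZooKeeper client port?
ss -ltn | grep 2181
# Liveness probe; a healthy ZooKeeper replies "imok"
echo ruok | nc localhost 2181
# Or use the shell bundled with Kafka to list the root znode
/usr/local/kafka/bin/zookeeper-shell.sh localhost:2181 ls /
If none of these succeed, ZooKeeper is not running (or not listening on 2181), which matches the startup failure above.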
When I pressed Ctrl+Shift+C to get rid of the text that showed up afterwards, it also undid what needed to happen.
I have a Dockerfile and a docker-compose file in my project directory. I am running the docker-compose file with the following command:
docker-compose up
It builds and runs the different images for the server and the database, but I am getting an error saying that my package.json file is not in the correct directory. I am not sure where it is going wrong.
Here is my Dockerfile:
FROM node:10
WORKDIR /app
COPY package.json ./app
RUN npm install
COPY . /app
CMD npm start
EXPOSE 5585
This is my docker-compose file:
web:
  image: node
  command: npm start
  ports:
    - "5585:5588"
  links:
    - db
  working_dir: /app
  environment:
    SEQ_DB: addidas
    SEQ_USER: sdfsdf
    SEQ_PW: sdfsdfs
    PORT: 4242
    DATABASE_URL: postgres://sdfsdf:sdfsdfs@localhost:5432/addidas
db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: sdfsdf
    POSTGRES_PASSWORD: sdfsdfs
The error that I am getting in my terminal is the following:
Attaching to addidas_db_1, addidas_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | waiting for server to start....2018-11-06 17:38:51.968 UTC [43] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-11-06 17:38:51.983 UTC [44] LOG: database system was shut down at 2018-11-06 17:38:51 UTC
db_1 | 2018-11-06 17:38:51.987 UTC [43] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | waiting for server to shut down...2018-11-06 17:38:52.438 UTC [43] LOG: received fast shutdown request
db_1 | .2018-11-06 17:38:52.441 UTC [43] LOG: aborting any active transactions
db_1 | 2018-11-06 17:38:52.443 UTC [43] LOG: background worker "logical replication launcher" (PID 50) exited with exit code 1
db_1 | 2018-11-06 17:38:52.444 UTC [45] LOG: shutting down
db_1 | 2018-11-06 17:38:52.459 UTC [43] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2018-11-06 17:38:52.556 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-11-06 17:38:52.556 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-11-06 17:38:52.560 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-11-06 17:38:52.575 UTC [61] LOG: database system was shut down at 2018-11-06 17:38:52 UTC
db_1 | 2018-11-06 17:38:52.580 UTC [1] LOG: database system is ready to accept connections
db_1 | 2018-11-06 17:46:15.922 UTC [1] LOG: received smart shutdown request
db_1 | 2018-11-06 17:46:15.926 UTC [1] LOG: background worker "logical replication launcher" (PID 67) exited with exit code 1
db_1 | 2018-11-06 17:46:15.928 UTC [62] LOG: shutting down
db_1 | 2018-11-06 17:46:15.944 UTC [1] LOG: database system is shut down
db_1 | 2018-11-06 17:46:19.284 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-11-06 17:46:19.284 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-11-06 17:46:19.288 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-11-06 17:46:19.309 UTC [24] LOG: database system was shut down at 2018-11-06 17:46:15 UTC
db_1 | 2018-11-06 17:46:19.316 UTC [1] LOG: database system is ready to accept connections
web_1 | npm ERR! path /app/package.json
web_1 | npm ERR! code ENOENT
web_1 | npm ERR! errno -2
web_1 | npm ERR! syscall open
web_1 | npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'
web_1 | npm ERR! enoent This is related to npm not being able to find a file.
web_1 | npm ERR! enoent
web_1 |
web_1 | npm ERR! A complete log of this run can be found in:
web_1 | npm ERR! /root/.npm/_logs/2018-11-06T17_47_14_825Z-debug.log
addidas_web_1 exited with code 254
You are not using your Docker image in your docker-compose.yml.
You should point docker-compose at your Dockerfile with a build: key; its value is the directory that contains the Dockerfile (here simply ., since it sits next to docker-compose.yml):
web:
  build: .
There are also some mistakes in your configuration. You should put the containers (your web server and the database) on the same network so that the web server can reach the database. Note that a top-level networks: key requires the version 2 compose file format, so the file also needs version: and services: keys:
version: "2"

networks:
  mynetwork:
    driver: bridge

services:
  web:
    build: .
    networks:
      - mynetwork
    links:
      - db
    environment:
      SEQ_DB: addidas
      SEQ_USER: sdfsdf
      SEQ_PW: sdfsdfs
      PORT: 4242
      DATABASE_URL: postgres://sdfsdf:sdfsdfs@db:5432/addidas
  db:
    image: postgres
    ports:
      - "5432:5432"
    networks:
      - mynetwork
    environment:
      POSTGRES_USER: sdfsdf
      POSTGRES_PASSWORD: sdfsdfs
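With build: in place, the web image has to be rebuilt before it is started; a minimal sequence, assuming the file above is saved as docker-compose.yml next to the Dockerfile, might be:
docker-compose config        # validate the file and print the resolved configuration
docker-compose up --build    # rebuild the web image from the Dockerfile, then start web and db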
Hi, I'm trying to run a Kafka server on a Red Hat server:
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 6.6 (Santiago)
Release: 6.6
Codename: Santiago
The server has java installed:
java version "1.7.0_79"
OpenJDK Runtime Environment (suse-2.5.5.3.el6_6-x86_64 u79-b14)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
I start ZooKeeper without any problem, but when trying to start Kafka with bin/kafka-server-start.sh config/server.properties it prompts an error:
[2015-08-27 11:03:01,542] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,585] INFO Property broker.id is overridden to 0 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,585] INFO Property log.cleaner.enable is overridden to false (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,585] INFO Property log.dirs is overridden to /tmp/kafka-logs (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,586] INFO Property log.retention.check.interval.ms is overridden to 300000 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,586] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,586] INFO Property log.segment.bytes is overridden to 1073741824 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,586] INFO Property num.io.threads is overridden to 8 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,586] INFO Property num.network.threads is overridden to 3 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,587] INFO Property num.partitions is overridden to 1 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,587] INFO Property num.recovery.threads.per.data.dir is overridden to 1 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,587] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,587] INFO Property socket.receive.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,587] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,588] INFO Property socket.send.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,588] INFO Property zookeeper.connect is overridden to localhost:2181 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,588] INFO Property zookeeper.connection.timeout.ms is overridden to 6000 (kafka.utils.VerifiableProperties)
[2015-08-27 11:03:01,656] INFO [Kafka Server 0], starting (kafka.server.KafkaServer)
[2015-08-27 11:03:01,658] INFO [Kafka Server 0], Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2015-08-27 11:03:01,668] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2015-08-27 11:03:01,678] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,678] INFO Client environment:host.name=<NA> (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,678] INFO Client environment:java.version=1.7.0_79 (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,678] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,678] INFO Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:java.class.path=:/opt/kafka_2.9.1-0.8.2.1/bin/../core/build/dependant-libs-2.10.4*/*.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../examples/build/libs//kafka-examples*.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../clients/build/libs/kafka-clients*.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/jopt-simple-3.2.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/kafka_2.9.1-0.8.2.1.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/kafka_2.9.1-0.8.2.1-javadoc.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/kafka_2.9.1-0.8.2.1-scaladoc.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/kafka_2.9.1-0.8.2.1-sources.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/kafka_2.9.1-0.8.2.1-test.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/kafka-clients-0.8.2.1.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/log4j-1.2.16.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/lz4-1.2.0.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/scala-library-2.9.1.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/slf4j-api-1.7.6.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/slf4j-log4j12-1.6.1.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/snappy-java-1.1.1.6.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/zkclient-0.3.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../libs/zookeeper-3.4.6.jar:/opt/kafka_2.9.1-0.8.2.1/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:os.version=2.6.32-504.23.4.el6.x86_64 (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,679] INFO Client environment:user.dir=/opt/kafka_2.9.1-0.8.2.1 (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,680] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#1d88518f (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:01,700] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-08-27 11:03:01,705] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2015-08-27 11:03:01,728] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14f6e6478700000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2015-08-27 11:03:01,730] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2015-08-27 11:03:01,828] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
[2015-08-27 11:03:01,839] INFO Loading logs. (kafka.log.LogManager)
[2015-08-27 11:03:01,848] INFO Logs loading complete. (kafka.log.LogManager)
[2015-08-27 11:03:01,849] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2015-08-27 11:03:01,853] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2015-08-27 11:03:01,886] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2015-08-27 11:03:01,887] INFO [Socket Server on Broker 0], Started (kafka.network.SocketServer)
[2015-08-27 11:03:01,964] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2015-08-27 11:03:02,007] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2015-08-27 11:03:02,081] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.net.UnknownHostException: SRV101004013: SRV101004013
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:54)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
at kafka.server.KafkaServer.startup(KafkaServer.scala:124)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
at kafka.Kafka$.main(Kafka.scala:46)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: SRV101004013
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 6 more
[2015-08-27 11:03:02,084] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
[2015-08-27 11:03:02,086] INFO [Socket Server on Broker 0], Shutting down (kafka.network.SocketServer)
[2015-08-27 11:03:02,091] INFO [Socket Server on Broker 0], Shutdown completed (kafka.network.SocketServer)
[2015-08-27 11:03:02,092] INFO [Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool)
[2015-08-27 11:03:02,128] INFO [Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2015-08-27 11:03:02,164] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2015-08-27 11:03:02,346] INFO [Replica Manager on Broker 0]: Shut down (kafka.server.ReplicaManager)
[2015-08-27 11:03:02,347] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2015-08-27 11:03:02,348] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2015-08-27 11:03:02,352] INFO [Replica Manager on Broker 0]: Shut down completely (kafka.server.ReplicaManager)
[2015-08-27 11:03:02,352] INFO Shutting down. (kafka.log.LogManager)
[2015-08-27 11:03:02,362] INFO Shutdown complete. (kafka.log.LogManager)
[2015-08-27 11:03:02,366] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2015-08-27 11:03:02,368] INFO Session: 0x14f6e6478700000 closed (org.apache.zookeeper.ZooKeeper)
[2015-08-27 11:03:02,368] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2015-08-27 11:03:02,368] INFO [Kafka Server 0], shut down completed (kafka.server.KafkaServer)
[2015-08-27 11:03:02,369] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.net.UnknownHostException: SRV101004013: SRV101004013
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:54)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
at kafka.server.KafkaServer.startup(KafkaServer.scala:124)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
at kafka.Kafka$.main(Kafka.scala:46)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: SRV101004013
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 6 more
[2015-08-27 11:03:02,372] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
I guess the problem has to do with this open issue, but I can't be sure.
Has anyone run into the same problem? Does anyone know how to fix it?
Well, it cannot resolve the host SRV101004013. The first question is: did you write this host into some Kafka or ZooKeeper config? Can you resolve the hostname using nslookup or a simple ping? If yes, do you also have the host in /etc/hosts? That matters because InetAddress.getLocalHost() ignores /etc/resolv.conf and only looks at the /etc/hosts file.
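A quick sketch of those checks on the broker host; the loopback mapping at the end is an assumption that only fits a single-node setup (use the machine's real IP if other hosts need to reach this broker):
hostname                              # prints the name Kafka tries to resolve, e.g. SRV101004013
getent hosts SRV101004013             # no output means the name does not resolve locally
echo "127.0.0.1   SRV101004013" | sudo tee -a /etc/hosts    # add a local mapping, then retry kafka-server-start.sh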