I have a secure Kafka cluster (SSL with certificates) in production, and I want to modify some logger levels on the fly without restarting the cluster (not even with a rolling update).
The official documentation states that you can modify broker configuration dynamically.
So I tried this command:
/bin/kafka-configs --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 1
only to get this error:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.
If I try with port 9093, I get a java.util.concurrent.TimeoutException.
kafka-configs is the right command to use.
You need to tell the command "who you are", i.e. authenticate.
That is achieved with the --command-config option.
There is an official example here:
kafka-configs --command-config /etc/kafka/client.properties --bootstrap-server [hostname]:9093 --describe --entity-type broker-loggers --entity-name 1
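Since the cluster in the question is secured with SSL, the client.properties file passed via --command-config needs the client-side SSL settings; a minimal sketch (paths and passwords are placeholders) could look like this:
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
# Only needed if the brokers require mutual (client) authentication:
ssl.keystore.location=/etc/kafka/secrets/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit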
Once describe works, you can alter a logger like this:
kafka-configs --command-config /etc/kafka/client.properties --bootstrap-server [hostname]:9093 --alter --add-config "kafka.authorizer.logger=INFO" --entity-type broker-loggers --entity-name 1
which results in:
Completed updating config for broker-logger 1.
Regarding Kafka-ZooKeeper security using DIGEST-MD5 authentication, I am trying to rotate/change the credentials/password in both the server (ZooKeeper) and client (Kafka) JAAS config files.
We have a cluster of 3 ZooKeeper nodes and 3 Kafka broker nodes with the JAAS configuration files below.
kafka.conf
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="password";
};
zookeeper.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="password";
};
To rotate the credentials, we do a rolling restart of the ZooKeeper server instances after updating the password. During the rolling restart, after updating the same password for the super user on the Kafka client instances one at a time, we notice the following in the server logs:
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
These INFO-level messages eventually result in an unclean shutdown and restart of the broker, which impacts writes and reads for longer than expected. I have tried commenting out requireClientAuthScheme=sasl in ZooKeeper's zoo.cfg (https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication) to allow any client to authenticate to ZooKeeper, but with no success.
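For reference, the zoo.cfg SASL settings referred to above typically look like this (a sketch; the exact authProvider index depends on the setup):
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl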
As an alternative approach, I also tried to update the credential/password in the JAAS config dynamically using sasl.jaas.config, and I get the same exception documented in this JIRA (reference: https://issues.apache.org/jira/browse/KAFKA-8010).
Does anyone have any suggestions? Thanks in advance.
I'm facing an issue related to Kafka.
I have a service (Producer) that sends messages to a Kafka topic (events). The service uses kafka_2.12 v1.0.0 and is written in Java.
I'm trying to integrate it with a sample spark-streaming project as a Consumer service (which uses kafka_2.11 v0.10.0 and is written in Scala).
The message is sent successfully from the Producer to the Kafka topic. However, I always receive the error stack below:
Exception in thread "main" org.apache.kafka.common.errors.InconsistentGroupProtocolException: The group member's supported protocols are incompatible with those of existing members.
at ... run in separate thread using org.apache.spark.util.ThreadUtils ... ()
at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:577)
at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:571)
at com.jj.streaming.ItemApp$.delayedEndpoint$com$jj$streaming$ItemApp$1(ItemApp.scala:72)
at com.jj.streaming.ItemApp$delayedInit$body.apply(ItemApp.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at com.jj.streaming.ItemApp$.main(ItemApp.scala:12)
at com.jj.streaming.ItemApp.main(ItemApp.scala)
I don't know the root cause. How can I fix this?
In my configuration, this happens when I attempt to add a consumer to the group that uses a different partition assignment strategy from the existing members.
For example:
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
mixed with or defaulted to:
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor
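For illustration, here is a minimal Java consumer sketch (broker address, group id, and topic name are hypothetical) that pins the strategy explicitly, so every member of the group advertises the same protocol:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PinnedStrategyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "events-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Every member of the consumer group must support a common assignment strategy.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.RangeAssignor");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("events"));
        consumer.close();
    }
}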
As pointed out by @John Cairns and @Iraj Hedyati, check the assignment strategy assigned to the consumer group. Different clients create consumer groups with different default strategies. For example:
When I used the Kafka command-line client (Java), it created a consumer group using the 'range' strategy:
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group aeprocessor5 --state
GROUP COORDINATOR (ID) ASSIGNMENT-STRATEGY STATE #MEMBERS
aeprocessor5 172.16.1.11:9092 (1003) range Stable 1
Whereas when I created the consumer group with a Go client using the Sarama library, it used the 'roundrobin' strategy:
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group aeprocessor5 --state
GROUP COORDINATOR (ID) ASSIGNMENT-STRATEGY STATE #MEMBERS
aeprocessor5 172.16.1.11:9092 (1003) roundrobin Stable 1
So, if the group already exists and has a different strategy, an InconsistentGroupProtocolException is reported.
For me, it was because I was accidentally using the same consumer group to subscribe to two different topics.
I'm trying to create an application-level error handler for failures during processing inside a Spring Cloud Stream (SCS) application that uses Kafka as the message broker. I know that SCS already provides DLQ functionality, but in my case I want to wrap failed messages in a custom wrapper type (providing the failure context: source, cause, etc.).
In https://github.com/qabbasi/Spring-Cloud-Stream-DLQ-Error-Handling you can see two approaches for this scenario: one uses SCS and the other uses Spring Integration directly. (Both are currently not working.)
According to the current reference (https://docs.spring.io/spring-cloud-stream/docs/current/reference/htmlsingle/#_producing_and_consuming_messages), SCS allows publishing error messages received from the Spring Integration error channel, but unfortunately this is not the case, at least for me, although the application logs the following on startup:
o.s.i.channel.PublishSubscribeChannel : Channel 'application.errorChannel' has 2 subscriber(s).
You shouldn't use @StreamListener("errorChannel") - that consumes from a binder destination; to capture messages sent to the errorChannel, use @ServiceActivator(inputChannel = "errorChannel").
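For example, a minimal sketch of such a handler (class name and logging are just illustrative); it receives the ErrorMessage from the global errorChannel and can wrap it before forwarding to a custom destination:
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.stereotype.Component;

@Component
public class ErrorChannelHandler {

    // Called for every message sent to the global Spring Integration errorChannel.
    @ServiceActivator(inputChannel = "errorChannel")
    public void handle(ErrorMessage errorMessage) {
        Throwable cause = errorMessage.getPayload();
        Message<?> failed = (cause instanceof MessagingException)
                ? ((MessagingException) cause).getFailedMessage()
                : null;
        // Build the custom wrapper (source, cause, ...) here and publish it to the DLQ destination.
        System.err.println("Failed message: " + failed + ", cause: " + cause);
    }
}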
EDIT
There were several problems with your app...
The new error handling code was added in version 1.3.
autoCommitOnError is a Kafka binder property.
You needed an @EnableBinding(CustomDlqMessageChannel.class).
You don't really need @EnableIntegration - Boot does that for you.
See my commit here.
and...
$ kafka-console-producer --broker-list localhost:9092 --topic testIn
>foo
and...
$ kafka-console-consumer --bootstrap-server localhost:9092 --topic customDlqTopic --from-beginning
?
contentType>"application/x-java-object;type=com.example.demo.ErrorWrapper"
I'm trying to set up Spark Streaming to get messages from a Kafka topic. I'm getting the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o30.createDirectStream.
: org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
org.apache.spark.SparkException: Couldn't find leader offsets for Set([test-topic,0])
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
at scala.util.Either.fold(Either.scala:97)
Here is the code I'm executing (pyspark):
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
directKafkaStream = KafkaUtils.createDirectStream(ssc, ["test-topic"], {"metadata.broker.list": "host.domain:9092"})
ssc.start()
ssc.awaitTermination()
There were a couple of similar posts with the same error. In all those cases, the cause was an empty Kafka topic. But there are messages in my "test-topic"; I can get them out with
kafka-console-consumer --zookeeper host.domain:2181 --topic test-topic --from-beginning --max-messages 100
Does anyone know what might be the problem?
I'm using:
Spark 1.5.2 (apache)
Kafka 0.8.2.0+kafka1.3.0 (CDH 5.4.7)
You need to check two things:
1) Check whether the topic and partition exist; in your case, the topic is test-topic and the partition is 0.
2) Based on your code, you are trying to consume messages from offset 0, and it is possible that no message is available at offset 0. Check what your earliest offset is and try consuming from there.
Below is the command to check the earliest offset (--time -2 returns the earliest offset; --time -1 would return the latest):
sh kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list "your broker list" --topic "topic name" --time -2
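For example, with the broker and topic from the question, the output looks roughly like this (offsets are hypothetical; the format is topic:partition:offset):
sh kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list host.domain:9092 --topic test-topic --time -2
test-topic:0:0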
1) You have to make sure that you have already created the topic test-topic.
Run the following command to check the list of topics (a describe example follows after this list):
kafka-topics.sh --list --zookeeper [host or ip of zookeeper]:[port]
2) After checking your topic, configure your Kafka broker's listener in the "Socket Server Settings" section of its server configuration:
listeners=PLAINTEXT://[host or ip of Kafka]:[port]
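As a follow-up to step 1, you can also describe the topic to confirm that each partition has a leader, which relates directly to the "Couldn't find leader offsets" message (placeholders as above; output depends on your cluster):
kafka-topics.sh --describe --zookeeper [host or ip of zookeeper]:[port] --topic test-topic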
If you define short host names in /etc/hosts and use them in your Kafka servers' configurations, you should change those names to IPs, or register the same short host names in your local PC's or client's /etc/hosts.
The error occurs because the Spark Streaming library can't resolve the short hostname on the PC or client.
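For example, a hypothetical /etc/hosts entry on the client machine, mapping the short host name used in the broker configuration to its IP:
# hypothetical short host name used in the Kafka server's listeners config
192.168.1.10    kafka01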
Another option is to force creation of the topic if it doesn't exist. You can do this by setting the property "auto.create.topics.enable" to "true" in the kafkaParams map, like this:
val kafkaParams = Map[String, String](
  "bootstrap.servers" -> kafkaHost,
  "group.id" -> kafkaGroup,
  "auto.create.topics.enable" -> "true")
Using Scala 2.11 and Kafka 0.10 versions.
One of the reasons for this type of error, where the leader cannot be found for the specified topic, is a problem with one's Kafka server configs.
Open your Kafka server config:
vim ./kafka/kafka-<your-version>/config/server.properties
In the "Socket Server Settings" section , provide IP for your host if its missing :
listeners=PLAINTEXT://{host-ip}:{host-port}
I was using the Kafka setup provided with the MapR sandbox and was trying to access Kafka via Spark code. I was getting the same error while accessing my Kafka cluster, since my configuration was missing the IP.
I'm learning Spark and Kafka and came across the kafka-spark-consumer project, which seems to consume messages from Kafka efficiently. This project requires configuring a few Kafka and ZooKeeper properties, and that's where I'm struggling. What does the property zookeeper.broker.path mean? Sorry if it's a basic question.
I have configured a single-node Kafka with the following properties,
broker.id=1
port=9093
log.dir=/tmp/kafka-logs-1
and ZooKeeper as,
zookeeper.connect=localhost:2181/brokers
zookeeper.connection.timeout.ms=6000
If I try to configure zookeeper.broker.path with /brokers, I get the following exception from the consumer:
Exception in thread "main" java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/<name>/partitions
at consumer.kafka.ReceiverLauncher.getNumPartitions(ReceiverLauncher.java:217)
at consumer.kafka.ReceiverLauncher.createStream(ReceiverLauncher.java:79)
at consumer.kafka.ReceiverLauncher.launch(ReceiverLauncher.java:51)
at com.ibm.spark.streaming.KafkaConsumer.run(KafkaConsumer.java:78)
at com.ibm.spark.streaming.KafkaConsumer.start(KafkaConsumer.java:43)
at com.ibm.spark.streaming.KafkaConsumer.main(KafkaConsumer.java:103)
Can you help me understand what the ZooKeeper broker path is here, and how I can configure it?
EDIT
The above error was caused by a non-existent topic; the moment I created the topic, the error went away.
As answered by user007, /brokers is the default node that Kafka creates in ZooKeeper.
There is no need for '/brokers' in the zookeeper.connect property. It should be:
zookeeper.connect=localhost:2181
I am not familiar with the "kafka-spark-consumer" project you mentioned, but usually /brokers is the default node Kafka creates in ZooKeeper. I haven't seen any library asking the user to configure it.
/brokers is the znode path under which Kafka stores metadata such as topics.
Go to the Kafka bin directory, then invoke the ZooKeeper shell: ./zookeeper-shell.sh localhost
Then run ls /brokers. You should be able to see topics and other child nodes created there.
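For example (output is illustrative; the exact child nodes depend on your Kafka version):
./zookeeper-shell.sh localhost:2181
ls /brokers
[ids, topics, seqid]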