Worker Queue option in Kafka - multithreading

We are developing an application which will receive time series sensor data as byte arrays from a set of devices via UDP. This data needs to be parsed and stored in a Cassandra database.
We were using RabbitMQ as the message broker, with Work Queue based consumers parsing the data and pushing it into Cassandra. Because of increasing traffic, we are concerned about RabbitMQ performance and are planning to move to Kafka. Our understanding is that the same pattern can be implemented using a consumer group in Kafka. Is our understanding correct?

With Apache Kafka, you can scale a topic relatively easily. To process more data in the same amount of time, you'll need to:
1. Have multiple consumers in the same consumer group, so that several messages are consumed at the same time. You are limited to the number of partitions of the topic.
2. Increase the number of partitions for the topic, and increase the number of consumers accordingly (a sketch of adding partitions follows this list).
3. Increase the number of brokers, if you still need to process more data.
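For step 2, here is a minimal sketch of adding partitions to an existing topic with Kafka's AdminClient; the topic name and target count are placeholders:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow the (placeholder) topic "sensor-data" to 8 partitions.
            // Partition counts can only be increased, never decreased.
            admin.createPartitions(Map.of("sensor-data", NewPartitions.increaseTo(8)))
                 .all()
                 .get();
        }
    }
}
```

Note that existing records stay where they are; only new records are spread over the added partitions, and key-to-partition mappings change.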
I would approach scalability in the order described above, but Kafka can handle a lot. In a setup with 2 brokers, 4 partitions per topic and 2 consumers (each consumer using one thread per partition), where each consumer decodes JSON to a Java object, enriches it and stores it in Cassandra, it can handle 30k messages/s (writes are batched in groups of 200 insert statements).
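As an illustration, here is a minimal Java sketch of such a consumer-group worker; the topic name, group id and the Cassandra batching step are placeholder assumptions, not details from the setup above:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SensorWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sensor-workers");   // all workers share one group id
        props.put("enable.auto.commit", "false");  // commit only after the batch is persisted
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sensor-data")); // placeholder topic name
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // Parse record.value() and buffer it into a batch of
                    // ~200 insert statements for Cassandra (not shown).
                }
                consumer.commitSync(); // commit offsets once the batch is written
            }
        }
    }
}
```

Starting several copies of this process gives you the worker-queue behaviour: Kafka spreads the topic's partitions across the running copies.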

Related

How do multiple Kafka consumers in the same consumer group read messages from one partition of a topic?

I would like to know how consumers in the same consumer group read messages from a topic which has only one partition.
For example, I have 3 consumers in one consumer group, and that group is polling messages from Topic A, which has a single partition. If 1000 messages arrive one by one in Topic A, how would they be delivered to the 3 consumers?
Would 3 messages be delivered to the 3 consumers in parallel, with the next message delivered once each consumer finishes processing? In other words, will they receive messages in parallel?
Or would a single consumer fetch all of those messages, since there is only one partition?
Please also suggest the best architecture approach for the above scenario.
Thanks,
I want to process messages in parallel from one topic with one partition across 4 consumers.
I am using Kafka with NodeJS microservices and the kafkajs package.
In your scenario, only one consumer of that consumer group will read the data, most probably the first one you started. I'm not 100% sure as I never tried it out, but I assume the additional consumers will just idle without workload.
This question is essentially the same as yours.
If you want to achieve parallelism across consumers, you cannot avoid having multiple partitions; that's the main purpose of the whole partitioning concept.
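The question uses kafkajs, but the assignment behaviour is the same for every client. Here is a Java sketch you can run as several identical processes against a 1-partition topic to observe that only one instance gets the partition; the topic and group names are made up:

```java
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignmentDemo {
    public static void main(String[] args) {
        // Start 4 copies of this worker with the same group.id against a
        // 1-partition topic: exactly one is assigned the partition, the
        // other three stay idle until a rebalance hands it over.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "topic-a-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("topic-a"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Prints an empty collection for 3 of the 4 instances.
                    System.out.println("Assigned: " + partitions);
                }
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("Revoked: " + partitions);
                }
            });
            while (true) {
                consumer.poll(Duration.ofMillis(500))
                        .forEach(r -> System.out.println(r.value()));
            }
        }
    }
}
```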

What is the correct strategy to consume from partitioned EventHub to avoid QuotaExceededException?

I am consuming from an Azure EventHub instance through the Java EventHubClient implementation. My strategy has been to create and persist a PartitionReceiver for each partition and call PartitionReceiver::receiveSync periodically.
The EventHub instance has since had its partition count increased to 30+, and the same logic is now throwing QuotaExceededException; it looks like we're hitting the maximum receiver limit for our consumer group.
This makes me think my strategy is wrong entirely. What is the standard way to continuously consume from all partitions of an EventHub instance without exceeding this quota?
Thanks!
The maximum number of receivers is 5, and the limit applies per partition per consumer group. You should check your code and identify why you are creating more than 5 receivers on at least one of the partitions.
By the way, you should implement your consumers with EventProcessorHost if you don't have a strong reason to stay on EventHubClient.
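As a rough sketch of that approach, assuming the legacy azure-eventhubs-eph package (host registration is omitted here because the EventProcessorHost constructor signatures vary across SDK versions):

```java
import com.microsoft.azure.eventhubs.EventData;
import com.microsoft.azure.eventprocessorhost.CloseReason;
import com.microsoft.azure.eventprocessorhost.IEventProcessor;
import com.microsoft.azure.eventprocessorhost.PartitionContext;

// The host creates one processor instance per partition it owns and
// coordinates ownership across your nodes through a lease/checkpoint store,
// so you never create receivers by hand (and never hit the receiver quota).
public class SensorEventProcessor implements IEventProcessor {
    @Override
    public void onOpen(PartitionContext context) {
        System.out.println("Opened partition " + context.getPartitionId());
    }

    @Override
    public void onClose(PartitionContext context, CloseReason reason) {
        System.out.println("Closed partition " + context.getPartitionId() + ": " + reason);
    }

    @Override
    public void onEvents(PartitionContext context, Iterable<EventData> events) throws Exception {
        for (EventData event : events) {
            // process event.getBytes() ...
        }
        context.checkpoint(); // persist progress for this partition
    }

    @Override
    public void onError(PartitionContext context, Throwable error) {
        System.err.println("Error on partition " + context.getPartitionId() + ": " + error);
    }
}
```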

Understanding Azure Event Hubs partitioned consumer pattern

Azure Event Hubs uses the partitioned consumer pattern described in the docs.
I have some problems understanding the consumer side of this model when it comes to a real-world scenario.
So let's say I have 1000 messages sent to an event hub with 4 partitions, without specifying any partition id. The messages will then be spread over all partitions using the round-robin method.
Now I want two applications distributing the messages to two different databases. My questions are:
Let's say for the first application, I want to store all messages in Database 1. This means, for maximum speed, my consumer application needs 4 threads (consumers), each listening to one partition of the event hub, right? Each of them also has to store its own offset for the partition it's reading (checkpoint).
Let's say my second application wants to filter the messages and only store a subset of them in Database 2. There I also need 4 consumers, since I don't know which message goes to which partition, right?
Also, the two applications need two separate consumer groups, but why? Is the filtering of the messages defined in the consumer group? I don't really understand why I need this, since the applications' consumers store the partition checkpoints themselves, and I can do the filtering within the applications.
I know there is the EventProcessorHost class, but I want to understand the concept of Event Hubs at a lower level.
Let's say for the first application, I want to store all messages in Database 1. This means, for maximum speed, my consumer application needs 4 threads (consumers), each listening to one partition of the event hub, right? Each of them also has to store its own offset for the partition it's reading (checkpoint).
Correct, you should have a process per provisioned partition. So, if you have 4 partitions you should have 4 processes, each processing the messages of a specific partition. If you process the messages using an EventProcessorHost, it will take care of spinning up the processes for you.
Let's say my second application wants to filter the messages and only store a subset of them in Database 2. There I also need 4 consumers, since I don't know which message goes to which partition, right?
What do you mean by a consumer? You need another 4 processes to process the messages, but they should be configured to read using a different consumer group. Otherwise they will compete with the processes of application 1.
Also, the two applications need two separate consumer groups, but why? Is the filtering of the messages defined in the consumer group? I don't really understand why I need this, since the applications' consumers store the partition checkpoints themselves, and I can do the filtering within the applications.
Let us define a consumer group:
Consumer groups enable multiple consuming applications to each have a separate view of the incoming message stream, and to read the stream independently at their own pace with their own offsets.
So yes, you need 2 different consumer groups.
Each consumer group will get all messages sent to the event hub partitions. Each consumer group tracks its own progress in the stream of messages. That is why you need two for your scenario.
Say you define an additional consumer group called "App2-Consumer-Group"; its reader processes will receive all messages but should take no action for messages they are not interested in.
If you did not create an additional consumer group, the reader processes of the default consumer group would process the messages for the first application and mark them as processed using the check-pointing mechanism. The reader processes of the second application wouldn't get any messages, since those would already be marked as processed. (In real life, when using one consumer group, some messages might be picked up by the reader processes of the first application and some by the reader processes of the second, as the processes compete for a lock on a specific partition.)
This is how consumer groups track their own progress in the stream of messages, and hence why you need two of them when you have two different processing logics for two different applications.
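To illustrate the independent cursors at a low level, here is a sketch using the newer azure-messaging-eventhubs Java SDK; the hub, group and filter names are placeholders:

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;
import java.time.Duration;

public class TwoGroupsDemo {
    public static void main(String[] args) {
        String connStr = "<event-hub-connection-string>";

        // Each consumer group gets an independent view of the same partitions.
        EventHubConsumerClient app1 = new EventHubClientBuilder()
                .connectionString(connStr, "my-hub")
                .consumerGroup("app1-group")
                .buildConsumerClient();
        EventHubConsumerClient app2 = new EventHubClientBuilder()
                .connectionString(connStr, "my-hub")
                .consumerGroup("app2-group")
                .buildConsumerClient();

        // Both read partition "0" from the start; neither affects the
        // other's position in the stream.
        for (PartitionEvent e : app1.receiveFromPartition("0", 10, EventPosition.earliest(), Duration.ofSeconds(5))) {
            System.out.println("app1 stores: " + e.getData().getBodyAsString());
        }
        for (PartitionEvent e : app2.receiveFromPartition("0", 10, EventPosition.earliest(), Duration.ofSeconds(5))) {
            String body = e.getData().getBodyAsString();
            if (body.contains("keep")) {   // app2 does its filtering in its own code
                System.out.println("app2 stores: " + body);
            }
        }
        app1.close();
        app2.close();
    }
}
```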

Multiple Spark Kafka consumers with same groupId

I am trying to have multiple consumers for multiple partitions of a Kafka topic with the same groupId, which will help me scale the consumption of messages.
According to the Kafka documentation:
If all the consumer instances have the same consumer group, then the records will effectively be load-balanced over the consumer instances.
Having consumers as part of the same consumer group means implementing the “competing consumers” pattern, in which the messages from topic partitions are spread across the members of the group. Each consumer receives messages from one or more partitions (“automatically” assigned to it), and the same messages won't be received by the other consumers (assigned to different partitions). In this way, we can scale the number of consumers up to the number of partitions (having one consumer read only one partition).
But when I deploy multiple Spark applications with the same groupId, I get the following exception:
java.lang.IllegalStateException: Previously tracked partitions [cpq.cluster-1] been revoked by Kafka because of consumer rebalance. This is mostly due to another stream with same group id joined, please check if there're different streaming application misconfigure to use same group id. Fundamentally different stream should use different group id
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.latestOffsets(DirectKafkaInputDStream.scala:200)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:228)
According to the exception, I cannot have multiple consumers with the same groupId. Hence I am unable to load-balance across my Spark applications; I can only assign 1 consumer per topic partition, and this contradicts the Kafka documentation.
How can I have multiple consumers with the same consumer groupId to achieve load balancing?
Here, you don't need to run multiple Spark applications to consume from multiple partitions; rather, a single Spark application will handle this internally. Spark Streaming uses a 1:1 parallelism between Kafka partitions and Spark partitions, so one application already reads every partition. If you run multiple Spark applications with the same group id, you will get this error. Please refer to this question for more details: 2 spark stream job with same consumer group id
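A minimal sketch of that single application, following the pattern from the Spark Streaming + Kafka integration docs; the topic name is inferred from the partition named in the exception, and the group id and servers are placeholders:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class SingleAppAllPartitions {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("cpq-consumer");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "cpq-group"); // one group id, one application

        Collection<String> topics = Arrays.asList("cpq.cluster");

        // One direct stream: Spark creates one Spark partition per Kafka
        // partition, so this single application already reads all partitions
        // in parallel across its executors.
        JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        stream.foreachRDD(rdd -> rdd.foreach(record -> {
            // process record.value() on the executor that owns this partition
        }));

        jssc.start();
        jssc.awaitTermination();
    }
}
```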

What is the best practice for consuming messages from multiple Kafka topics?

I need to consume messages from different Kafka topics.
Should I create a different consumer instance per topic and then start a new processing thread per partition,
or
should I subscribe to all topics from a single consumer instance and then start different processing threads?
Thanks & regards,
Megha
The only rule is that you have to account for what Kafka does and does not guarantee:
Kafka only guarantees message order for a single topic/partition. (Edit: this also means you can get messages out of order if your single-topic Consumer switches partitions for some reason.)
When you subscribe to multiple topics with a single Consumer, that Consumer is assigned a topic/partition pair for each requested topic.
That means the order of incoming messages for any one topic will be correct, but you cannot guarantee that ordering between topics will be chronological.
You also can't guarantee that you will get messages from any particular subscribed topic in any given period of time.
I recently had a bug because my application subscribed to many topics with a single Consumer. Each topic was a live feed of images, at one image per message. Since all the topics always had new images, each poll() was only returning images from the first topic to register.
If processing all messages is important, you'll need to be certain that each Consumer can process messages from all of its subscribed topics faster than the messages are created. If it can't, you'll either need more Consumers committing reads in the same group, or you'll have to accept that some messages may never be processed.
Obviously one Consumer per topic is the simplest, but it does add some overhead to run the additional Consumers. You'll have to determine whether that matters based on your needs.
The only way to correctly answer your question is to evaluate your application's specific requirements and capabilities, and build something that works within those and within Kafka's limitations.
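For the single-Consumer option, a minimal Java sketch that subscribes to several topics and routes records by record.topic(); the topic and group names are placeholders:

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MultiTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "multi-topic-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("topic-a", "topic-b"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                // One poll can return records from any subset of the
                // subscribed topics; order holds only within a partition.
                for (ConsumerRecord<String, String> record : records) {
                    switch (record.topic()) {
                        case "topic-a": /* handle feed A */ break;
                        case "topic-b": /* handle feed B */ break;
                    }
                }
            }
        }
    }
}
```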
This really depends on the logic of your application: does it need to see all messages together in one place, or not? Sometimes consumption from a single topic could be easier to implement in terms of the business logic of your application.
