Kafka Spark Streaming ingestion for multiple topics - apache-spark

We are currently ingesting Kafka messages into HDFS using Spark Streaming. So far we spawn a whole Spark job for each topic.
Since messages are produced pretty rarely for some topics (average of 1 per day), we're thinking about organising the ingestion in pools.
The idea is to avoid creating a whole container (and related resources) for these "infrequent" topics. Since Spark Streaming accepts a list of topics as input, we're thinking about using this feature to have a single job consume all of them.
Do you think this is a good strategy? We also considered batch ingestion, but we want to keep the real-time behavior, so we excluded that option. Do you have any tips or suggestions?
Does Spark Streaming handle multiple topics as a source well in the event of failures, in terms of offset consistency and so on?
Thanks!

I think Spark should be able to handle multiple topics fine, as it has supported this for a long time. And yes, Kafka Connect is not a Confluent-only API: Confluent does provide connectors for its platform, but you can use Connect without it. Apache Kafka also has documentation for the Connect API.
It is a little more difficult with the Apache version of Kafka, but you can use it.
https://kafka.apache.org/documentation/#connectapi
Also, if you're opting for multiple Kafka topics in a single Spark Streaming job, you may need to think about avoiding small files, since your message frequency is very low.
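For illustration, a minimal Structured Streaming sketch of that single-job idea (broker address, topic names and HDFS paths are placeholders, and it assumes the spark-sql-kafka-0-10 package is on the classpath); a long trigger interval is one simple way to keep the low-volume topics from producing lots of small files:

```scala
// One streaming query subscribed to several low-volume topics, landing
// everything under a single HDFS path partitioned by topic.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("multi-topic-ingest").getOrCreate()

val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")   // placeholder address
  .option("subscribe", "topic_a,topic_b,topic_c")      // one consumer, many topics
  .option("startingOffsets", "latest")
  .load()

raw.selectExpr("topic", "CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value", "timestamp")
  .writeStream
  .format("parquet")
  .partitionBy("topic")                                // one directory per topic
  .option("path", "hdfs:///data/kafka_ingest")
  .option("checkpointLocation", "hdfs:///checkpoints/kafka_ingest")
  .trigger(Trigger.ProcessingTime("10 minutes"))       // fewer, larger files
  .start()
  .awaitTermination()
```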

Related

What is the best way to consume the same topic from many different kafka brokers with spark structured streaming?

I have a situation where my load is distributed across a few data centers (DCs); each data center has its own Kafka broker and data processors that process the data only for that data center.
So, I'll have the brokers broker1-dc1, broker1-dc2, ..., broker1-dcn, and all brokers will have the same topics, e.g. DATA_TOPIC.
What I want is to consume the topic DATA_TOPIC from all my different brokers and persist this data in a single data lake table. I am doing it with Structured Streaming, but that isn't a requirement.
I don't have much experience with Spark, and what I want to know is the best way to do this. I'm considering two options:
Have different Spark jobs, where each one consumes the data from a different data center and has its own checkpoint location;
Have a single job with a consumer (Kafka readStream) per data center, and do a union of all the consumers.
Which of these options is better, or is there an even better option?
I don't know if this helps, but I'm planning to use an AWS architecture with EMR, S3, Glue, and Delta Lake or Iceberg as the table format.
Thanks
A Kafka client can only use one bootstrap.servers value at a time, so the plan would be to define N streaming DataFrames within one job; that seems like a poor design choice, since one failing stream ideally shouldn't stop your whole application.
Instead, I'd suggest looking into using MirrorMaker2 to consolidate topics into one Kafka cluster that you'll run processing against, which should result in the same effect as the union.
Your first option is somewhat similar, but it's a tradeoff on whether you want to manage N Spark applications along with their checkpoints, or N Kafka Connect processes that serve a single purpose and can be run in one Connect cluster.
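As a rough illustration of the MirrorMaker2 route (everything here is hypothetical: broker address, cluster aliases, bucket paths, and the Delta table format), MM2's default replication policy prefixes mirrored topics with the source cluster alias, so one Structured Streaming query on the aggregate cluster can pick up every copy with a subscribe pattern:

```scala
// Single query over all mirrored copies of DATA_TOPIC on the aggregate cluster.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("dc-consolidated-ingest").getOrCreate()

val allDcs = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "aggregate-broker:9092")   // placeholder
  .option("subscribePattern", "dc[0-9]+\\.DATA_TOPIC")          // dc1.DATA_TOPIC, dc2.DATA_TOPIC, ...
  .load()

allDcs.selectExpr("topic AS source_topic", "CAST(value AS STRING) AS value", "timestamp")
  .writeStream
  .format("delta")                                              // or "iceberg", per your table format
  .option("path", "s3://datalake/data_topic")                   // hypothetical bucket
  .option("checkpointLocation", "s3://datalake/_checkpoints/data_topic")
  .start()
  .awaitTermination()
```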

When is a Kafka connector preferred over a Spark streaming solution?

With Spark Streaming, I can read Kafka messages and write data to different kinds of tables, for example HBase, Hive and Kudu. But this can also be done by using Kafka connectors for these tables. My question is: in which situations should I prefer connectors over the Spark Streaming solution?
Also, how fault tolerant is the Kafka connector solution? We know that with Spark Streaming we can use checkpoints and executors running on multiple nodes for fault-tolerant execution, but how is fault tolerance (if possible) achieved with Kafka connectors? By running the connector on multiple nodes?
So, generally, there should be no big difference in functionality when it comes to simply reading records from Kafka and sending them into other services.
Kafka Connect is probably easier when it comes to standard tasks, since it offers various connectors out of the box and so will quite probably reduce the need to write any code. So, if you just want to copy a bunch of records from Kafka to HDFS or Hive, it will probably be easier and faster to do with Kafka Connect.
With this in mind, Spark Streaming takes over when you need to do things that are not standard, i.e. if you want to perform aggregations or calculations over records and write them to Hive, then you should probably go for Spark Streaming from the beginning.
Generally, I found doing non-standard things with Kafka Connect, like splitting one message into multiple ones (assuming it was, for example, a JSON array), to be quite troublesome; it often requires much more work than it would in Spark.
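For instance, a rough Structured Streaming sketch of that JSON-array case (topic name, element schema and broker address are invented for illustration); the reshaping is a couple of lines with from_json and explode:

```scala
// Explode each Kafka message containing a JSON array into one row per element.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("split-json-array").getOrCreate()
import spark.implicits._

// Hypothetical element schema: [{"id": "...", "payload": "..."}, ...]
val arraySchema = ArrayType(new StructType()
  .add("id", StringType)
  .add("payload", StringType))

val exploded = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")   // placeholder
  .option("subscribe", "raw-events")                   // hypothetical topic
  .load()
  .select(from_json($"value".cast("string"), arraySchema).as("records"))
  .select(explode($"records").as("record"))
  .select($"record.id", $"record.payload")

exploded.writeStream.format("console").start().awaitTermination()
```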
As for Kafka Connect fault tolerance: as described in the docs, this is achieved by running multiple distributed workers with the same group.id; the workers redistribute tasks and connectors if one of them fails.
in which situations should I prefer connectors over the Spark Streaming solution?
"It Depends" :-)
Kafka Connect is part of Apache Kafka, and so has tighter integration with Apache Kafka in terms of security, delivery semantics, etc.
If you don't want to write any code, Kafka Connect is easier because it's just JSON to configure and run
If you're not using Spark already, Kafka Connect is arguably more straightforward to deploy (run the JVM, pass in the configuration)
As a framework, Kafka Connect is more transferable since the concepts are the same; you just plug in the appropriate connector for the technology that you want to integrate with each time
Kafka Connect handles all the tricky stuff for you like schemas, offsets, restarts, scaleout, etc etc etc
Kafka Connect supports Single Message Transform for making changes to data as it passes through the pipeline (masking fields, dropping fields, changing data types, etc etc). For more advanced processing you would use something like Kafka Streams or ksqlDB.
If you are using Spark, and it's working just fine, then it's not necessarily prudent to rip it up to use Kafka Connect instead :)
Also how tolerant is the Kafka connector solution? … how is fault tolerance (if possible) achieved with Kafka connectors?
Kafka Connect can be run in distributed mode, in which you have one or more worker processes across nodes. If a worker fails, Kafka Connect rebalances the tasks across the remaining ones. If you add a worker in, Kafka Connect will rebalance to ensure workload distribution. This was drastically improved in Apache Kafka 2.3 (KIP-415)
Kafka Connect uses the Kafka consumer API and tracks offsets of records delivered to a target system in Kafka itself. If the task or worker fails you can be sure that it will restart from the correct point. Many connectors support exactly-once delivery too (e.g. HDFS, Elasticsearch, etc)
If you want to learn more about Kafka Connect see the docs here and my talk here. See a list of connectors here, and tutorial videos here.
Disclaimer: I work for Confluent and am a big fan of Kafka Connect :-)

Why do we need Kafka to feed data to Apache Spark

I am reading about Spark and its real-time stream processing. I am confused: if Spark can itself read a stream from a source such as Twitter or a file, then why do we need Kafka to feed data to Spark? It would be great if someone could explain what advantage we get if we use Spark with Kafka. Thank you.
Kafka offers a decoupling and buffering of your input stream.
Take Twitter data for example: as far as I know, you connect to the Twitter API and get a constant stream of tweets that match criteria you specified. If you now shut down your Spark jobs for an hour to do some maintenance on your servers or roll out a new version, then you will miss the tweets from that hour.
Now imagine you put Kafka in front of your Spark jobs and have a very simple ingest thread that does nothing but connect to the API and write tweets to Kafka, where the Spark jobs retrieve them from. Since Kafka persists everything to disk, you can shut down your processing jobs, perform maintenance, and when they are restarted, they will retrieve all data from the time they were offline.
Also, if you change your processing jobs in a significant way and want to reprocess data from the last week, you can easily do that if you have Kafka in your chain (provided you set your retention time high enough): you simply roll out your new jobs and change the offsets in Kafka so that your jobs reread the old data, and once that is done your data store is up to date with your new processing model.
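As a rough sketch of that replay step with Structured Streaming's Kafka source (topic name and broker are placeholders; it assumes a brand-new query and checkpoint directory, since startingOffsets only applies when there is no existing checkpoint):

```scala
// Re-read whatever Kafka has retained by pointing a fresh query at older offsets.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("replay-from-kafka").getOrCreate()

val replay = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")   // placeholder
  .option("subscribe", "tweets")                       // hypothetical topic
  // "earliest" replays everything still within the retention window; a JSON
  // map of per-partition offsets such as {"tweets":{"0":12345}} also works.
  .option("startingOffsets", "earliest")
  .load()

replay.selectExpr("CAST(value AS STRING) AS tweet")
  .writeStream
  .format("console")
  .start()
  .awaitTermination()
```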
There is a good article on the general principle written by Jay Kreps, one of the people behind Kafka; give that a read if you want to know more.
Kafka decouples everything; consumers and producers need not know about each other.
Kafka provides a topic-based pub-sub model.
From multiple sources you can write data (messages) to any topic in Kafka, and a consumer (Spark or anything else) can consume data based on the topic.
Multiple consumers can consume data from the same topic, as Kafka stores data for a period of time.
But in the end, it depends on your use case whether you really need a broker.

Spark Streaming and Kafka: one cluster or several standalone boxes?

I am about to make a decision about using the Spark Streaming Kafka integration.
I have a Kafka topic (which I can break into several topics) queuing several tens of thousands of messages per minute; my Spark Streaming application ingests the messages, applies transformations, and then updates a UI.
Knowing that all failures are handled and data are replicated in Kafka, what is the best option for implementing the Spark Streaming application in order to achieve the best possible performance and robustness:
One Kafka topic and one Spark cluster.
Several Kafka topics and several standalone Spark boxes (one machine with a standalone Spark cluster for each topic)
Several Kafka topics and one Spark cluster.
I am tempted to go for the second option, but I couldn't find people talking about such a solution.
An important element to consider in this case is the partitioning of the topic.
The parallelism level of your Kafka-Spark integration will be determined by the number of partitions of the topic. The direct Kafka model simplifies the consumption model by establishing a 1:1 mapping between the number of partitions of the topic and RDD partitions for the corresponding Spark job.
So, the recommended setup would be: one Kafka topic with n partitions (where n is tuned for your use case) and a Spark cluster with enough resources to process the data from those partitions in parallel.
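A small sketch of that 1:1 mapping with the direct model (assumes the spark-streaming-kafka-0-10 integration; broker address, topic and group id are placeholders):

```scala
// Direct-model DStream: each Kafka partition maps 1:1 to an RDD partition.
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "broker:9092",        // placeholder
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "ui-ingest",                   // hypothetical group id
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val ssc = new StreamingContext(new SparkConf().setAppName("kafka-parallelism"), Seconds(10))
val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

stream.foreachRDD { rdd =>
  // The RDD has exactly as many partitions as the topic, which is what
  // bounds how far the consumption can be parallelised.
  println(s"RDD partitions: ${rdd.getNumPartitions}")
}

ssc.start()
ssc.awaitTermination()
```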
Option #2 feels like trying to re-implement what Spark gives you out of the box: Spark gives you resilient distributed computing. Option #2 is trying to parallelize the payload over several machines and deal with failure by having independent executors. You get that with a single Spark cluster, with the benefit of improved resource usage and a single deployment.
Option 1 is straightforward, simple and probably more efficient. If your requirements are met, that's the one to go for (and honor the KISS principle).

Streaming data from Kafka into Cassandra in real time

What's the best way to write data from Kafka into Cassandra? I would expect it to be a solved problem, but there doesn't seem to be a standard adapter.
A lot of people seem to be using Storm to read from Kafka and then write to Cassandra, but Storm seems like overkill for simple ETL operations.
We are heavily using Kafka and Cassandra through Storm
We rely on Storm because:
there are usually a lot of distributed (inter-node) processing steps before the result of the original message hits Cassandra (Storm bolt topologies)
We don't need to maintain the Kafka consumer state (offsets) ourselves - the Storm-Kafka connector does it for us once all products of the original message are acked within Storm
Message processing is distributed across nodes with Storm natively
Otherwise, if it is a very simple case, you can effectively read messages from Kafka and write the results to Cassandra without the help of Storm.
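For such a simple copy, one hedged sketch is Spark Structured Streaming plus the DataStax spark-cassandra-connector (keyspace, table, topic and host names are all made up here); a plain kafka-clients consumer with the Cassandra driver would also do for very small volumes:

```scala
// Copy Kafka records into a Cassandra table, one micro-batch at a time.
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder()
  .appName("kafka-to-cassandra")
  .config("spark.cassandra.connection.host", "cassandra-host")   // placeholder
  .getOrCreate()

val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")              // placeholder
  .option("subscribe", "events")                                 // hypothetical topic
  .load()
  .selectExpr("CAST(key AS STRING) AS id", "CAST(value AS STRING) AS payload")

// Write each micro-batch through the DataStax connector's DataFrame API.
val writeBatch: (DataFrame, Long) => Unit = (batch, _) =>
  batch.write
    .format("org.apache.spark.sql.cassandra")
    .option("keyspace", "demo")                                  // hypothetical keyspace
    .option("table", "events")                                   // hypothetical table
    .mode("append")
    .save()

events.writeStream
  .option("checkpointLocation", "/checkpoints/kafka_to_cassandra")
  .foreachBatch(writeBatch)
  .start()
  .awaitTermination()
```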
A recent release of Kafka came with the Connect concept, which supports sources and sinks as first-class concepts in the design. With this, you do not need any streaming framework for moving data in/out of Kafka. Here is the Cassandra connector for Kafka that you can use: https://github.com/tuplejump/kafka-connect-cassandra
