How to manage auditing in a streaming application with Kafka and Spark SQL? - apache-spark

In our project we are considering using Kafka with Spark Streaming. For the PoC I am using Spark 2.4.1, Kafka, and Java 8.
I have some questions:
How to handle missing data during Kafka topic ingestion?
How to maintain auditing for this? What is the big data industry practice here?
What recovery mechanism should be followed? Any links or videos on this?

How to handle missing data during Kafka topic ingestion?
I don't fully understand this. Does it mean data missing from the Kafka topic itself, or data missed between the Kafka topic and Spark Streaming?
The first case can't be handled unless you are the producer of the data and can fix it at the source. The second case can be handled as long as the data is still available in the Kafka topic, which is governed by the retention period configured on the Kafka cluster.
How to maintain auditing for this?
There are a couple of things you could do. You can let Kafka manage the offsets by committing them, or you can write the offsets to another store such as HBase and read back from there the offset up to which you have successfully processed messages. With the newer Structured Streaming API you do not need to manage such low-level details; Spark keeps track of them in the checkpoint directory.
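For example, with the DStream-based integration you can read the offset ranges of each batch and either record them in an external store (HBase, a database, etc.) for auditing or commit them back to Kafka. A minimal sketch, assuming stream is the direct stream created via KafkaUtils:

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

stream.foreachRDD { rdd =>
  // Offsets covered by this batch, one range per topic-partition
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // ... process the batch, then record the ranges for auditing (e.g. write to HBase) ...
  offsetRanges.foreach(r => println(s"${r.topic}-${r.partition}: ${r.fromOffset} -> ${r.untilOffset}"))

  // or let Kafka track progress by committing the offsets back asynchronously
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}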
What recovery mechanism should be followed?
It depends on which option you choose. If you have the offset numbers in HBase, you can read them back and use the KafkaUtils class to fetch messages starting from those offsets:
import org.apache.spark.streaming.kafka010._

// fromOffsets: Map[TopicPartition, Long] read back from HBase
KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Assign[String, String](fromOffsets.keys.toList, kafkaParams, fromOffsets)
)
More details on
https://spark.apache.org/docs/2.2.0/streaming-kafka-0-10-integration.html

Related

Kafka Spark Streaming ingestion for multiple topics

We are currently ingesting Kafka messages into HDFS using Spark Streaming. So far we spawn a whole Spark job for each topic.
Since messages are produced pretty rarely for some topics (average of 1 per day), we're thinking about organising the ingestion in pools.
The idea is to avoid creating a whole container (and related resources) for these "infrequent" topics. In fact Spark Streaming accepts a list of topics as input, so we're thinking about using this feature to have a single job consuming all of them.
Do you think the approach described is a good strategy? We also thought about batch ingestion, but we'd like to keep the real-time behaviour, so we excluded that option. Do you have any tips or suggestions?
Does Spark Streaming handle multiple topics as a source well in case of failures, in terms of offset consistency etc.?
Thanks!
I think Spark should be able to handle multiple topics fine, as it has supported this for a long time. And yes, Kafka Connect is not a Confluent-only API; Confluent provides connectors for its platform, but Kafka Connect is part of Apache Kafka itself, so you can use it too. Apache Kafka also has documentation for the Connect API.
It is a little more work with the Apache distribution of Kafka, but you can use it.
https://kafka.apache.org/documentation/#connectapi
Also, if you opt for multiple Kafka topics in a single Spark Streaming job (see the sketch below), you may need to think about avoiding small output files, since your message frequency seems very low.
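A minimal sketch of the single-job approach with the 0-10 direct stream (the topic names are placeholders, and ssc and kafkaParams are assumed to be defined as usual):

import org.apache.spark.streaming.kafka010._

// One streaming job subscribed to several low-traffic topics at once
val topics = Seq("topic-a", "topic-b", "topic-c")
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topics, kafkaParams))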

Exactly once with Kafka + Spark Streaming

Is it possible to achieve exactly-once semantics when handling a Kafka topic in a Spark Streaming application?
To achieve exactly-once you need the following:
Exactly-once from the Kafka producer to the Kafka broker. This is achieved by Kafka 0.11's idempotent producer. But is the Kafka 0.11 to Spark Streaming integration production-ready? I found this JIRA ticket with lots of bugs.
Exactly-once from the Kafka broker to the Spark Streaming app. Can it be achieved? Because of Spark Streaming app failures, the application can read some data twice, right? As a solution, can I persist the computation results and the last handled event UUID to Redis transactionally (see the sketch after this list)?
Exactly-once when transforming data in the Spark Streaming app. This is an out-of-the-box property of RDDs.
Exactly-once when persisting results. This is solved by the 2nd point, transactionally persisting the last event UUID to Redis.
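As an illustration of points 2 and 4 (a sketch only, not a confirmed pattern; it assumes the Jedis client, and the key names are placeholders): writing the computed result and the last handled event UUID in one Redis MULTI/EXEC block makes the two updates atomic, so after a restart the app can resume from the stored position without double-counting.

import redis.clients.jedis.Jedis

// Hypothetical helper: persist the result and the last processed event atomically
def persistAtomically(jedis: Jedis, result: String, lastEventUuid: String): Unit = {
  val tx = jedis.multi()                         // start MULTI/EXEC block
  tx.set("app:latest-result", result)            // computation result
  tx.set("app:last-event-uuid", lastEventUuid)   // last handled event uuid
  tx.exec()                                      // both writes commit together or not at all
}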

Reading a Message from Kafka and writing to HDFS

I'm looking for the best way to read messages (a lot of messages, around 100B each day) from Kafka; after reading each message I need to manipulate the data and write it into HDFS.
If I need the best performance, what is the best way to read messages from Kafka and write the files into HDFS?
Which programming language is best for that?
Do I need to consider using solutions like Spark for that?
You should use Spark Streaming for this (see here); it provides a simple correspondence between Kafka partitions and Spark partitions.
Or you can use Kafka Streams (see more). Kafka Streams is a client library for building applications and microservices where the input and output data are stored in Kafka clusters.
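A rough sketch of the Spark option with Structured Streaming (it assumes the spark-sql-kafka-0-10 package is on the classpath; the broker address, topic name, and HDFS paths are placeholders):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

// Read the topic, apply the per-message manipulation, and write Parquet files to HDFS
val messages = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "input-topic")
  .load()
  .selectExpr("CAST(value AS STRING) AS value")   // transform/manipulate here

messages.writeStream
  .format("parquet")
  .option("path", "hdfs:///data/input-topic")
  .option("checkpointLocation", "hdfs:///checkpoints/input-topic")
  .start()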
You can use Spark, Flink, NiFi, Streamsets... but Confluent provides Kafka Connect HDFS exactly for this purpose.
The Kafka Connect API is somewhat limited in transformations, so what most people do is write a Kafka Streams job to filter/enhance the data into a secondary topic, which is then written to HDFS.
Note: These options will write many files to HDFS (generally, one per Kafka topic partition)
Which programming language is best for that?
Each of the above uses Java, but you don't need to write any code yourself if you use NiFi, Streamsets, or Kafka Connect.

How to stream from Kafka to Cassandra and increment counters

I have an Apache access log file and I want to store the access counts (total/daily/hourly) of each page in a Cassandra table.
I am trying to do this by using Kafka Connect to stream from the log file into a Kafka topic. Can I use Kafka Connect again to increment the metric counters in Cassandra? Otherwise, which other tool should be used here, e.g. Kafka Streams, Spark, Flink, Kafka Connect, etc.?
You're talking about doing stream processing, which Kafka can do - either with Kafka's Streams API, or KSQL. KSQL runs on top of Kafka Streams, and gives you a very simple way to build the kind of aggregations that you're talking about.
Here's an example of aggregating streams of data in KSQL:
SELECT PAGE_ID, COUNT(*) FROM PAGE_CLICKS WINDOW TUMBLING (SIZE 1 HOUR) GROUP BY PAGE_ID;
See more at: https://www.confluent.io/blog/using-ksql-to-analyse-query-and-transform-data-in-kafka
You can take the output of KSQL which is actually just a Kafka topic, and stream that through Kafka Connect e.g. to Elasticsearch, Cassandra, and so on.
You mention other stream processing tools; they're valid too. It depends in part on existing skills and language preferences (e.g. Kafka Streams is a Java library, KSQL is … KSQL, Spark Streaming has Python as well as Java, etc.), but also on deployment preferences: Kafka Streams is just a Java library to deploy within your existing application, KSQL is deployable in a cluster, and so on.
This can also be easily done with Flink, either as a batch or a streaming job, and either with or without Kafka (Flink can read from files and write to Cassandra). This sort of time-windowed aggregation is easily done with Flink's SQL API; see the examples here.

Streaming data from Kafka into Cassandra in real time

What's the best way to write data from Kafka into Cassandra? I would expect it to be a solved problem, but there doesn't seem to be a standard adapter.
A lot of people seem to be using Storm to read from Kafka and then write to Cassandra, but Storm seems like overkill for simple ETL operations.
We are heavily using Kafka and Cassandra through Storm.
We rely on Storm because:
there are usually a lot of distributed (inter-node) processing steps before the result of the original message hits Cassandra (Storm bolt topologies)
we don't need to maintain the Kafka consumer state (offsets) ourselves - the Storm-Kafka connector does it for us once all products of the original message are acked within Storm
message processing is distributed across nodes natively by Storm
Otherwise, if it is a very simple case, you can effectively read messages from Kafka and write the results to Cassandra without the help of Storm, as in the sketch below.
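For that simple case, one possible Spark Streaming sketch using the DataStax spark-cassandra-connector (the keyspace, table, topic, and column names are placeholders; ssc and kafkaParams are assumed to be defined as usual):

import com.datastax.spark.connector.SomeColumns
import com.datastax.spark.connector.streaming._
import org.apache.spark.streaming.kafka010._

// Consume a Kafka topic and write aggregated (page, hits) rows to Cassandra
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("access-log"), kafkaParams))

stream.map(record => (record.value, 1L))
  .reduceByKey(_ + _)
  .saveToCassandra("logs", "page_counts", SomeColumns("page", "hits"))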
A recent release of Kafka came with the Connect concept, supporting sources and sinks as first-class concepts in the design. With this, you do not need any streaming framework for moving data in and out of Kafka. Here is a Cassandra connector for Kafka that you can use: https://github.com/tuplejump/kafka-connect-cassandra
