Spark Streaming real-time integration with Kafka

I have integrated a Spark Streaming process with Kafka to read a particular topic. I created the Spark context with a polling interval of 5 seconds, and it works fine. But if I want to access the feed closer to real time, can I reduce the interval further to 1 second (would that be overkill?), or is there a better option for handling this situation?

Spark Structured Streaming offers several modes, or "triggers", for processing time. You can trade throughput for lower latency by using the continuous processing mode, or accept higher latency for more throughput by increasing the trigger duration. You should be fine setting the micro-batch duration to 1s in Scala and 2s in Python.
https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers
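For example, a minimal PySpark sketch of a Kafka source driven by a 1-second processingTime trigger could look like the following; the broker address and topic name are placeholders, not from the question.

```python
# Minimal sketch: micro-batch processing of a Kafka topic every ~1 second.
# "broker:9092" and "my-topic" are placeholder names.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-low-latency").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "my-topic")
          .load())

query = (events.selectExpr("CAST(value AS STRING) AS value")
         .writeStream
         .format("console")
         .trigger(processingTime="1 second")  # plan and run a micro-batch every second
         .start())

query.awaitTermination()
```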

Related

Pyspark Structured Streaming continuous vs processingTime triggers

I've been looking into using triggers for a streaming job, but the difference between the continuous trigger and the processingTime trigger is not clear to me.
As far as I've read on different sites:
continuous is just an attempt to make the streaming almost real-time instead of micro-batch based (with much lower latency, around 1 ms).
As of the time of writing this question, it only supports a couple of sources and sinks, such as Kafka.
Are these two points the only differences between the two triggers?
You are pretty much right. Continuous processing was added to Structured Streaming to address low-latency needs by achieving near-real-time processing with a continuous query, unlike the micro-batch approach, where latency depends on the processing time and the duration of each batch job.
The docs are pretty useful if you want to go more in depth.
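To make the difference concrete, here is a rough PySpark sketch of the two trigger modes side by side; broker addresses, topic names, and checkpoint paths are placeholders.

```python
# Sketch only: contrast processingTime (micro-batch) and continuous triggers.
# Broker, topics, and checkpoint paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trigger-modes").getOrCreate()

src = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "in-topic")
       .load())

# Micro-batch: a batch is planned and executed every 30 seconds.
micro_batch = (src.writeStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("topic", "out-topic-micro")
               .option("checkpointLocation", "/tmp/ckpt-micro")
               .trigger(processingTime="30 seconds")
               .start())

# Continuous: long-running tasks process records as they arrive; the interval
# only controls how often offsets are checkpointed. Only map-like queries and
# a few sources/sinks (e.g. Kafka) are supported in this mode.
continuous = (src.writeStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("topic", "out-topic-continuous")
              .option("checkpointLocation", "/tmp/ckpt-continuous")
              .trigger(continuous="1 second")
              .start())
```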

Do Spark Streaming receivers continue pulling data for every block interval during the current micro-batch?

Every spark.streaming.blockInterval (say, 1 minute), receivers listen to streaming sources for data. Suppose the current micro-batch is taking an unusually long time to complete (intentionally, say 20 minutes). During this micro-batch, would the receivers still listen to the streaming source and store the data in Spark memory?
The current pipeline runs in Azure Databricks using Spark Structured Streaming.
Can anyone help me understand this?
In the above scenario, Spark will continue to consume/pull data from Kafka, and micro-batches will continue to pile up, eventually causing out-of-memory (OOM) issues.
To avoid this scenario, enable the back pressure setting:
spark.streaming.backpressure.enabled=true
For more details on Spark's back pressure feature, see
https://spark.apache.org/docs/latest/streaming-programming-guide.html
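As a rough illustration, the back pressure flag applies to the legacy DStream API and is set on the Spark configuration; for a Structured Streaming Kafka source (as in the Azure Databricks pipeline above), the closest knob is the maxOffsetsPerTrigger source option, which caps how many records each micro-batch pulls. Broker and topic names are placeholders.

```python
# Sketch: rate-control settings (placeholder broker/topic names).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("rate-control")
         # Back pressure for the legacy DStream API, as suggested above.
         .config("spark.streaming.backpressure.enabled", "true")
         .getOrCreate())

# Structured Streaming Kafka source: cap the records pulled per micro-batch
# so unprocessed data cannot pile up without bound.
limited = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "my-topic")
           .option("maxOffsetsPerTrigger", "10000")
           .load())
```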

5-Minute Spark Batch Job vs Streaming Job

I am trying to figure out what should be a better approach.
I have a Spark batch job which is scheduled to run every 5 minutes, and it takes 2-3 minutes to execute.
Since Spark 2.0 added support for dynamic allocation via spark.streaming.dynamicAllocation.enabled, is it a good idea to make it a streaming job which pulls data from the source every 5 minutes?
What should I keep in mind while choosing between a streaming and a batch job?
Spark Streaming is an outdated technology. Its successor is Structured Streaming.
If you do processing every 5 minutes, you are doing batch processing. You can use the Structured Streaming framework and trigger it every 5 minutes to imitate batch processing, but I usually wouldn't do that.
Structured Streaming has many more limitations than normal Spark. For example, you can only write to Kafka or to files; otherwise you need to implement the sink yourself using the Foreach sink. Also, if you use a file sink, you cannot update it, only append to it. There are also operations that are not supported in Structured Streaming, and actions that you cannot perform unless you do an aggregation first.
I might use Structured Streaming for batch processing if I read from or write to Kafka, because they work well together and everything is pre-implemented. Another advantage of using Structured Streaming is that you automatically continue reading from the place where you stopped.
For more information refer to Structured Streaming Programming Guide.
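As a sketch of the "trigger it every 5 minutes" option with Kafka on both ends (broker, topics, and checkpoint path are placeholders), note the checkpointLocation, which is what lets the job continue reading from the place it stopped.

```python
# Sketch: a Structured Streaming query that imitates a 5-minute batch job.
# Broker, topic names, and the checkpoint path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("five-minute-trigger").getOrCreate()

source = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "in-topic")
          .load())

query = (source.selectExpr("key", "value")
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("topic", "out-topic")
         # The checkpoint stores offsets, so a restart resumes where it stopped.
         .option("checkpointLocation", "/checkpoints/five-minute-job")
         .trigger(processingTime="5 minutes")
         .start())

query.awaitTermination()
```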
When deciding between streaming and batch, one needs to look at various factors. I am listing some below; based on your use case, you can decide which is more suitable.
1) Input Data Characteristics - Continuous input vs batch input
If input data is arriving in batches, use batch processing.
If input data is arriving continuously, stream processing may be more useful; consider the other factors to reach a conclusion.
2) Output Latency
If the required output latency is very low, consider stream processing.
If output latency does not matter, choose batch processing.
3) Batch size (time)
A general rule of thumb is to use batch processing if the batch interval is more than a minute; otherwise, stream processing is required. This is because triggering/spawning the batch process adds latency to the overall processing time.
4) Resource Usage
What is the usage pattern of resources in your cluster?
Are there other batch jobs which execute when this one is done? If multiple batch jobs run one after another and use the cluster resources optimally, then batch jobs are the better option.
A batch job runs at its scheduled time, and the cluster resources sit idle after that. If data is arriving continuously, consider running a streaming job instead: fewer resources may be required for processing, and output becomes available with lower latency.
There are other things to consider: replay, manageability (streaming is more complex), the team's existing skills, etc.
Regarding spark.streaming.dynamicAllocation.enabled, I would avoid using it, because if the input rate varies a lot, executors will be killed and created very frequently, which adds to latency.

How to achieve ingestion time?

I found the distinction between the different notions of time in the Apache Flink documentation on Event Time / Processing Time / Ingestion Time.
Event time is the time that each individual event occurred on its producing device.
And that's what datasets come with and so is available in Spark Structured Streaming out of the box.
Processing time refers to the system time of the machine that is executing the respective operation.
Ingestion time is the time that events enter Flink.
Processing time and ingestion time are the two I am concerned with. I think I know how to achieve processing time, but I am not sure about ingestion time (or perhaps I am wrong and it is the other way around).
How to achieve ingestion time in Spark Structured Streaming 2.2 and later?
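No answer is attached to this question in this excerpt; as one possible (not authoritative) illustration, the Kafka record timestamp can serve as an approximation of ingestion time into the pipeline, and current_timestamp() as a processing-time stamp. Broker and topic names are placeholders.

```python
# Illustration only, not from the thread: approximating the notions of time.
# Event time itself would come from a field inside the message payload.
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp, col

spark = SparkSession.builder.appName("time-notions").getOrCreate()

kafka = (spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
         .option("subscribe", "events")                     # placeholder
         .load())

stamped = (kafka
           # Kafka's record timestamp: broker-assigned if the topic uses
           # LogAppendTime, so it roughly marks when the event entered Kafka.
           .withColumn("kafka_ts", col("timestamp"))
           # Wall-clock time when the row is evaluated: processing time.
           .withColumn("processing_ts", current_timestamp()))
```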

Spark Streaming input rate drop

Running a Spark Streaming job, I have encountered the following behavior more than once. Processing starts well: the processing time for each batch is well below the batch interval. Then suddenly, the input rate drops to near zero. See these graphs.
This happens even though the program could keep up and it slows down execution considerably. I believe the drop happens when there is not much unprocessed data left, but because the rate is so low, these final records take up most of the time needed to run the job. Is there any way to avoid this and speed up?
I am using PySpark with Spark 1.6.2 and using the direct approach for Kafka streaming. Backpressure is turned on and there is a maxRatePerPartition of 100.
Setting backpressure is more meaningful with older Spark Streaming versions, where you need receivers to consume the messages from a stream. Since Spark 1.3 there is the receiver-less "direct" approach, which ensures stronger end-to-end guarantees, so you do not need to worry as much about backpressure; Spark does most of the fine-tuning.
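For reference, a minimal PySpark sketch matching the question's setup (Spark 1.6, direct Kafka stream, back pressure plus maxRatePerPartition); the broker and topic names are placeholders.

```python
# Sketch for the Spark 1.6 direct approach described in the question.
# "broker:9092" and "my-topic" are placeholders.
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

conf = (SparkConf()
        .setAppName("direct-kafka")
        .set("spark.streaming.backpressure.enabled", "true")
        .set("spark.streaming.kafka.maxRatePerPartition", "100"))

sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, batchDuration=5)  # 5-second batches

# Receiver-less direct stream: offsets are tracked by Spark itself.
stream = KafkaUtils.createDirectStream(
    ssc, ["my-topic"], {"metadata.broker.list": "broker:9092"})

stream.map(lambda kv: kv[1]).count().pprint()

ssc.start()
ssc.awaitTermination()
```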
