Does spark.sql.adaptive.enabled work for Spark Structured Streaming?

I work with Apache Spark Structured Streaming. Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. Since it builds on the Spark SQL engine, does that mean spark.sql.adaptive.enabled works for Spark Structured Streaming?

It's disabled in the Spark code itself; see StreamExecution:
// Adaptive execution can change num shuffle partitions, disallow
sparkSessionForStream.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key, "false")
The reason is that it can cause issues when there is state on the stream (more details in SPARK-19873, the ticket that added this restriction).
If you still want to enable it for Spark Structured Streaming (e.g. if you are sure it won't cause any harm in your use case), you can do so inside the foreachBatch method by setting batchDF.sparkSession.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key, "true"), which overrides the setting that the Spark code disabled.
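Here is a minimal sketch of that approach, assuming df is an existing streaming DataFrame; the output format and path are hypothetical placeholders:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.internal.SQLConf

val query = df.writeStream
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // Each micro-batch runs as a regular batch query, so AQE can apply here.
    // This overrides the "false" that StreamExecution set on the streaming session.
    batchDF.sparkSession.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key, "true")
    batchDF.write.format("parquet").mode("append").save("/tmp/output")
  }
  .start()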

No. As stated in this doc (https://docs.databricks.com/spark/latest/spark-sql/aqe.html), AQE applies only to non-streaming queries, so it does not apply here. Think of the statefulness and the many restrictions in Spark Structured Streaming, where datasets per batch are ideally small.

Related

Structured Streaming startingOffsets and Checkpoint

I am confused about startingOffsets in structured streaming.
In the official docs here, it lists the option by query type:
Streaming - is this continuous streaming?
Batch - is this for queries with foreachBatch or triggers? (latest is not allowed)
My workflow also has checkpoints enabled. How does this work together with startingOffsets?
If my workflow crashes and I have startingOffsets as latest, does spark check kafka offset or the spark checkpoint offset or both?
Streaming by default means "micro-batch" in Spark. Depending on the trigger you set, it will check the source for new data at the given frequency. You can use it like this:
val df = spark
  .readStream
  .format("kafka")
  .[...]
For Kafka there is also the experimental continuous trigger that allows processing the data with quite low latency. See the section Continuous Processing in the documentation.
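As a rough sketch of what the continuous trigger looks like (the broker address, topic, and checkpoint path are hypothetical):

import org.apache.spark.sql.streaming.Trigger

df.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("topic", "output")
  .option("checkpointLocation", "/checkpoints/continuous-query")
  .trigger(Trigger.Continuous("1 second")) // checkpoint interval, not a batch interval
  .start()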
Batch, on the other hand, works like reading a static file (such as a CSV) once. You can do this by using:
val df = spark
  .read
  .format("kafka")
  .[...]
Note the difference: readStream for streaming and read for batch processing. In batch mode startingOffsets can only be set to earliest, and even if you use checkpointing it will always start from the earliest offset in case of a planned or unplanned restart.
Checkpointing in Structured Streaming needs to be set in the writeStream part and needs to be unique for every single query (in case you are running multiple streaming queries from the same source). If you have that checkpoint location set up and you restart your application, Spark will only look into those checkpoint files. Only when the query starts for the very first time does it check the startingOffsets option.
Remember that Structured Streaming never commits any offsets back to Kafka. It only relies on its checkpoint files. See my other answer on How to manually set group.id and commit kafka offsets in spark structured streaming?.
If you plan to run your application, say, once a day, it is therefore best to use readStream with checkpointing enabled and the trigger writeStream.trigger(Trigger.Once()). A good explanation of this approach is given in a Databricks blog post, Running Streaming Jobs Once a Day For 10x Cost Savings.
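Putting these pieces together, a minimal sketch might look like this (the broker address, topic names, and paths are placeholders):

import org.apache.spark.sql.streaming.Trigger

val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "events")
  .option("startingOffsets", "earliest") // only consulted on the very first run
  .load()

df.writeStream
  .format("parquet")
  .option("checkpointLocation", "/checkpoints/events-query") // unique per query
  .option("path", "/data/events")
  .trigger(Trigger.Once()) // process all available data, then stop
  .start()
  .awaitTermination()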

Why Spark Structured Streaming is ideal for real-time operations?

I want to build a real-time application but I don't know if I should use Spark Streaming or Spark Structured Streaming.
I read online that Structured Streaming is ideal for real-time applications, but it is not clear why...
Can someone explain it?
Spark Streaming works on something we call a micro-batch. ... Each batch is represented as an RDD. Structured Streaming works on the same architecture of polling the data after some duration, based on your trigger interval, but it has some distinctions from Spark Streaming that make it more inclined towards real streaming.
For developers, all they need to worry about is that in Spark Streaming you work with RDDs, but in Spark Structured Streaming you get DataFrames and Datasets.
If you want very low-level (i.e. per-record) operations, go for RDDs (i.e. Spark Streaming); but if your application can be built on DataFrames and query them like SQL in real time, then go for DataFrames (i.e. Spark Structured Streaming).
Eventually, RDDs can be converted to DataFrames and vice versa, as the sketch below shows.
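A minimal illustration of that conversion (assumes an active SparkSession named spark; the column names are hypothetical):

import spark.implicits._

val rdd  = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2)))
val df   = rdd.toDF("key", "value") // RDD -> DataFrame
val back = df.rdd                   // DataFrame -> RDD[Row]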

Tungsten encoding in Spark SQL?

I am running a Spark application that has a series of Spark SQL statements that are executed one after the other. The SQL queries are quite complex and the application is working (generating output). These days, I am working towards improving the performance of processing within Spark.
Please suggest whether Tungsten encoding has to be enabled separately or whether it kicks in automatically when running Spark SQL.
I am using Cloudera 5.13 for my cluster (2 nodes).
It is enabled by default in Spark 2.x (and maybe 1.6, but I'm not sure about that).
In any case, you can set this:
spark.sql.tungsten.enabled=true
It can also be enabled via spark-submit as follows:
spark-submit --conf spark.sql.tungsten.enabled=true
Tungsten should be enabled if you see a * next to operators in the query plan.
Also see: How to enable Tungsten optimization in Spark 2?
Tungsten became the default in Spark 1.5 and can be enabled in an earlier version by setting spark.sql.tungsten.enabled = true.
Even without Tungsten, Spark SQL uses a columnar storage format with Kryo serialization to minimize storage cost.
To make sure your code benefits as much as possible from Tungsten optimizations, try to use the Dataset API with Scala (instead of RDDs).
Dataset brings the best of both worlds with a mix of relational (DataFrame) and functional (RDD) transformations. The Dataset API is the most up to date and adds type safety along with better error handling and far more readable unit tests.
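For example, here is a small sketch of mixing the two styles in a typed Dataset (the case class and values are hypothetical; assumes an active SparkSession named spark):

import spark.implicits._
import org.apache.spark.sql.functions._

case class Sale(product: String, amount: Double)

val sales = Seq(Sale("a", 10.0), Sale("b", 5.0)).toDS() // typed, uses Tungsten encoders

val totals = sales
  .filter(_.amount > 1.0)  // functional, compile-time checked
  .groupBy($"product")     // relational
  .agg(sum($"amount"))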

kafka streaming or spark streaming

I am now using Kafka in Python.
I was wondering whether Spark's Kafka integration is needed, or whether we can just use Kafka through pyKafka.
My concern is that Spark (PySpark) creates overhead in the process, and if we don't use any Spark functions, just Kafka streaming is required.
What are the drawbacks of using PySpark and Spark's Kafka integration?
It totally depends on the use case at hand, as mentioned in the comments. However, I went through the same situation a couple of months ago, so I will try to share my knowledge and how I decided to move to kafka-streams instead of spark-streaming.
In my use case, we only used Spark to do real-time streaming from Kafka, and we didn't do any sort of map-reduce, windowing, filtering, or aggregation.
Given the above case, I did the comparison based on 3 dimensions:
Technicality
DevOps
Cost
The image below shows the comparison table I made to convince my team to migrate to kafka-streams and drop Spark. Cost is not included in the image, as it depends entirely on your cluster size (head node and worker nodes).
Very important note:
Again, this is based on your case. I just tried to give you a pointer on how to do the comparison; Spark itself has lots of benefits, but describing them is beyond the scope of this question.

Spark Stateful Streaming with DataFrame

Is it possible to use a DataFrame as a state / StateSpec for Spark Streaming? The current StateSpec implementation seems to allow only key-value pair data structures (mapWithState, etc.).
My objective is to keep a fixed-size FIFO buffer as a StateSpec that gets updated every time new data streams in. I'd like to implement the buffer using the Spark DataFrame API, for compatibility with Spark ML.
I'm not entirely sure you can do this with Spark Streaming, but with the newer DataFrame-based Spark Structured Streaming you can express queries that get updated over time, given an incoming stream of data.
You can read more about Spark Structured Streaming in the official documentation.
If you are interested in interoperability with SparkML to deploy a trained model, you may also be interested in this article.
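As an illustrative sketch of a query that gets updated over time (the socket source, host, and column names are hypothetical, and a sliding window only approximates a fixed-size FIFO buffer):

import org.apache.spark.sql.functions._

val events = spark
  .readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()
  .withColumn("ts", current_timestamp())

val recent = events
  .withWatermark("ts", "10 minutes")                       // bound the state kept
  .groupBy(window(col("ts"), "5 minutes", "1 minute"))     // sliding window
  .count()

recent.writeStream
  .outputMode("update")
  .format("console")
  .start()
  .awaitTermination()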
