Prevent spark catalyst from optimizing and moving dynamic parallelism - apache-spark

I need to dynamically set spark.sql.shuffle.partitions during the execution of my spark job.
Initially, it is set when starting the job, but then after various aggregations, I need to decrease it over and over again.
However, Catalyst tends to push the new value backward into earlier operations, even though I do not want that to happen.
My current workaround is to use a checkpoint, which breaks the lineage that Catalyst sees.
But a checkpoint 1) writes to disk and 2) needs the previous result cached, otherwise it is recomputed.
This means I need to cache and checkpoint, i.e. write to disk twice if the data is too large for memory.
Obviously, this is slow and less than ideal.
Is there another way to tell Catalyst not to apply the decreased parallelism before the point in the lineage where I actually want it to happen?
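For reference, a minimal sketch of the cache + checkpoint workaround described above (the input path, column names and partition counts are made-up placeholders):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder().appName("dynamic-shuffle-partitions").getOrCreate()
spark.sparkContext.setCheckpointDir("/tmp/checkpoints") // required before checkpoint()

val df = spark.read.parquet("/data/events") // hypothetical wide input

// high parallelism for the first, large aggregation
spark.conf.set("spark.sql.shuffle.partitions", "2000")
val stage1 = df.groupBy("key1", "key2").agg(sum("value").as("v"))

// cache + checkpoint: materializes stage1 and cuts the lineage, so the lower
// setting below is only seen by the plan of the second aggregation
val pinned = stage1.cache().checkpoint()

// lower parallelism for the much smaller, pre-aggregated data
spark.conf.set("spark.sql.shuffle.partitions", "200")
val stage2 = pinned.groupBy("key2").agg(sum("v").as("total"))
stage2.write.mode("overwrite").parquet("/data/out")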

Related

Why are there so many partitions required before shuffling data in Apache Spark?

Background
I am a newbie in Spark and want to understand shuffling in Spark.
I have the following two questions about shuffling in Apache Spark.
1) Why is there a change in the number of partitions before performing a shuffle? Spark does this by default by changing the partition count to the value given in spark.sql.shuffle.partitions.
2) Shuffling usually happens when there is a wide transformation. I have read in a book that the shuffled data is also saved on disk. Is my understanding correct?
Two questions actually.
Nowhere is it stated that you need to change this parameter; 200 is the default if it is not set. It applies to JOINing and AGGregating. You may have a far bigger set of data that is better served by increasing the number of partitions for more processing capacity - if more Executors are available. In general, if your data volume is huge, more parallelism, where possible, will speed up processing time.
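For example (the DataFrame names and the value 800 are illustrative assumptions, not a recommendation):

import org.apache.spark.sql.functions.sum

// raise the shuffle partition count for a large join/aggregation; 200 is used if never set
spark.conf.set("spark.sql.shuffle.partitions", "800")

// both the join and the aggregation below now shuffle into 800 partitions
val joined = ordersDf.join(customersDf, "customer_id")
val totals = joined.groupBy("customer_id").agg(sum("amount").as("total"))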
Assuming an Action has been called (so as to avoid the obvious comment if this is not stated), and assuming we are not talking about a ResultStage and a broadcast join, then we are talking about a ShuffleMapStage. We look at an RDD initially:
DAG dependency involving a shuffle means creation of a separate Stage.
Map operations are followed by Reduce operations and a Map and so forth.
CURRENT STAGE
All the (fused) Map operations are performed intra-Stage.
The next Stage requirement, a Reduce operation - e.g. a reduceByKey - means the output is hashed or sorted by key (K) at the end of the Map operations of the current Stage.
This grouped data is written to disk on the Worker where the Executor is - or to storage tied to that Cloud version. (I would have thought in-memory was possible if the data is small, but this is an architectural Spark approach, as stated in the docs.)
The ShuffleManager is notified that hashed, mapped data is available for consumption by the next Stage. The ShuffleManager keeps track of all keys/locations once all of the map-side work is done.
NEXT STAGE
The next Stage, being a reduce, then gets the data from those locations by consulting the Shuffle Manager and using Block Manager.
The Executor may be re-used, or be a new one on another Worker, or another Executor on the same Worker.
Stages mean writing to disk, even if enough memory is present. Given the finite resources of a Worker, it makes sense that writing to disk occurs for this type of operation. The more important point is, of course, the 'Map Reduce' style of implementation.
Of course, fault tolerance is aided by this persistence: less re-computation work.
Similar aspects apply to DFs.
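A toy RDD example of the Stage split described above (the word list is arbitrary):

// the fused map operations run in the first (ShuffleMap) Stage; reduceByKey
// introduces the shuffle boundary, and the reduce side runs as the next Stage
val words  = spark.sparkContext.parallelize(Seq("a", "b", "a", "c", "b", "a"))
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

counts.collect()               // the Action that actually triggers both Stages
println(counts.toDebugString)  // the lineage shows the ShuffledRDD, i.e. the Stage boundary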

Is there an extra overhead to cache Spark dataframe in memory?

I am new to Spark and wanted to understand if there is an extra overhead/delay to persist and un-persist a dataframe in memory.
From what I know so far, no data movement happens when we cache a dataframe; it is just saved in the executors' memory. So it should be just a matter of setting/unsetting a flag.
I am caching a dataframe in a spark streaming job and wanted to know if this could lead to additional delay in batch execution.
if there is an extra overhead/delay to persist and un-persist a dataframe in memory.
It depends. If you only mark a DataFrame to be persisted, nothing really happens since it's a lazy operation. You have to execute an action to trigger DataFrame persistence / caching. With the action you do add an extra overhead.
Moreover, think of persistence (caching) as a way to precompute data and save it closer to executors (memory, disk or their combinations). Moving this data from where it lives to the executors does add extra overhead at execution time (even if it's just a tiny bit).
Internally, Spark manages data as blocks (using BlockManagers on executors). They act as peers that exchange blocks on demand (using a torrent-like protocol).
Unpersisting a DataFrame is simply sending a request (sync or async) to the BlockManagers to remove the RDD blocks. If it happens in an async manner, the overhead is negligible (minus the extra work executors have to do while running tasks).
So it should be just a matter of setting/unsetting a flag.
In a sense, that's how it is under the covers. Since DataFrames and RDDs are just abstractions that describe distributed computations and do nothing at creation time, persist / unpersist is just setting / unsetting a flag.
The change can be noticed at execution time.
I am caching a dataframe in a spark streaming job and wanted to know if this could lead to additional delay in batch execution.
If you use async caching (the default), there should be a very minimal delay.
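A small sketch of the persist / unpersist calls being discussed (df and the column name are arbitrary; the storage level and blocking flag are shown explicitly):

import org.apache.spark.storage.StorageLevel

// marking is lazy: nothing is stored yet
df.persist(StorageLevel.MEMORY_AND_DISK)

// the first action pays the one-off cost of computing and storing the blocks
df.count()

// later actions read the cached blocks instead of recomputing df
df.groupBy("key").count().show()

// unpersist sends a remove-blocks request to the BlockManagers;
// blocking = false (the default) makes it asynchronous
df.unpersist(blocking = false)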

Spark createDataFrame(df.rdd, df.schema) vs checkPoint for breaking lineage

I'm currently using
val df=longLineageCalculation(....)
val newDf=sparkSession.createDataFrame(df.rdd, df.schema)
newDf.join......
In order to save time when calculating plans. However, the docs say that checkpointing is the suggested way to "cut" lineage, BUT I don't want to pay the price of saving the RDD to disk.
My process is a batch process which is not-so-long and can be restarted without issues, so checkpointing is of no benefit to me (I think).
What are the problems which can arise using "my" method? (The docs suggest checkpointing, which is more expensive, instead of this one for breaking lineages, and I would like to know the reason.)
The only thing I can guess is that if some node fails after my "lineage breaking", maybe my process will fail while the checkpointed one would have worked correctly? (What if the DF is cached instead of checkpointed?)
Thanks!
EDIT:
From SMaZ's answer, my own knowledge and the article which he provided: using createDataFrame (which is a Dev-API, so use at "my"/your own risk) will keep the lineage in memory (not a problem for me, since I don't have memory problems and the lineage is not big).
With this, it looks (not tested 100%) that Spark should be able to rebuild whatever is needed if it fails.
As I'm not using the data in the following executions, I'll go with cache + createDataFrame versus checkpointing (which, if I'm not wrong, is actually cache + saveToHDFS + "createDataFrame").
My process is not that critical (if it crashes), since a user is always expecting the result and launches it manually, so if it gives problems they can relaunch it (+ Spark will relaunch it) or call me, so I can take some risk anyway, but I'm 99% sure there's no risk :)
Let me start with the creation of the dataframe with the line below:
val newDf=sparkSession.createDataFrame(df.rdd, df.schema)
If we take a close look at the SparkSession class, this method is annotated with @DeveloperApi. To understand what this annotation means, please take a look at the lines below from the DeveloperApi class:
A lower-level, unstable API intended for developers.
Developer API's might change or be removed in minor versions of Spark.
So it is not advised to use this method for production solutions; it is what the open-source world calls a "use at your own risk" implementation.
However, let's dig deeper into what happens when we call createDataFrame from an RDD. It calls the internalCreateDataFrame private method and creates a LogicalRDD.
LogicalRDD is created when:
Dataset is requested to checkpoint
SparkSession is requested to create a DataFrame from an RDD of internal binary rows
So it is essentially the same as the checkpoint operation, without saving the dataset physically. It just creates a DataFrame from an RDD of internal binary rows and a schema. This might truncate the lineage in memory, but not at the physical level.
So I believe it is just the overhead of creating another RDD, and it cannot be used as a replacement for checkpoint.
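One way to see this plan truncation (not from the original answer; just a sketch reusing the question's code) is to compare the extended plans before and after:

// df comes from the question's longLineageCalculation(...) - a long chain of transformations
val df    = longLineageCalculation()
val newDf = sparkSession.createDataFrame(df.rdd, df.schema)

df.explain(true)    // the extended plan shows the whole chain of logical operators
newDf.explain(true) // the plan collapses to a single LogicalRDD / "Scan ExistingRDD" node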
Now, checkpointing is the process of truncating the lineage graph and saving the dataset to a reliable distributed/local file system.
Why checkpoint?
If the computation takes a long time, or the lineage is too long, or it depends on too many RDDs.
Keeping heavy lineage information comes with the cost of memory.
The checkpoint files are not deleted automatically even after the Spark application has terminated, so they can be used by some other process.
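A minimal sketch of reliable checkpointing (the HDFS path and heavyLineageComputation are assumptions):

// the checkpoint directory must be set once per application, before checkpoint() is called
spark.sparkContext.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

val masterDataset = heavyLineageComputation()   // hypothetical expensive build
val checkpointed  = masterDataset.checkpoint()  // eager by default: materialized to the checkpoint dir now

// subsequent computations start from the checkpoint files, not from the original lineage
checkpointed.groupBy("key").count().show()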
What are the problems which can arise using "my" method? (The docs suggest checkpointing, which is more expensive, instead of this one for breaking lineages, and I would like to know the reason.)
This article gives detailed information on cache and checkpoint. IIUC, your question is more about where we should use checkpointing, so let's discuss some practical scenarios where checkpointing is helpful.
Let's take a scenario where we have one dataset on which we want to perform 100 iterative operations, and each iteration takes the last iteration's result as input (Spark MLlib use cases). During this iterative process the lineage is going to keep growing. Here, checkpointing the dataset at a regular interval (say every 10 iterations) will ensure that in case of any failure we can restart the process from the last checkpoint, as in the sketch below.
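A rough sketch of that iterative pattern (initialDataset and step are hypothetical, and the checkpoint directory is assumed to be set already):

// checkpoint every 10th iteration so the lineage (and the cost of recovering
// from a failure) stays bounded instead of growing across all 100 iterations
var current = initialDataset
for (i <- 1 to 100) {
  current = step(current)          // hypothetical per-iteration transformation
  if (i % 10 == 0) {
    current = current.checkpoint() // truncates the lineage and persists reliably
  }
}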
Let's take a batch example. Imagine we have a batch job which creates one master dataset with heavy lineage or complex computations. At regular intervals we then receive new data which should use the earlier calculated master dataset. If we checkpoint our master dataset, it can be reused by all subsequent processes from different SparkSessions.
My process is a batch process which is not-so-long and can be restarted without issues, so checkpointing is of no benefit to me (I think).
That's correct: if your process does not involve heavy computation or a big lineage, then there is no point in checkpointing. The rule of thumb is that if your dataset is not used multiple times and can be rebuilt faster than the time taken and resources used for checkpoint/cache, then we should avoid it. That will leave more resources for your process.
I think sparkSession.createDataFrame(df.rdd, df.schema) will impact the fault-tolerance property of Spark.
But checkpoint() will save the RDD to HDFS or S3, and hence, if a failure occurs, it will recover from the last checkpointed data.
And in case of createDataFrame(), it just breaks the lineage graph.

Should cache and checkpoint be used together on DataSets? If so, how does this work under the hood?

I am working on a Spark ML pipeline where we get OOM Errors on larger data sets. Before training we were using cache(); I swapped this out for checkpoint() and our memory requirements went down significantly. However, in the docs for RDD's checkpoint() it says:
It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
The same guidance is not given for DataSet's checkpoint, which is what I am using. Following the above advice anyway, I found that the memory requirements actually increased slightly compared with using cache() alone.
My expectation was that when we do
...
ds.cache()
ds.checkpoint()
...
the call to checkpoint forces evaluation of the DataSet, which is cached at the same time before being checkpointed. Afterwards, any reference to ds would reference the cached partitions, and if more memory is required and the partitions are evicted, the checkpointed partitions will be used rather than re-evaluating them. Is this true, or does something different happen under the hood? Ideally I'd like to keep the DataSet in memory if possible, but it seems there is no benefit whatsoever from a memory standpoint to using the cache and checkpoint approach.
TL;DR You won't benefit from in-memory cache (default storage level for Dataset is MEMORY_AND_DISK anyway) in subsequent actions, but you should still consider caching, if computing ds is expensive.
Explanation
Your expectation that
ds.cache()
ds.checkpoint()
...
the call to checkpoint forces evaluation of the DataSet
is correct. Dataset.checkpoint comes in different flavors, which allow for both eager and lazy checkpointing, and the default variant is eager
def checkpoint(): Dataset[T] = checkpoint(eager = true, reliableCheckpoint = true)
Therefore subsequent actions should reuse checkpoint files.
However, under the covers Spark simply applies checkpoint on the internal RDD, so the rules of evaluation don't change. Spark evaluates the action first, and then creates the checkpoint (that's why caching was recommended in the first place).
So if you omit ds.cache() ds will be evaluated twice in ds.checkpoint():
Once for internal count.
Once for actual checkpoint.
Therefore nothing has changed and cache is still recommended, although the recommendation might be slightly weaker compared to a plain RDD, as Dataset cache is considered computationally expensive, and, depending on the context, it might be cheaper to simply reload the data (note that Dataset.count without cache is normally optimized, while Dataset.count with cache is not - Any performance issues forcing eager evaluation using count in spark?).
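Put together, a hedged sketch of the cache-then-checkpoint pattern discussed above (ds is the expensive DataSet from the question):

// cache first so the second pass over the data (writing the checkpoint files)
// can read the cached blocks instead of recomputing ds from scratch
val cached       = ds.persist()        // MEMORY_AND_DISK is the Dataset default
val checkpointed = cached.checkpoint() // eager = true, reliableCheckpoint = true

// once the checkpoint files exist, the in-memory copy can be dropped if memory is tight
cached.unpersist()

// lazy alternative: the files are only written by the first action on lazyCp
// val lazyCp = ds.checkpoint(eager = false)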

How to do Incremental MapReduce in Apache Spark

In CouchDB and system designs like Incoop, there's a concept called "Incremental MapReduce" where results from previous executions of a MapReduce algorithm are saved and used to skip over sections of input data that haven't been changed.
Say I have 1 million rows divided into 20 partitions. If I run a simple MapReduce over this data, I could cache/store the result of reducing each separate partition, before they're combined and reduced again to produce the final result. If I only change data in the 19th partition then I only need to run the map & reduce steps on the changed section of the data, and then combine the new result with the saved reduce results from the unchanged partitions to get an updated result. Using this sort of caching I'd be able to skip almost 95% of the work for re-running a MapReduce job on this hypothetical dataset.
Is there any good way to apply this pattern to Spark? I know I could write my own tool for splitting up input data into partitions, checking if I've already processed those partitions before, loading them from a cache if I have, and then running the final reduce to join all the partitions together. However, I suspect that there's an easier way to approach this.
I've experimented with checkpointing in Spark Streaming, and that is able to store results between restarts, which is almost what I'm looking for, but I want to do this outside of a streaming job.
RDD caching/persisting/checkpointing almost looks like something I could build off of - it makes it easy to keep intermediate computations around and reference them later, but I think cached RDDs are always removed once the SparkContext is stopped, even if they're persisted to disk. So caching wouldn't work for storing results between restarts. Also, I'm not sure if/how checkpointed RDDs are supposed to be loaded when a new SparkContext is started... They seem to be stored under a UUID in the checkpoint directory that's specific to a single instance of the SparkContext.
Both use cases suggested by the article (incremental logs processing and incremental query processing) can be generally solved by Spark Streaming.
The idea is that you have incremental updates coming in using the DStreams abstraction. Then, you can process the new data and join it with previous calculations, either using time-window-based processing or using arbitrary stateful operations as part of Structured Streaming. The results of the calculation can later be dumped to some sort of external sink like a database or a file system, or they can be exposed as an SQL table.
If you're not building an online data processing system, regular Spark can be used as well. It's just a matter of how incremental updates get into the process, and how intermediate state is saved. For example, incremental updates can appear under some path on a distributed file system, while intermediate state containing previous computation joined with new data computation can be dumped, again, to the same file system.
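A rough sketch of that batch approach (paths, column names and the simple sum-merge are all assumptions, not an established API):

import org.apache.spark.sql.functions.sum

// per-key aggregates saved by the previous run
val previous = spark.read.parquet("/state/aggregates")

// only the new / changed input since the last run
val increment = spark.read.parquet("/incoming/latest")

// aggregate just the increment, then merge it with the saved state
val incrementAgg = increment.groupBy("key").agg(sum("value").as("value"))
val merged = previous.union(incrementAgg)
  .groupBy("key").agg(sum("value").as("value"))

// write the new state to a fresh path and switch over once the job succeeds
merged.write.mode("overwrite").parquet("/state/aggregates_next")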

Resources