Why aren't RDDs suitable for streaming tasks? - apache-spark

I'm using Spark extensively. The core of Spark is the RDD, and as the RDD paper shows, there are limitations when it comes to streaming applications. This is an exact quote from the RDD paper:
As discussed in the Introduction, RDDs are best suited for batch applications that apply the same operation to all elements of a dataset. In these cases, RDDs can efficiently remember each transformation as one step in a lineage graph and can recover lost partitions without having to log large amounts of data. RDDs would be less suitable for applications that make asynchronous fine-grained updates to shared state, such as a storage system for a web application or an incremental web crawler.
I don't quite understand why the RDD can't effectively manage state. How does Spark Streaming overcome these limitations?

I don't quite understand why the RDD can't effectively manage state.
It is not really about being able or not, but more about the cost. We have well-established mechanisms for handling fine-grained changes, such as write-ahead logging, but managing the logs is simply expensive. They have to be written to persistent storage, periodically merged, and expensively replayed in case of failure.
Compared to that, RDDs are an extremely lightweight solution. An RDD is just a small local data structure which has to remember only its lineage (its ancestors and the transformations applied to them).
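To make that concrete, here is a minimal spark-shell sketch (sc is the context the shell provides); toDebugString prints exactly that lineage, which is all Spark has to keep around:

```scala
// spark-shell sketch: sc is the SparkContext the shell provides
val numbers = sc.parallelize(1 to 1000000)   // base RDD
val evens   = numbers.filter(_ % 2 == 0)     // records "filter" in the lineage, copies nothing
val doubled = evens.map(_ * 2)               // records "map" in the lineage

// The lineage is the only thing Spark needs to rebuild a lost partition
println(doubled.toDebugString)
```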
That doesn't mean it is impossible to build an at least partially stateful system on top of Spark. Take a look at the Caffe-on-Spark architecture.
How does Spark Streaming overcome these limitations?
It doesn't, or to be more precise, it handles this problem externally, independently of the RDD abstraction. That includes using input and output operations with source-specific guarantees, and fault-tolerant storage for handling received data.
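To illustrate, here is a hedged Spark Streaming (DStream) sketch: state is updated per micro-batch with a coarse-grained transformation, and fault tolerance comes from checkpointing and, optionally, a write-ahead log for received data, not from fine-grained mutation of shared state. Host, port and paths are placeholders:

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Reuse the shell's SparkContext; 10-second micro-batches
val ssc = new StreamingContext(sc, Seconds(10))
ssc.checkpoint("/tmp/streaming-checkpoints")   // reliable storage for state RDDs
// Received data can additionally be protected with
// spark.streaming.receiver.writeAheadLog.enable=true

val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))

// Running word count: each batch applies one coarse-grained update to the state
val counts = words.map((_, 1)).updateStateByKey[Int] {
  (newValues: Seq[Int], state: Option[Int]) => Some(newValues.sum + state.getOrElse(0))
}

counts.print()
ssc.start()
ssc.awaitTermination()
```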

It's explained elsewhere in the paper:
Existing abstractions for in-memory storage on clusters, such as distributed shared memory [24], key- value stores [25], databases, and Piccolo [27], offer an interface based on fine-grained updates to mutable state (e.g., cells in a table). With this interface, the only ways to provide fault tolerance are to replicate the data across machines or to log updates across machines. Both approaches are expensive for data-intensive workloads, as they require copying large amounts of data over the cluster network, whose bandwidth is far lower than that of RAM, and they incur substantial storage overhead.
In contrast to these systems, RDDs provide an interface based on coarse-grained transformations (e.g., map, filter and join) that apply the same operation to many data items. This allows them to efficiently provide fault tolerance by logging the transformations used to build a dataset (its lineage) rather than the actual data. If a partition of an RDD is lost, the RDD has enough information about how it was derived from other RDDs to recompute just that partition. Thus, lost data can be recovered, often quite quickly, without requiring costly replication.
As I interpret that, handling streaming applications would require the system to do lots of writing to individual cells, shoving data across the network, i/o, and other costly things. RDDs are meant to avoid all that stuff by primarily supporting functional-type operations that can be composed.
This is consistent with my recollection from about 9 months ago when I did a Spark-based MOOC on edX (sadly, I haven't touched it since). As I remember, Spark doesn't even bother to compute the results of maps on RDDs until the user actually asks for some output, and that way it saves a ton of computation.
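That laziness is easy to see in a small spark-shell sketch (the file name is a placeholder); the first two lines only build up lineage, and nothing is computed until the action at the end:

```scala
// spark-shell sketch: sc is provided by the shell
val lines   = sc.textFile("data.txt")   // nothing is read yet
val lengths = lines.map(_.length)       // still nothing computed, just lineage
val total   = lengths.reduce(_ + _)     // action: only now does Spark run the job
println(total)
```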

Related

RDD vs. Dataset for Spark production code

Is there any industry guideline on whether to write a Spark project with RDD or Dataset?
So far what's obvious to me:
RDD, more type safety, less optimization (in the sense of Spark SQL)
Dataset, less type safety, more optimization
Which one is recommended in production code? There seems to be no such topic on Stack Overflow so far, even though Spark has been prevalent for the past few years.
I can already foresee that the majority of the community favours Dataset :), so let me first quote a dissenting view on it from this answer (and please do share opinions against it):
Personally, I find statically typed Dataset to be the least useful:
They don't provide the same range of optimizations as Dataset[Row] (although they share the storage format and some execution plan optimizations, they don't fully benefit from code generation or off-heap storage), nor access to all the analytical capabilities of the DataFrame.
They are not as flexible as RDDs, with only a small subset of types supported natively.
"Type safety" with Encoders is disputable when a Dataset is converted using the as method. Because the data shape is not encoded in a signature, a compiler can only verify the existence of an Encoder.
Here is an excerpt from "Spark: The Definitive Guide" to answer this:
When to Use the Low-Level APIs?
You should generally use the lower-level APIs in three situations:
You need some functionality that you cannot find in the higher-level APIs; for example, if you need very tight control over physical data placement across the cluster.
You need to maintain some legacy codebase written using RDDs.
You need to do some custom shared variable manipulation (there is a sketch of this below).
https://www.oreilly.com/library/view/spark-the-definitive/9781491912201/ch12.html
In other words: if you don't come across the situations above, you are in general better off using the higher-level APIs (Datasets/DataFrames).
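For the third point, "custom shared variable manipulation" usually means broadcast variables and accumulators. A hedged spark-shell sketch (the lookup table and file are made up; note that accumulator updates inside transformations may be re-counted if a task is retried):

```scala
// spark-shell sketch: sc is provided by the shell
val lookup  = sc.broadcast(Map("a" -> 1, "b" -> 2))   // read-only copy shipped to executors
val badRows = sc.longAccumulator("badRows")            // counter aggregated back on the driver

val parsed = sc.textFile("events.txt").flatMap { line =>
  lookup.value.get(line) match {
    case Some(v) => Some(v)
    case None    => badRows.add(1); None
  }
}

parsed.count()                                         // action triggers the accumulator updates
println(s"Rows without a lookup entry: ${badRows.value}")
```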
RDD Limitations:
i. No input optimization engine
There is no provision in RDD for automatic optimization: it cannot make use of Spark's advanced optimizers such as the Catalyst optimizer and the Tungsten execution engine, so each RDD has to be optimized manually.
This limitation is overcome with Dataset and DataFrame, which both use Catalyst to generate optimized logical and physical query plans. The same optimizer serves the R, Java, Scala and Python DataFrame/Dataset APIs and provides space and speed efficiency.
ii. Runtime type safety
With RDDs there is no mechanism for checking the structure of the data up front, so many errors are only detected at runtime.
Dataset provides compile-time type safety for building complex data workflows: if you try to add an element of the wrong type, you get a compile-time error. Detecting errors at compile time makes your code safer.
iii. Degradation when there is not enough memory
An RDD degrades when there is not enough memory to hold it in memory or on disk: partitions that overflow from RAM can be stored on disk, but they will not provide the same level of performance. Increasing the amount of RAM and disk is the way to overcome this issue.
iv. Performance limitation: overhead of serialization and garbage collection
Because RDDs are in-memory JVM objects, they carry the overhead of garbage collection and Java serialization, which becomes expensive as the data grows.
Since the cost of garbage collection is proportional to the number of Java objects, using data structures with fewer objects lowers the cost; alternatively, the objects can be persisted in serialized form.
v. Handling structured data
RDD does not provide a schema view of the data and has no provision for handling structured data.
Dataset and DataFrame do provide a schema view: a distributed collection of data organized into named columns.
These are the limitations of RDD in Apache Spark, and they are why DataFrame and Dataset were introduced; the short sketch below illustrates a few of the points.
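A spark-shell sketch pulling some of these points together (people.json and its columns are hypothetical):

```scala
import org.apache.spark.storage.StorageLevel
import spark.implicits._

// 1. DataFrames carry a schema and are planned by Catalyst/Tungsten
val df = spark.read.json("people.json")
df.printSchema()
df.groupBy("age").count().explain()   // optimised plan; plain RDD code never gets this

// 2. Datasets add compile-time type checks on top of the same optimiser
case class Person(name: String, age: Long)
df.as[Person].filter(p => p.age > 21).show()

// 3. RDDs are plain JVM objects; persisting them serialized reduces GC pressure
val rdd = spark.sparkContext.parallelize(1 to 1000000).map(i => (i, i.toString))
rdd.persist(StorageLevel.MEMORY_ONLY_SER)
rdd.count()
```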
When to use Spark DataFrame/Dataset API and when to use plain RDD?
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
https://dzone.com/articles/apache-spark-3-reasons-why-you-should-not-use-rdds
https://data-flair.training/blogs/apache-spark-rdd-limitations/

Memory Management Pyspark

1.) I understand that "Spark's operators spills data to disk if it does not fit memory allowing it to run well on any sized data".
If this is true, why do we ever get OOM (Out of Memory) errors?
2.) Increasing the no. of executor cores increases parallelism. Would that also increase the chances of OOM, because the same memory is now divided into smaller parts for each core?
3.) Spark is much more susceptible to OOM because it performs operations in memory, compared to Hive, which repeatedly reads from and writes to disk. Is that correct?
There is one angle you need to consider here. You may run into memory problems if the data is not properly distributed. That means you need to distribute your data evenly (if possible) across the tasks, so that you reduce shuffling as much as possible and let each task manage its own data. So if you need to perform a join and the data is distributed randomly, every task (and therefore executor) will have to:
See what data they have
Send data to other executors (and their tasks) so that each of them has the keys it needs
Request from the other executors the data this task needs
All that data exchange may cause network bottlenecks if you have a large dataset, and it also forces every task to hold its own data in memory plus whatever has been sent to it plus temporary objects. All of that blows up memory.
So to prevent that situation you can:
Load the data already repartitioned. By that I mean: if you are loading from a database, use Spark's JDBC stride as defined here; please refer to the partitionColumn, lowerBound and upperBound options. That way you create a number of partitions in the DataFrame that place the data on different tasks based on the criteria you need (there is a sketch of this after the list). If you are going to join two DataFrames, apply a similar approach to both so that their partitioning is similar (if not the same), which will prevent shuffling over the network.
When you define the partitions, try to make the values as evenly distributed among the tasks as possible
The size of each partition should fit in memory; although it can spill to disk, that slows down performance
If you don't have a column that distributes the data evenly, try to create one with roughly n distinct values, where n is the number of tasks you have
If you are reading from a CSV, creating partitions is harder, but still possible. You can either split the CSV into multiple files and create multiple dataframes (performing a union after they are loaded), or read the big CSV and repartition it on the column you need. That repartition causes a shuffle as well, but it is done only once if you cache the already-repartitioned dataframe
Reading from Parquet you may also have multiple files, but if they are not evenly distributed (because the previous process that generated them didn't do it well) you may still end up with OOM errors. To prevent that, load the data and apply a repartition on the dataframe too
Another trick, valid for CSV, Parquet, ORC, etc., is to create a Hive table on top of the files and run a query from Spark with a DISTRIBUTE BY clause on the data, so that Hive redistributes it instead of Spark
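Here is the sketch referred to above. The connection details, column names and bounds are placeholders; the idea is to let Spark stride the JDBC read into evenly sized partitions, and to repartition and cache file sources once:

```scala
import org.apache.spark.sql.functions.col

// JDBC source read in ~200 strides over a numeric column
val orders = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://db-host:5432/shop")
  .option("dbtable", "orders")
  .option("user", "spark")
  .option("password", "***")
  .option("partitionColumn", "order_id")   // numeric column Spark strides over
  .option("lowerBound", "1")
  .option("upperBound", "10000000")
  .option("numPartitions", "200")          // ~200 evenly sized read tasks instead of one
  .load()

// Big CSV with no convenient key: repartition once on the join column and cache the result
val bigCsv = spark.read.option("header", "true").csv("big.csv")
  .repartition(200, col("customer_id"))
  .cache()
```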
To your question about Hive and Spark, I think you are right up to a point. Depending on the execution engine that Hive uses in your case (MapReduce, Tez, Hive on Spark, LLAP) you can see different behaviours. With MapReduce, as the operations are mostly on disk, the chance of an OOM is much lower than on Spark; from a memory point of view MapReduce is also not that affected by a skewed data distribution. But (IMHO) your goal should always be to find the best data distribution for the Spark job you are running, and that will prevent the problem
Another consideration is whether you are testing in a dev environment that doesn't have the same data as the prod environment. I suppose the data distribution should be similar even though the volumes may differ a lot (I am talking from experience ;)). In that case, the Spark tuning parameters you assign on the spark-submit command may need to be different in prod, so you have to invest some time in finding the best approach in dev and then fine-tune it in prod
The huge majority of OOMs in Spark happen on the driver, not the executors. This is usually the result of running .collect or a similar action on a dataset that won't fit in the driver's memory.
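A minimal sketch of the pattern (paths are placeholders): keep the full result on the executors and only pull a sample to the driver:

```scala
val result = spark.read.parquet("events.parquet").groupBy("user").count()

// result.collect() would pull every row onto the driver - the usual cause of driver OOM

// Safer: look at a sample, or write the full result out directly from the executors
result.show(20)
result.write.mode("overwrite").parquet("user_counts.parquet")
```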
Spark does a lot of work under the hood to parallelize the processing; when you use the structured APIs (in contrast to RDDs), the chances of causing an OOM on an executor are really slim. Some combinations of cluster configuration and job can cause memory pressure that hurts performance and triggers a lot of garbage collection, so you do need to address that, but Spark should be able to handle low memory without an explicit exception.
Not really. As above, Spark should be able to recover from memory issues when using the structured APIs, though it may need intervention if you see heavy garbage collection and a performance impact.
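If you do need to address memory pressure, these are the usual knobs; the values below are placeholders to tune for your own cluster and job:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("memory-tuned-job")
  .config("spark.executor.memory", "8g")            // heap per executor
  .config("spark.executor.memoryOverhead", "2g")    // off-heap / native overhead per executor
  .config("spark.executor.cores", "4")              // fewer cores per executor = more memory per task
  .config("spark.sql.shuffle.partitions", "400")    // more, smaller shuffle partitions ease pressure
  .getOrCreate()
```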

In spark, is it possible to reuse a DataFrame's execution plan to apply it to different data sources

I have a somewhat complex PySpark pipeline which takes 20 minutes to come up with an execution plan. Since I have to execute the same pipeline multiple times with a different data frame as the source, I'm wondering whether there is any way to avoid building the execution plan every time: build the execution plan once and reuse it with different source data?
There is a way to do what you ask, but it requires an advanced understanding of Spark internals. Spark plans are simply trees of objects. These trees are constantly transformed by Spark; they can be "tapped" and transformed "outside" of Spark as well. There is a lot of devil in the details, so I do not recommend this approach unless you have a severe need for it.
Before you go there, it is important to look at other options, such as:
Understanding what exactly is causing the delay. On some managed platforms, e.g., Databricks, plans are logged in JSON for analysis/debugging purposes. We have sometimes seen delays of 30+ minutes with the CPU pegged at 100% on a single core while a plan produces tens of megabytes of JSON and pushes them over the wire. Make sure something like this is not happening in your case.
Depending on your workflow, if you have to do this with many data sources at the same time, use driver-side parallelism to analyze/optimize plans on many cores at the same time. This will also increase your cluster utilization if your jobs have any skew in the reduce phases of processing.
Investigate the benefit of Spark's analysis/optimization to see if you can introduce analysis barriers to speed up transformations.
This is impossible because the source DataFrame affects the execution of the optimizations applied to the plan.
As @EnzoBnl pointed out, this is not possible because Tungsten applies optimisations specific to the object. What you could do instead (if possible with your data) is split your large file into smaller chunks that can be shared between the multiple input dataframes, and use persist() or checkpoint() on them.
Specifically, checkpoint makes the execution plan shorter by storing a mid-point, but there is no way to reuse a plan across different sources (a sketch follows the quoted documentation below).
See
Data checkpointing - Saving of the generated RDDs to reliable storage. This is necessary in some stateful transformations that combine data across multiple batches. In such transformations, the generated RDDs depend on RDDs of previous batches, which causes the length of the dependency chain to keep increasing with time. To avoid such unbounded increases in recovery time (proportional to dependency chain), intermediate RDDs of stateful transformations are periodically checkpointed to reliable storage (e.g. HDFS) to cut off the dependency chains.
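Here is the sketch mentioned above. buildExpensivePipeline is a stand-in for whatever produces the 20-minute plan, and the paths and column name are placeholders; checkpoint shortens the plan for everything downstream, but it does not let you replay the plan against a different source:

```scala
import org.apache.spark.sql.DataFrame

// Placeholder for the transformations behind the expensive plan
def buildExpensivePipeline(df: DataFrame): DataFrame = df

spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")

val base  = spark.read.parquet("input.parquet")
val heavy = buildExpensivePipeline(base)

// Materialises the result to reliable storage and truncates the lineage, so later
// stages plan against a short, flat scan instead of the full plan
val shortPlan = heavy.checkpoint()
shortPlan.groupBy("key").count().show()
```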

Difference between one-pass and multi-pass computations

I'm reading an article on Apache Spark and I came across the following sentence:
"Hadoop as a big data processing technology has been around for 10 years and has proven to be the solution of choice for processing large data sets. MapReduce is a great solution for one-pass computations, but not very efficient for use cases that require multi-pass computations and algorithms." (Full article)
Searching the web yields results about the difference between one-pass and multi-pass compilers (For instance, see This SO question)
However, I'm not really sure whether that answer also applies to data processing. Can somebody explain to me what one-pass computation and multi-pass computation are, why the latter matters, and why Spark handles it better?
Map Reduce
Source : https://www.guru99.com/introduction-to-mapreduce.html
Here you can see how the input file is processed: it is first split, the splits go through the mapping phase, then shuffling, and finally the reducer.
In the MapReduce paradigm, the intermediate result is written to disk after every stage. The mapper and the reducer are also two different processes: first the mapper job runs and spits out the map output files, then the reducer job starts. Every stage requires its own resource allocation, so a multi-stage workflow needs multiple MapReduce iterations, and if you have multiple map phases, after every map the data has to be written out to disk before the next map task starts. This is the multi-step process.
Each step in the data processing workflow has one map phase and one reduce phase, and you need to convert any use case into the MapReduce pattern to leverage this solution.
Spark
On the other hand, Spark does the resource negotiation only once. Once the negotiation is completed, it spawns all the executors, and they stay up for the entire duration of the job.
During execution, Spark doesn't write the intermediate output of the map phases to disk, but keeps it in memory. Therefore all the map operations can happen back to back, without writing to disk or spawning new executors. This is the single-step process.
Spark allows programmers to develop complex, multi-step data pipelines using a directed acyclic graph (DAG) pattern. It also supports in-memory data sharing across DAGs, so that different jobs can work with the same data.
A one-pass computation reads the dataset once, whereas in a multi-pass computation the dataset is read from disk once and then multiple computations or operations are run over it. Apache Spark lets you read the data once, cache it in memory, and then perform multi-pass computations over it. Those computations are very fast because the data is already in the machine's memory, so Spark does not need to read it from disk again, which saves a lot of I/O time. That is what calling Spark an in-memory processing framework means: the data, and the transformations computed over it, live in memory.
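A minimal sketch of the difference (access.log is a placeholder): the data is read from disk once, cached, and then several passes run against the in-memory copy:

```scala
import spark.implicits._

val logs = spark.read.textFile("access.log").cache()

val total   = logs.count()                                         // first action reads from disk and caches
val errors  = logs.filter(_.contains("ERROR")).count()             // second pass, served from memory
val longest = logs.map(_.length).reduce((a, b) => math.max(a, b))  // third pass, served from memory
```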

Is Tachyon by default implemented by the RDD's in Apache Spark?

I'm trying to understand Spark's in-memory feature. In the process I came across Tachyon, which is basically an in-memory data layer that provides fault tolerance without replication by using lineage, and reduces recomputation by checkpointing datasets. Where I got confused is that all of these features are also achievable with Spark's standard RDDs. So I wonder: do RDDs implement Tachyon behind the curtains to provide these features? If not, what is the use of Tachyon, when everything it does can be done by standard RDDs? Or am I making some mistake in relating these two? A detailed explanation or a link to one would be a great help. Thank you.
What is in the paper you linked does not reflect the reality of Tachyon as a released open-source project; parts of that paper have only ever existed as research prototypes and have never been fully integrated into Spark/Tachyon.
When you persist data at the OFF_HEAP storage level via rdd.persist(StorageLevel.OFF_HEAP), Spark uses Tachyon to write that data into Tachyon's memory space as a file. This removes it from the Java heap, giving Spark more heap memory to work with.
It does not currently write the lineage information, so if your data is too large to fit into your configured Tachyon cluster's memory, portions of the RDD will be lost and your Spark jobs can fail.

Resources