What is Lineage In Spark? - apache-spark

How does lineage help to recompute data?
For example, say I have several nodes, each computing data for 30 minutes. If one of them fails after 15 minutes, can we use lineage to recompute the data processed in those 15 minutes without spending another 15 minutes on it?

Everything you need to understand about lineage is in the definition of an RDD.
So let's review that:
RDDs are immutable, distributed collections of elements of your data that can be stored in memory or on disk across a cluster of machines. The data is partitioned across the machines in your cluster and can be operated on in parallel with a low-level API that offers transformations and actions. RDDs are fault tolerant because they track lineage information, which lets them rebuild lost data automatically on failure.
So there are mainly two things to understand:
How does lineage get passed down in RDDs?
How does Spark work internally?
Unfortunately, these topics are quite long to discuss in a single answer. I recommend you take some time reading them along with this following article about Data Lineage.
And now to answer your question and doubts:
If an executor fails after 15 minutes of computing your data, Spark will go back to your last checkpoint, whether that is the original source or a cache in memory and/or on disk.
So no, it will not save you those 15 minutes you mentioned!
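As a rough illustration (the input path and the parsing step below are hypothetical), persisting an intermediate RDD is what creates such a "restart point": a later failure only re-runs the part of the lineage below the persisted RDD, provided its cached blocks survive; otherwise Spark falls back to the full lineage from the source.

import org.apache.spark.storage.StorageLevel

// Sketch only: "hdfs:///data/events" and the parsing lambda are placeholders.
val raw    = sc.textFile("hdfs:///data/events")
val parsed = raw.map(line => line.trim.toLowerCase)         // stand-in for an expensive step
val mid    = parsed.persist(StorageLevel.MEMORY_AND_DISK)   // lineage "restart point"

// If an executor dies while computing `result`, Spark rebuilds only the lost
// partitions, and only from `mid` if their cached blocks are still available;
// otherwise it recomputes the full lineage starting at the text file.
val result = mid.filter(_.nonEmpty).count()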

When a transformation (map, filter, etc.) is called, Spark does not execute it immediately; instead, a lineage is created for each transformation. The lineage keeps track of all the transformations that have to be applied on that RDD, including the location from which it has to read the data.
For example, consider the following:
val myRdd = sc.textFile("spam.txt")
val filteredRdd = myRdd.filter(line => line.contains("wonder"))
filteredRdd.count()
sc.textFile() and myRdd.filter() do not get executed immediately; they are executed only when an Action is called on the RDD, here filteredRdd.count().
An Action is used either to save a result to some location or to display it. The RDD's lineage information can also be printed with filteredRdd.toDebugString (filteredRdd being the RDD here). The DAG Visualization in the Spark UI also shows the complete graph in a very intuitive manner.
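For instance, continuing the snippet above (the output below is abridged and its exact shape depends on the Spark version):

println(filteredRdd.toDebugString)
// prints something along the lines of:
// (2) MapPartitionsRDD[2] at filter at <console>:26 []
//  |  spam.txt MapPartitionsRDD[1] at textFile at <console>:24 []
//  |  spam.txt HadoopRDD[0] at textFile at <console>:24 []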

In Spark, the lineage graph is the graph of dependencies between an existing RDD and a new RDD.
It means that all the dependencies between the RDDs are recorded in a graph, rather than the original data.
Source: What is Lineage Graph

DEF: The Spark lineage graph is the set of dependencies between RDDs.
• Lineage graphs are maintained separately for each Spark application.
• The lineage graph is used to recompute RDDs on demand and to recover lost data if parts of a persisted RDD are lost.
• Note: be careful and do not confuse the lineage graph with the DAG (the physical execution graph of stages).
• Actions force the evaluation of all (upstream) transformations in the lineage graph of the RDD they are called on.


What happens to the previous RDD when it gets transformed into a new RDD?

I am a beginner in Apache Spark. What I have understood so far regarding RDDs is that, once an RDD is generated, it cannot be modified but can be transformed into another RDD. Now I have several doubts here:
If an RDD is transformed into another RDD by applying some transformation on it, then what happens to the previous RDD? Are both RDDs stored in memory?
If an RDD is cached, and some transformation is applied on the cached RDD to generate a new RDD, then can there be a scenario where there is not enough space in RAM to hold the newly generated RDD? If such a scenario occurs, how will Spark handle it?
Thanks in advance!
Due to Spark's lazy evaluation, nothing happens when you do transformations on RDDs. Spark only starts computing when you call an action (save, take, collect, ...). So to answer your questions,
The original RDD stays where it is, and the transformed RDD does not exist / has not been computed yet due to lazy evaluation. Only a query plan is generated for it.
Regardless of whether the original RDD is cached, the transformed RDD will not be computed until an action is called. Therefore running out of memory shouldn't happen.
Normally when you run out of memory, either you encounter an OOM error, or the cached RDDs in memory will be spilled onto the disk to free up memory.
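As a small sketch of that spill behaviour: choosing a storage level that allows disk means partitions that do not fit in RAM are written to local disk rather than dropped (the default RDD level, MEMORY_ONLY, simply drops them and recomputes later). The data here is synthetic, just for illustration.

import org.apache.spark.storage.StorageLevel

val big = sc.parallelize(1 to 1000000).map(i => (i, "x" * 100))
big.persist(StorageLevel.MEMORY_AND_DISK)   // blocks that don't fit in memory spill to disk
println(big.count())                        // the first action materializes the persisted blocks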
In order to understand the answers to your questions, you need to know a couple of things about Spark:
Spark evaluation Model (Lazy Evaluation)
Spark Operations (Transformations and Actions)
Directed Acyclic Graph (DAG)
Answer to your first question:
You can think of an RDD as a virtual data structure that does not get filled with values until some action called on it materializes the RDD/DataFrame. When you perform transformations, Spark just builds a query plan, which reflects its lazy evaluation behavior. When an action is called, it performs all the transformations based on the physical plan that gets generated. So, nothing happens to the RDDs themselves; RDD data gets pulled into memory only when an action is called.
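As a minimal sketch of that "query plan" point (assuming spark is the SparkSession, and using a DataFrame because that is where the optimizer's plans are easiest to see), you can inspect the plan with explain() before any action runs:

import org.apache.spark.sql.functions.col

val df          = spark.range(0, 1000).toDF("id")
val transformed = df.filter(col("id") % 2 === 0).groupBy().count()

transformed.explain(true)   // prints the logical and physical plans; nothing has executed yet
transformed.show()          // the action: only now is the physical plan actually run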
Answer to your second question:
If an RDD is cached and you perform multiple transformations on top of the cached RDD, nothing actually happens to the RDDs yet, because cache is itself lazy: the cached RDD will only be in memory once some action is performed. So you won't run out of memory just by defining transformations.
You could run into memory issues if you try to cache every step of the transformation chain, which should be avoided. (Whether or not to cache a DataFrame/RDD is a million-dollar question as a beginner, but you get a feel for it as you learn the basics and the Spark architecture.)
The other workflow where you can run out of memory is when you have a huge amount of data and you cache the RDD after some transformation because you want to perform multiple actions on it, or because it is used in the workflow multiple times. In this case you need to verify your cluster configuration and make sure that it can handle the data you intend to cache. A sketch of this pattern follows below.
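A hedged sketch of that "used multiple times" case (the HDFS path is a placeholder): cache once, run the actions that need the data, then release the memory explicitly.

// Sketch only: "hdfs:///data/app.log" is a hypothetical input path.
val logs    = sc.textFile("hdfs:///data/app.log")
val cleaned = logs.filter(_.nonEmpty).cache()              // reused by two actions below

val total   = cleaned.count()                              // first action materializes the cache
val errors  = cleaned.filter(_.contains("ERROR")).count()  // served from the cached blocks
cleaned.unpersist()                                        // free executor memory once done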

what if I use an action like createdataframe() to break a very long lineage rather than checkpoint? [duplicate]

I have a recursive spark algorithm that applies a sliding window of 10 days to a Dataset.
The original dataset is loaded from a Hive table partitioned by date.
At each iteration a complex set of operations is applied to Dataset containing the ten day window.
The last date is then inserted back into the original Hive table and the next date loaded from Hive and unioned to the remaining nine days.
I realise that I need to break the Spark lineage to prevent the DAG from growing unmanageably large.
I believe I have two options:
Checkpointing - involves a costly write to HDFS.
Convert to rdd and back again
spark.createDataset(myDS.rdd)
Are there any disadvantages to using the second option? I am assuming it is an in-memory operation and is therefore cheaper.
Checkpointing and converting back to RDD are indeed the best/only ways to truncate lineage.
Many (all?) of the Spark ML Dataset/DataFrame algorithms are actually implemented using RDDs, even though the exposed APIs are DS/DF, because of the optimizer not being parallelized and the lineage size that iterative/recursive implementations produce.
There is a cost to converting to and from RDD, but it is smaller than the file-system checkpointing option.
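A minimal sketch of the two options, assuming a SparkSession named spark, a toy Dataset standing in for the ten-day window, and a hypothetical checkpoint directory:

import spark.implicits._

val windowDS = Seq((1, "a"), (2, "b"), (3, "c")).toDS()       // stand-in for the ten-day window

// Option 1: checkpoint - reliable, but the data is written out to the checkpoint dir.
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints") // hypothetical location
val viaCheckpoint = windowDS.checkpoint()                     // eager by default; cuts the plan

// Option 2: round-trip through the RDD - the new Dataset's plan starts at the RDD scan,
// so the accumulated logical plan is dropped without a file-system write (but, unlike a
// checkpoint, the data is not protected if the executors holding it are lost).
val viaRdd = spark.createDataset(windowDS.rdd)

println(viaCheckpoint.count())
println(viaRdd.count())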

Effective Memory Management in Spark?

Is there a defined standard for effective memory management in Spark?
What if I end up creating a couple of DataFrames or RDDs and then keep reducing that data with joins and aggregations?
Will these DataFrames or RDDs still be holding resources until the session or job is complete?
No, there is not. The lifetime of the main entity in Spark, the RDD, is defined via its lineage. When your job makes a call to an action, the whole DAG starts getting executed. If the job executes successfully, Spark releases all the reserved resources; otherwise it will try to re-execute the failed tasks, reconstructing the lost RDDs based on their lineage.
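As a hedged illustration of that lifecycle (assuming a SparkSession named spark is in scope; the column values below are made up): the joins and aggregations only build a plan, resources are engaged when the action runs, and anything you cached yourself can be released early with unpersist() instead of waiting for the job or session to end.

import spark.implicits._

// Hypothetical tiny inputs standing in for the real DataFrames.
val orders = Seq((1, 10.0), (2, 20.0), (1, 5.0)).toDF("userId", "amount")
val users  = Seq((1, "alice"), (2, "bob")).toDF("userId", "name")

val joined  = orders.join(users, "userId")           // just a plan, nothing held yet
val perUser = joined.groupBy("name").sum("amount")   // still just a plan

perUser.cache()      // marks it for caching; materialized on the first action
perUser.show()       // the action: the whole DAG runs here
perUser.unpersist()  // release the cached blocks without waiting for the job/session to end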
Please check the following resources to get familiar with these concepts:
What is Lineage In Spark?
What is the difference between RDD Lineage Graph and Directed Acyclic Graph (DAG) in Spark?

Misunderstanding of spark RDD fault tolerant

Many say:
Spark does not replicate data in hdfs.
Spark arranges the operations in a DAG and builds the RDD lineage. If an RDD is lost, it can be rebuilt with the help of the lineage graph.
So there is no need for data replication, as the RDDs can be recalculated from the lineage graph.
And my question is:
If a node fails, Spark will only recompute the RDD partitions lost on that node, but where does the data needed for the recomputation come from? Do you mean its parent RDD is still there when the node fails? And what if the RDD that lost some partitions has no parent RDD (for example, an RDD coming from a Spark Streaming receiver)?
What if we lose something partway through a computation?
• Rely on the key insight from MapReduce: determinism provides safe recompute.
• Track the 'lineage' of each RDD; it can be recomputed from its parents if needed.
• Interestingly, only a tiny amount of state needs to be recorded to do the recompute: the parent pointer, the function applied, and a few other bits.
• Log ~10 KB per transform rather than re-outputting 1 TB -> 2 TB.
Source
The child RDD is metadata that describes how to calculate the RDD from the parent RDD. Read more in What is RDD dependency in Spark?
If a node fails, Spark will only recompute the RDD partitions lost on that node, but where does the data needed for the recomputation come from? Do you mean its parent RDD is still there when the node fails?
The core idea is that you can use the lineage to recover lost RDDs because RDDs are
built from another RDD or
built from data in stable storage.
(source: RDD paper, beginning of section 2.1)
If some RDD is lost, you can just go back in the lineage until you reach some RDD or the initial data record that is still available.
The data in stable storage is replicated across multiple nodes, therefore unlikely to be lost.
From what I've read about streaming receivers, the received data seems to be saved to stable storage as well, so it behaves just like any other data source.
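To make the "child RDD is metadata" point above concrete, here is a small sketch (assuming sc is the SparkContext, as in the earlier snippets) that walks the recorded parent dependencies, i.e. exactly the information replayed during lineage-based recovery:

val base     = sc.parallelize(1 to 100)
val doubled  = base.map(_ * 2)
val filtered = doubled.filter(_ % 3 == 0)

// Each RDD records its parent(s) plus the function to apply to them.
filtered.dependencies.foreach(dep => println(dep.rdd))
println(filtered.toDebugString)   // the same lineage, pretty-printed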

Resources