When a particular task fails and causes an RDD to be recomputed from its lineage (perhaps by reading the input file again), how does Spark ensure that there is no duplicate processing of data? What if the failed task had already written half of its data to some output like HDFS or Kafka? Will it write that part of the data again?
Is this related to exactly-once processing?
Output operations have at-least-once semantics by default. The function passed to foreachRDD can execute more than once if there is a worker failure, writing the same data to external storage multiple times. There are two approaches to solving this issue: idempotent updates and transactional updates. Both are discussed further in the article linked below.
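As a minimal sketch of the transactional approach, assuming the direct Kafka API: here `stream` is taken to be a DStream of ConsumerRecords, and saveResultsAndOffsetsInTransaction is a hypothetical helper that writes the batch output and the Kafka offsets in a single database transaction.

import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}

stream.foreachRDD { rdd =>
  // The Kafka offsets consumed by this micro-batch uniquely identify it.
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  val results = rdd.map(record => record.value.toUpperCase).collect()  // placeholder transformation

  // Because results and offsets are committed atomically, a replayed task can see that
  // these offsets were already written and skip (or roll back) the duplicate write.
  saveResultsAndOffsetsInTransaction(results, offsetRanges)            // hypothetical helper
}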
Further reading
http://shzhangji.com/blog/2017/07/31/how-to-achieve-exactly-once-semantics-in-spark-streaming/
I'm reading an article on Apache Spark and I came across the following sentence:
"Hadoop as a big data processing technology has been around for 10 years and has proven to be the solution of choice for processing large data sets. MapReduce is a great solution for one-pass computations, but not very efficient for use cases that require multi-pass computations and algorithms." (Full article)
Searching the web yields results about the difference between one-pass and multi-pass compilers (for instance, see this SO question).
However, I'm not really sure whether that answer also applies to data processing. Can somebody explain to me what one-pass and multi-pass computations are, why the latter is better, and why it is therefore used in Spark?
MapReduce
Source: https://www.guru99.com/introduction-to-mapreduce.html
Here you can see how the input file is processed:
Splitting: the input file is divided into splits
Mapping: each split goes through the map phase
Shuffling: the map output is grouped and sorted by key
Reducing: the reducer aggregates the shuffled data
In the MapReduce paradigm, the intermediate result is written to disk after every stage. The mapper and reducer are also two different processes: first the map tasks run and write out their map output files, then the reduce tasks start. At every stage the job requires resource allocation, so a single MapReduce workflow ends up requiring multiple such iterations. If you have multiple map phases, after every map the data needs to be written out to disk before the next map task starts. This is the multi-step process.
Each step in the data processing workflow has one map phase and one reduce phase, and you need to convert any use case into this MapReduce pattern to leverage the solution.
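To make the pattern concrete, here is the classic word count expressed as a single map phase followed by a reduce phase. The sketch uses Spark's RDD API and hypothetical HDFS paths, but the map/shuffle/reduce shape is the same one a Hadoop MapReduce job implements.

import org.apache.spark.{SparkConf, SparkContext}

// Classic word count: one map phase, one shuffle, one reduce phase.
// Input and output paths are hypothetical.
val sc = new SparkContext(new SparkConf().setAppName("wordcount"))
sc.textFile("hdfs:///data/input")                 // splitting: read the input splits
  .flatMap(line => line.split("\\s+"))            // mapping: emit one record per word
  .map(word => (word, 1))                         // mapping: key-value pairs
  .reduceByKey(_ + _)                             // shuffling + reducing: sum counts per key
  .saveAsTextFile("hdfs:///data/wordcounts")      // write the final result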
Spark
Spark, on the other hand, does the resource negotiation only once. Once the negotiation is completed, it spawns all the executors, and they stay up for the entire lifetime of the job.
During execution, Spark doesn't write the intermediate output of the map phases to disk; it keeps it in memory. Therefore all the map operations can happen back to back without writing to disk or spawning new executors. This is the single-step process.
Spark allows programmers to develop complex, multi-step data pipelines using the directed acyclic graph (DAG) pattern. It also supports in-memory data sharing across DAGs, so that different jobs can work with the same data.
A one-pass computation reads the dataset once, whereas a multi-pass computation reads the dataset from disk once and then performs multiple computations or operations on it. Apache Spark lets you read the data once, cache it in memory, and then run multi-pass computations over it. These computations are fast because the data is already in the machines' memory and Spark does not need to read it from disk again, which saves a lot of I/O time. By definition Spark is an in-memory processing framework, meaning that the data, and the transformations being computed on it, live in memory.
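A minimal sketch of that multi-pass pattern (the input path is hypothetical): the data is read from disk once, cached, and then several independent computations run against the in-memory copy.

import org.apache.spark.{SparkConf, SparkContext}

// Read once, cache in memory, then make several passes without re-reading from disk.
val sc = new SparkContext(new SparkConf().setAppName("multipass"))
val lines = sc.textFile("hdfs:///data/events.log")      // hypothetical path
lines.cache()                                            // materialized by the first action below

val total   = lines.count()                              // pass 1: reads from disk and fills the cache
val errors  = lines.filter(_.contains("ERROR")).count()  // pass 2: served from memory
val longest = lines.map(_.length).max()                  // pass 3: served from memory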
In CouchDB and system designs like Incoop, there's a concept called "Incremental MapReduce" where results from previous executions of a MapReduce algorithm are saved and used to skip over sections of input data that haven't been changed.
Say I have 1 million rows divided into 20 partitions. If I run a simple MapReduce over this data, I could cache/store the result of reducing each separate partition before they're combined and reduced again to produce the final result. If I only change data in the 19th partition, then I only need to run the map and reduce steps on the changed section of the data, and then combine the new result with the saved reduce results from the unchanged partitions to get an updated result. Using this sort of caching I'd be able to skip almost 95% of the work when re-running a MapReduce job on this hypothetical dataset.
Is there any good way to apply this pattern to Spark? I know I could write my own tool for splitting up input data into partitions, checking if I've already processed those partitions before, loading them from a cache if I have, and then running the final reduce to join all the partitions together. However, I suspect that there's an easier way to approach this.
I've experimented with checkpointing in Spark Streaming, and that is able to store results between restarts, which is almost what I'm looking for, but I want to do this outside of a streaming job.
RDD caching/persisting/checkpointing almost looks like something I could build off of - it makes it easy to keep intermediate computations around and reference them later, but I think cached RDDs are always removed once the SparkContext is stopped, even if they're persisted to disk. So caching wouldn't work for storing results between restarts. Also, I'm not sure if/how checkpointed RDDs are supposed to be loaded when a new SparkContext is started... They seem to be stored under a UUID in the checkpoint directory that's specific to a single instance of the SparkContext.
Both use cases suggested by the article (incremental log processing and incremental query processing) can generally be solved by Spark Streaming.
The idea is that you have incremental updates coming in via the DStream abstraction. You can then process the new data and join it with previous results, either using time-window-based processing or using arbitrary stateful operations in Structured Streaming. The results of the calculation can later be dumped to some sort of external sink such as a database or file system, or they can be exposed as an SQL table.
If you're not building an online data processing system, regular Spark can be used as well. It's just a matter of how incremental updates get into the process and how intermediate state is saved. For example, incremental updates can appear under some path on a distributed file system, while the intermediate state containing the previous computation joined with the new data can be dumped, again, to the same file system.
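A minimal sketch of that batch variant, with hypothetical HDFS paths: the previously computed per-key state is read back, combined with the new increment, and the updated state is written out for the next run.

import org.apache.spark.{SparkConf, SparkContext}

// Keep previously computed per-key counts on HDFS, combine them with the new
// increment, and write the updated state back. All paths are hypothetical.
val sc = new SparkContext(new SparkConf().setAppName("incremental-batch"))

val previousState = sc.textFile("hdfs:///state/current")         // "key,count" lines from the last run
  .map { line => val Array(k, v) = line.split(","); (k, v.toLong) }

val newData = sc.textFile("hdfs:///incoming/2017-07-31")          // only the new increment
  .map(key => (key, 1L))
  .reduceByKey(_ + _)

val updatedState = previousState.union(newData).reduceByKey(_ + _)
updatedState.map { case (k, v) => s"$k,$v" }.saveAsTextFile("hdfs:///state/next")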
I understand the automated mapping that exists between a Kafka partition, a Spark RDD partition, and ultimately a Spark task. However, in order to properly size my executors (in number of cores), and therefore ultimately my nodes and cluster, I need to understand something that seems to be glossed over in the documentation.
In Spark Streaming, how exactly do data consumption, data processing and task allocation work together? In other words:
Does the Spark task corresponding to a Kafka partition both read and process the data?
The rationale behind this question is that in the previous, receiver-based API, a task was dedicated to receiving the data, meaning a number of the task slots of your executors were reserved for data ingestion and the others were there for processing. This had an impact on how you sized your executors in terms of cores.
Take for example the advice on how to launch Spark Streaming with --master local. Everyone would tell you that for Spark Streaming you should use at least local[2], because one of the cores will be dedicated to running the long receiving task that never ends, and the other core will do the data processing.
So if the answer is that, in this case, a single task does both the reading and the processing at once, then the question that follows is whether that is really smart; it does not sound asynchronous. We want to be able to fetch while we process, so that when the next processing starts the data is already there. However, if there is only one core, or more precisely one task, to both read the data and process it, how can both be done in parallel, and how does that make things faster in general?
My original understanding was that things would have remained somewhat the same, in the sense that a task would be launched to read, but that the processing would be done in another task. That would mean that, if the processing task is not done yet, we could still keep reading, up to a certain memory limit.
Can someone clearly outline what exactly is going on here?
EDIT1
We don't even need this memory-limit control. Just the mere fact of being able to fetch while the processing is going on, and stopping right there, would be enough. In other words, the two processes should be asynchronous, with the limit simply being to stay one step ahead. To me, if somehow this is not happening, it would be extremely strange for Spark to implement something that breaks performance like that.
Does the Spark task corresponding to a Kafka partition both read and process the data?
The relationship is very close to what you describe, if by "task" we're referring to the part of the graph that reads from Kafka up until a shuffle operation. The flow of execution is as follows:
The driver reads offsets from all Kafka topics and partitions.
The driver assigns each executor a topic and partition to be read and processed.
Unless there is a shuffle boundary operation, it is likely that Spark will optimize the entire execution of the partition onto the same executor.
This means that a single executor will read a given TopicPartition and process the entire execution graph on it, unless we need to shuffle. Since a Kafka partition maps to a partition inside the RDD, we get that guarantee.
Structured Streaming takes this even further. In Structured Streaming, there is stickiness between the TopicPartition and the worker/executor. Meaning, if a given worker was assigned a TopicPartition it is likely to continue processing it for the entire lifetime of the application.
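For reference, a minimal sketch of setting up such a direct stream with the spark-streaming-kafka-0-10 integration (broker address, group id and topic name are hypothetical); each Kafka partition becomes one RDD partition, and the task that reads it also runs the narrow transformations on it, up to the first shuffle.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

val conf = new SparkConf().setAppName("direct-kafka")
val ssc  = new StreamingContext(conf, Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "broker:9092",                 // hypothetical broker
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "example-group")               // hypothetical group id

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,                   // stable partition -> executor assignment
  ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams))

// Each Kafka partition maps 1:1 to an RDD partition; the task that reads a
// partition also runs the narrow transformations on it.
stream.map(record => record.value).foreachRDD(rdd => println(rdd.count()))

ssc.start()
ssc.awaitTermination()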
I am a beginner with Apache Spark. I am trying to run a streaming job which receives some data, converts it into a DataFrame, and runs some processing like joins, removing duplicates, etc. Now I have to cache this processed data so that I can append it to the next DStream (using some union/join) and do the processing again.
I tried using dataframe.cache() to cache and reuse it in the next stream batch.
For example, if df is the DataFrame built from each RDD of the DStream, and processed is a DataFrame variable holding the result of the previous batch:
dstream.foreachRDD { rdd =>
  val df = sqlContext.createDataFrame(rdd, schema)  // current micro-batch as a DataFrame (schema assumed)
  val combined = df.unionAll(processed)             // `new` is a reserved word in Scala, so renamed
  combined.registerTempTable("TableScheme")
  sqlContext.sql("...")                             // perform the inner join and some other processing
  processed = combined
  processed.cache()
}
When we call DataFrame.cache() or DataFrame.persist(), are we caching the actual data or the DAG / transformations applied? When the second stream batch arrives, my program exits with:
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
org.apache.spark.SparkException: Attempted to use BlockRDD[1] at socketTextStream after its blocks have been removed!
The cache() call marks the DataFrame so that, the first time an action materializes it, the result of the processing defined in the DAG up to the cache() point is stored, and the computation is not repeated should more than one action be performed.
What you want is some persistence across stream batches.
There are several ways to do that:
streamingContext.remember(Minutes(5)) holds on to the data from previous batches for the specified duration (here, five minutes)
windowing moves a fixed-length time window across the data, allowing you to perform operations on more than one batch of data
updateStateByKey and mapWithState provide mechanisms to maintain and transform state across batches (see the sketch after this list)
The approach you choose will depend largely on your use-case.
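As an illustration of the last option, a minimal mapWithState sketch that keeps a running count per key across batches; `pairs` is assumed to be a DStream[(String, Int)], and checkpointing must be enabled on the StreamingContext.

import org.apache.spark.streaming.{State, StateSpec}

// Running count per key, maintained across micro-batches.
val spec = StateSpec.function { (key: String, value: Option[Int], state: State[Int]) =>
  val newCount = state.getOption.getOrElse(0) + value.getOrElse(0)
  state.update(newCount)   // state carried over to the next batch
  (key, newCount)          // emitted downstream for the current batch
}

val runningCounts = pairs.mapWithState(spec)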
I see some failed batches in my Spark Streaming application because of memory-related issues such as
Could not compute split, block input-0-1464774108087 not found
, and I was wondering if there is a way to reprocess those batches on the side without messing with the currently running application. In general, it does not have to be this exact exception.
Thanks in advance
Pradeep
This may happen when your data ingestion rate into Spark is higher than what the allocated memory can hold. You can try changing the StorageLevel to MEMORY_AND_DISK_SER so that when memory runs low, Spark can spill data to disk. This should prevent your error.
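For a receiver-based source, for example, the storage level can be passed when the stream is created; in this sketch ssc is assumed to be the StreamingContext, and the host and port are hypothetical.

import org.apache.spark.storage.StorageLevel

// Receiver-based socket stream created with an explicit storage level, so received
// blocks are serialized and can spill to disk when memory is low.
val lines = ssc.socketTextStream("stream-host", 9999, StorageLevel.MEMORY_AND_DISK_SER)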
Also, I don't think this error means that any data was lost during processing; rather, the input block that was added by your block manager timed out before processing started.
Check the similar question on the Spark user list.
Edit:
Data is not lost; it was just not present where the task was expecting it to be. As per the Spark docs:
You can mark an RDD to be persisted using the persist() or cache() methods on it. The first time it is computed in an action, it will be kept in memory on the nodes. Spark's cache is fault-tolerant: if any partition of an RDD is lost, it will automatically be recomputed using the transformations that originally created it.