I have a pretty typical RDD scenario where I gather some data, persist it, and then use the persisted RDD multiple times for various transforms. Persisting speeds things up by an order of magnitude, so persisting is definitely warranted.
But I'm surprised at the relative speed of the different methods of persisting. If I persist using MEMORY_AND_DISK, each subsequent use of the persisted RDD takes about 10% longer than if I use MEMORY_ONLY. Why is that? I would have expected them to have the same speed if the data fits in memory, and I expected MEMORY_AND_DISK to be faster if some partitions don't fit in memory. Why do my timings consistently not show that to be true?
Your CPU typically accesses main memory at around 10 GB/s, whereas an SSD is closer to 600 MB/s.
The partitions that don't fit into memory when MEMORY_ONLY is chosen are recomputed using the parent RDD's partitioning. If you have no wide dependency, that should be fine.
It is impossible to tell without more context, but there are at least two cases where MEMORY_AND_DISK actually has to touch the disk:
Data is larger than available memory - with MEMORY_AND_DISK, partitions that don't fit in memory are stored on disk.
Partitions have been evicted from memory - with MEMORY_AND_DISK they are stored on disk, while with MEMORY_ONLY they are lost and have to be recomputed, and the eviction might trigger a large GC sweep.
Finally, you have to remember that _DISK can use different levels of hardware and software caching, so different blocks might be accessed at a speed comparable to main memory.
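If you want to measure this on your own data, a quick way is to time the same action under both storage levels. This is only a rough sketch and assumes a spark-shell session; the input path and the map/reduce job are placeholders for your own workload.

import org.apache.spark.storage.StorageLevel

def timed[T](label: String)(body: => T): T = {
  val start = System.nanoTime()
  val result = body
  println(f"$label: ${(System.nanoTime() - start) / 1e9}%.2f s")
  result
}

val rdd = spark.sparkContext.textFile("hdfs:///some/input")  // placeholder input path

rdd.persist(StorageLevel.MEMORY_ONLY)
rdd.count()                                                  // materialize the cache
timed("MEMORY_ONLY")(rdd.map(_.length).reduce(_ + _))
rdd.unpersist()

rdd.persist(StorageLevel.MEMORY_AND_DISK)                    // re-persisting is allowed after unpersist
rdd.count()
timed("MEMORY_AND_DISK")(rdd.map(_.length).reduce(_ + _))
rdd.unpersist()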
I'm new to Spark and I can't find a clear answer to the question: what happens when cached data does not fit in memory?
In many places I've read that if the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed.
For example: let's say 500 partitions are created and 200 of them didn't get cached; then those 200 partitions have to be recomputed by re-evaluating the RDD each time they are needed.
If that is the case, then an OOM error should never occur, but it does. What is the reason?
A detailed explanation would be highly appreciated. Thanks in advance.
There are different ways you can persist your dataframe in Spark.
1) Persist (MEMORY_ONLY)
When you persist a data frame with MEMORY_ONLY, it is cached in Spark's storage memory as deserialized Java objects. If the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed. This is the default level and can sometimes cause OOM when the RDD is too big and cannot fit in memory (it can also occur after a recomputation).
To answer your question:
"If that is the case, an OOM error should never occur, but it does. What is the reason?"
Even after recomputation you need to fit those RDD partitions in memory. If there is no space available, the GC will try to free some and allocate again; if that does not succeed, the job fails with an OOM.
2) Persist (MEMORY_AND_DISK)
When you persist a data frame with MEMORY_AND_DISK, it is cached in Spark's storage memory as deserialized Java objects; if there is not enough heap memory available, partitions are spilled to disk. To deal with memory pressure it spills part of the data, or all of it, to disk. (Note: make sure the nodes have enough disk space, otherwise no-disk-space errors will pop up.)
3) Persist (MEMORY_ONLY_SER)
When you persist a data frame with MEMORY_ONLY_SER, it is cached in Spark's storage memory as serialized Java objects (one byte array per partition). This is generally more space-efficient than MEMORY_ONLY, but it is more CPU-intensive because serialization is involved (the general suggestion here is to use Kryo for serialization). It still faces OOM issues similar to MEMORY_ONLY.
4) Persist (MEMORY_AND_DISK_SER)
This is similar to MEMORY_ONLY_SER, with one difference: when no heap space is available, the serialized partitions are spilled to disk, the same as with MEMORY_AND_DISK. You can use this option when you have a tight constraint on disk space and want to reduce IO traffic.
5) Persist (DISK_ONLY)
In this case, heap memory is not used for caching; RDDs are persisted to disk. Make sure there is enough disk space, and note that this option has a huge IO overhead. Don't use this for dataframes that are used repeatedly.
6) Persist (MEMORY_ONLY_2 or MEMORY_AND_DISK_2)
These are similar to the MEMORY_ONLY and MEMORY_AND_DISK levels above; the only difference is that these options replicate each partition on two cluster nodes, just to be on the safe side. Use these options when you are running on spot instances.
7) Persist (OFF_HEAP)
Off-heap memory generally holds thread stacks, the Spark container's application code, network IO buffers, and other OS buffers. With this option you can also use that part of RAM, outside the JVM heap, for caching your RDD.
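For completeness, this is how those levels are requested in code. It is only a sketch: the dataframe and input path are placeholders, and OFF_HEAP additionally requires the spark.memory.offHeap.* settings to be configured.

import org.apache.spark.storage.StorageLevel

val df = spark.read.parquet("hdfs:///some/table")   // placeholder input

df.persist(StorageLevel.MEMORY_ONLY)                // 1) deserialized, recompute partitions that don't fit
df.unpersist()
df.persist(StorageLevel.MEMORY_AND_DISK)            // 2) partitions that don't fit spill to disk
df.unpersist()
df.persist(StorageLevel.MEMORY_ONLY_SER)            // 3) serialized, more compact but more CPU
df.unpersist()
df.persist(StorageLevel.MEMORY_AND_DISK_SER)        // 4) serialized, spills to disk when needed
df.unpersist()
df.persist(StorageLevel.DISK_ONLY)                  // 5) disk only
df.unpersist()
df.persist(StorageLevel.MEMORY_AND_DISK_2)          // 6) replicated on two nodes
df.unpersist()
df.persist(StorageLevel.OFF_HEAP)                   // 7) off-heap storage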
I was trying to find a good answer for why the default persist level for an RDD is MEMORY_ONLY, whereas for a Dataset it is MEMORY_AND_DISK, but I couldn't find one.
Does anyone know why the default persistence levels are different?
Simply because MEMORY_ONLY is rarely useful - it is not that common in practice to have enough memory to store all required data, so you often have to evict some of the blocks or cache the data only partially.
Compared to that, MEMORY_AND_DISK evicts data to disk, so no cached block is lost.
The exact reason behind choosing MEMORY_AND_DISK as the default caching mode is explained in SPARK-3824 (Spark SQL should cache in MEMORY_AND_DISK by default):
Spark SQL currently uses MEMORY_ONLY as the default format. Due to the use of column buffers however, there is a huge cost to having to recompute blocks, much more so than Spark core. Especially since now we are more conservative about caching blocks and sometimes won't cache blocks we think might exceed memory, it seems good to keep persisted blocks on disk by default.
For an RDD the default storage level of the persist API is MEMORY_ONLY, and for a Dataset it is MEMORY_AND_DISK.
Please check the following:
[SPARK-3824][SQL] Sets in-memory table default storage level to MEMORY_AND_DISK
As mentioned by user6910411, "Spark SQL currently uses MEMORY_ONLY as the default format. Due to the use of column buffers however, there is a huge cost to having to recompute blocks, much more so than Spark core." In other words, the Dataset/DataFrame APIs use column buffers to store the column data types and column details of the raw data, so when the cached data does not fit into memory, the remaining partitions are not cached and must be recomputed whenever they are needed. Because of this columnar structure, recomputation is more expensive for a Dataset/DataFrame than for an RDD, which is why the default persist option was changed to MEMORY_AND_DISK: blocks that do not fit into memory spill to disk and are read back from there when needed, rather than being recomputed the next time.
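You can verify the two defaults yourself in a spark-shell; a small sketch (Dataset.storageLevel is available from Spark 2.1 onwards):

import org.apache.spark.storage.StorageLevel

val rdd = spark.sparkContext.parallelize(1 to 100)
rdd.persist()
println(rdd.getStorageLevel == StorageLevel.MEMORY_ONLY)      // true: RDD default

val ds = spark.range(100)
ds.persist()
println(ds.storageLevel == StorageLevel.MEMORY_AND_DISK)      // true: Dataset/DataFrame default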
I'm running Spark 2.0 in stand-alone mode, and I'm the only one submitting jobs in my cluster.
Suppose I have an RDD with 100 partitions and only 10 partitions in total would fit in memory at a time.
Let's also assume that the allotted execution memory is enough and will not interfere with storage memory.
Suppose I iterate over the data in that RDD.
rdd.persist() // MEMORY_ONLY
for (_ <- 0 until 10) {
  rdd.map(...).reduce(...)
}
rdd.unpersist()
For each iteration, will the first 10 partitions that are persisted always be in memory until rdd.unpersist()?
As far as I know, Spark uses an LRU (Least Recently Used) eviction strategy for RDD partitions by default. They are working on adding new strategies:
https://issues.apache.org/jira/browse/SPARK-14289
This strategy removes the element that was least recently used. The last-used timestamp is updated when an element is put into the cache or retrieved from it.
I suppose you will always have 10 partitions in memory, but which ones are stored and which get evicted depends on their use. According to the Apache Spark FAQ:
Likewise, cached datasets that do not fit in memory are either spilled
to disk or recomputed on the fly when needed, as determined by the
RDD's storage level.
Thus, it depends on your configuration whether other partitions are spilled to disk or recomputed on the fly. Recomputation is the default, which is not always the most efficient option. You can set a dataset's storage level to MEMORY_AND_DISK to avoid this.
I think I found the answer, so I'm going to answer my own question.
The eviction policy seems to be in the MemoryStore class. Here's the source code.
It seems that entries are not evicted to make room for entries belonging to the same RDD.
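If you want to see which part of a persisted RDD actually stayed in memory, you can inspect the storage info from the driver. A rough sketch (getRDDStorageInfo is a developer API, and the exact numbers depend on your executor memory):

val rdd = spark.sparkContext.parallelize(1 to 10000000, 100).persist()  // MEMORY_ONLY by default
rdd.count()                                  // materialize the cache

spark.sparkContext.getRDDStorageInfo
  .filter(_.id == rdd.id)
  .foreach { info =>
    println(s"cached ${info.numCachedPartitions}/${info.numPartitions} partitions, " +
      s"${info.memSize} bytes in memory, ${info.diskSize} bytes on disk")
  }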
I'm getting confused about spill to disk versus shuffle write. With the default sort shuffle manager, we use an AppendOnlyMap for aggregating and combining partition records, right? Then when execution memory fills up, we start sorting the map, spill it to disk, and then clean up the map for the next spill (if one occurs). My questions are:
What is the difference between spill to disk and shuffle write? Both basically consist of creating files on the local file system and writing records to them.
Admitting they are different: spill records are sorted because they are passed through the map, whereas shuffle write records are not, because they don't pass through the map.
I have the idea that the total size of the spilled files should be equal to the size of the shuffle write; maybe I'm missing something. Please help me understand that phase.
Thanks.
Giorgio
Spill to disk and shuffle write are two different things.
Spill to disk - data moves from host RAM to host disk - is used when there is not enough RAM on your machine, and part of what is in RAM is placed on disk.
http://spark.apache.org/faq.html
Does my data need to fit in memory to use Spark?
No. Spark's operators spill data to disk if it does not fit in memory,
allowing it to run well on any sized data. Likewise, cached datasets
that do not fit in memory are either spilled to disk or recomputed on
the fly when needed, as determined by the RDD's storage level.
Shuffle write - data moves from executor(s) to other executor(s) - is used when data needs to move between executors (e.g. due to a JOIN, groupBy, etc.).
More details can be found here:
https://0x0fff.com/spark-architecture-shuffle/
http://blog.cloudera.com/blog/2015/05/working-with-apache-spark-or-how-i-learned-to-stop-worrying-and-love-the-shuffle/
An edge case example which might help clear this up:
You have 10 executors
Each executor with 100GB RAM
Data size is 1280MB, and is partitioned into 10 partitions
Each executor holds 128MB of data.
Assuming the data holds a single key, performing groupByKey will bring all the data into one partition. Shuffle size will be 9*128MB = 1152MB (9 executors transfer their data to the last executor), and there won't be any spill to disk, since that executor has 100GB of RAM and only about 1.2GB of data.
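A sketch of that scenario in code (a cluster like the one above is assumed; the key and the record values are purely illustrative, and the effect shows up in the "Shuffle Write" and "Spill (Disk)" columns of the Spark UI):

val data = spark.sparkContext.parallelize(1 to 10000000, 10)   // 10 partitions, roughly one per executor
val oneKey = data.map(x => ("the-only-key", x))                // every record shares a single key
val grouped = oneKey.groupByKey()                              // forces all records into one partition
grouped.count()
// Shuffle write: the 9 executors that don't own the target partition send their
// data over the network. Spill to disk stays at zero as long as the receiving
// executor has enough execution memory to hold the grouped values.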
Regarding AppendOnlyMap :
As written in the AppendOnlyMap code (see above) - this function is
a low level implementation of a simple open hash table optimized for
the append-only use case, where keys are never removed, but the value
for each key may be changed.
The fact that two different modules use the same low-level implementation doesn't mean the two operations are related at a high level.
I am new to Spark. I have read in multiple places that calling cache() on an RDD will cause it to be stored in memory, but so far I haven't found clear guidelines or rules of thumb for determining the maximum amount of data one can cram into memory. What happens if the amount of data I am caching exceeds the available memory? Will it cause my job to fail, or will it still complete with a noticeable impact on cluster performance?
Thanks!
As clearly stated in the official documentation, with MEMORY_ONLY persistence (the equivalent of cache):
If the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed.
Even if the data fits into memory, it can be evicted when new data comes in. In practice caching is more a hint than a contract: you cannot depend on caching taking place, but your job doesn't depend on it succeeding either.
Note:
Please keep in mind that the default StorageLevel for Dataset is MEMORY_AND_DISK, which will:
If the RDD does not fit in memory, store the partitions that don't fit on disk, and read them from there when they're needed.
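If you'd rather have partitions that don't fit spilled to disk instead of recomputed, you can request that level explicitly on an RDD as well; a minimal sketch with a placeholder input path:

import org.apache.spark.storage.StorageLevel

val rdd = spark.sparkContext.textFile("hdfs:///some/large/input")   // placeholder input
rdd.persist(StorageLevel.MEMORY_AND_DISK)    // partitions that don't fit in memory go to local disk
rdd.count()                                  // materialize; evicted blocks move to disk instead of being lost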
See also:
(Why) do we need to call cache or persist on a RDD
Why do I have to explicitly tell Spark what to cache?