We are currently facing wide partitions for certain customers, with several GB of data in those partitions. We have tried remodelling the data, but the partitions always end up skewed. We also tried a bucketing logic to cap the partition size, but either a huge number of partitions is generated for low-volume users, or wide partitions are still generated for the heaviest consumers.
A large number of partitions leads to heap memory bloat, while wide partitions lead to slower reads.
How can I solve a situation like this?
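For illustration only, here is a rough Scala sketch of the kind of bucketing logic described above; the table layout, the customerId/day/eventId names, and the bucket count of 16 are all assumptions, not the poster's actual model:

```scala
// Hypothetical sketch: spread one heavy customer's rows across several
// Cassandra partitions by adding a bucket component to the partition key,
// e.g. PRIMARY KEY ((customer_id, day, bucket), event_ts).
object BucketKey {
  // NumBuckets is an assumption; tune it to the expected per-day volume.
  val NumBuckets = 16

  def bucketFor(eventId: String): Int =
    ((eventId.hashCode % NumBuckets) + NumBuckets) % NumBuckets

  def partitionKey(customerId: String, day: String, eventId: String): (String, String, Int) =
    (customerId, day, bucketFor(eventId))
}

// Writers compute the bucket per row; readers fan out over all buckets for a
// (customer, day) pair and merge the results client-side.
val key = BucketKey.partitionKey("cust-42", "2020-01-15", "evt-9f3a")
```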
Related
How many partitions will there be if I have 100 GB of data in Spark?
If someone says 20, will that be enough for the memory?
The number of partitions depends on the data source in use. For file-based data sources stored on HDFS, there will be as many partitions as there are blocks. Other data sources can determine this differently, based on their own internal logic.
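As a quick way to see this in practice (a minimal sketch; the SparkSession setup and the HDFS path are placeholders), you can read a file-based source and ask the resulting DataFrame how many partitions it received:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("partition-count").getOrCreate()

// For HDFS-backed files the partition count roughly follows the number of
// blocks (and is further shaped by spark.sql.files.maxPartitionBytes).
val df = spark.read.parquet("hdfs:///data/events")   // placeholder path
println(s"partitions = ${df.rdd.getNumPartitions}")
```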
I have read that too many small partitions hurt performance because of overhead, e.g. sending a very large number of tasks to executors.
What are the downside of using maximally large partitions, e.g. why do I see recommendations in the 100s of MB range?
I can see a few potential issues:
If you lose a partition, there's a large amount of work to recompute. With many smaller partitions you may lose partitions more often, but you will have less variance in your runtime.
If one of your few tasks on large partitions takes longer to compute than the others, it leaves the other cores idle, whereas with smaller partitions the work can be distributed more evenly across the cluster.
Do these issues make sense, and are there others? Thanks!
These two potential issues are correct.
For better cluster usage, partitions should be defined large enough to cover an HDFS block (128/256 MB in general) but should not exceed it, so that work is distributed well and the job scales horizontally for performance (maximizing CPU usage).
As for the first point, you cannot assume that the variance in runtime will be lower just because you have a larger number of smaller partitions. Say one of the nodes crashes: this results in recomputation of the RDD partitions it held, but now you have one node fewer to process the data, so your runtime will increase irrespective of the number of partitions.
"If one of your few tasks on large partitions takes longer to compute than the others": this happens if you have skewed data, and increasing the number of partitions can help, but simply increasing the number of partitions isn't always sufficient.
The max partition size should not be greater than 128 MB, which is the default block size in HDFS. But you should not have very small partitions either, as they add the overhead of scheduling many tasks and maintaining a lot of metadata. As with any multithreaded application, increasing the parallelism doesn't always increase performance; in the end it comes down to finding the optimal value at which you get maximum performance.
With large partitions you will have:
Less concurrency,
Increased memory pressure for transformations that involve a shuffle,
More susceptibility to data skew.
Please refer here to find the optimal number of partitions.
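As a rough sketch of that tuning step (the ~128 MB target, the 50 GB input size, and the read path are assumptions, not a universal rule), you can derive a partition count from the input size and repartition accordingly:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("repartition-sketch").getOrCreate()
val df = spark.read.parquet("hdfs:///data/events")     // placeholder input

// Aim for ~128 MB per partition (assumption; match it to your HDFS block size).
val targetPartitionBytes = 128L * 1024 * 1024
val inputBytes = 50L * 1024 * 1024 * 1024               // assumed ~50 GB of input
val numPartitions = math.max(1, (inputBytes / targetPartitionBytes).toInt)

// repartition() shuffles to the requested count; coalesce() merges partitions
// without a shuffle and is cheaper when only reducing the count.
val balanced = df.repartition(numPartitions)
```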
Although Cassandra's token range runs from -2^63 to +2^63-1, which allows an enormous number of partitions, is there a recommended maximum number of partitions beyond which performance might suffer?
After about 1 billion partitions per node, full (non-incremental) repairs begin to have pretty serious issues with overstreaming, particularly with smaller partitions, as the validation compactions run slower.
Ideally I would judge it by partition size, not count. At somewhere around 100 MB per partition you will get more efficient compactions without too much of the expensive overhead of the partition index on reads. I wouldn't be too strict about it though, as it is very hand-wavy and depends on a lot of factors. Try to focus on modelling for your queries first, then fine-tune if that model ends up with partitions that are too large or too many that are too small (hundreds of millions or more sub-1 KB partitions, or any multi-GB partitions, roughly; per node, not total).
I have a Spark application that keeps running out of memory. The cluster has two nodes with around 30 GB of RAM, and the input data size is a few hundred GB.
The application is a Spark SQL job: it reads data from HDFS, creates a table and caches it, then runs some Spark SQL queries and writes the result back to HDFS.
Initially I split the data into 64 partitions and got OOM errors; then I was able to fix the memory issue by using 1024 partitions. But why did using more partitions help solve the OOM issue?
The solution to big data is partitioning (divide and conquer), since not all of the data can fit into memory, nor can it all be processed on a single machine.
Each partition can fit into memory and be processed (map) in a relatively short time. After each partition is processed, the results need to be merged (reduce). This is traditional MapReduce.
Splitting the data into more partitions means that each partition gets smaller.
[Edit]
Spark uses a concept called the Resilient Distributed Dataset (RDD).
There are two types of operations: transformations and actions.
Transformations map one RDD to another and are lazily evaluated. Those RDDs can be treated as intermediate results we don't materialize.
Actions are used when you actually want the data. The resulting data is what we want, for example from take or top.
Spark analyses all the operations and creates a DAG (Directed Acyclic Graph) before execution.
Spark starts computing from the source RDD only when an action is fired, and then forgets the intermediate results (unless they are cached).
(DAG illustration source: cloudera.com)
I made a small screencast for a presentation on YouTube: Spark Makes Big Data Sparking.
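A minimal sketch of that laziness (the dataset and numbers are arbitrary): the map and filter below only extend the DAG, and nothing is computed until the action at the end fires:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("lazy-dag").getOrCreate()
val sc = spark.sparkContext

val source = sc.parallelize(1 to 1000000)        // source RDD

// Transformations: lazily recorded in the DAG, nothing runs yet.
val doubled  = source.map(_ * 2)
val filtered = doubled.filter(_ % 3 == 0)

// Action: triggers execution of the whole lineage from the source RDD.
val top5 = filtered.top(5)
println(top5.mkString(", "))
```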
"Spark's operators spill data to disk if it does not fit in memory, allowing it to run well on any sized data." The issue is that large partitions can still generate OOM.
Partitions determine the degree of parallelism. The Apache Spark docs say that the number of partitions should be at least equal to the number of cores in the cluster.
Too few partitions result in:
Less concurrency,
Increased memory pressure for transformations that involve a shuffle,
More susceptibility to data skew.
Too many partitions can also have a negative impact:
Too much time spent scheduling a large number of tasks.
When storing your data on HDFS, it is already partitioned into 64 MB or 128 MB blocks, as per your HDFS configuration. When reading HDFS files with Spark, the number of DataFrame partitions (df.rdd.getNumPartitions) depends on the following properties (a short configuration sketch follows the list):
spark.default.parallelism (Cores available for the application)
spark.sql.files.maxPartitionBytes (default 128MB)
spark.sql.files.openCostInBytes (default 4MB)
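A hedged sketch of how those properties come into play (the 64 MB value and the path are examples, not recommendations): lowering spark.sql.files.maxPartitionBytes splits the same input into more, smaller read partitions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("read-partitioning")
  // Example values only (assumptions); the defaults are 128 MB and 4 MB.
  .config("spark.sql.files.maxPartitionBytes", 64L * 1024 * 1024)  // 64 MB
  .config("spark.sql.files.openCostInBytes", 4L * 1024 * 1024)     // 4 MB
  .getOrCreate()

val df = spark.read.parquet("hdfs:///data/events")   // placeholder path
println(s"read partitions = ${df.rdd.getNumPartitions}")
```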
Links :
https://spark.apache.org/docs/latest/tuning.html
https://databricks.com/session/a-deeper-understanding-of-spark-internals
https://spark.apache.org/faq.html
During Spark Summit, Aaron Davidson gave some tips about partition tuning. He also defined a reasonable number of partitions, summarized in the 3 points below:
Commonly between 100 and 10,000 partitions (note: the two points below are more reliable, because "commonly" depends on the sizes of the dataset and the cluster)
lower bound = at least 2 * the number of cores in the cluster (see the sketch after this list)
upper bound = tasks should finish within 100 ms
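A minimal sketch of the lower-bound rule (the input path is a placeholder, and the upper bound still has to be verified empirically in the Spark UI):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("partition-bounds").getOrCreate()
val sc = spark.sparkContext

// Lower bound from the talk: at least 2x the cores available to the application.
val lowerBound = 2 * sc.defaultParallelism

// The upper bound is behavioural (tasks should finish within ~100 ms), so it
// has to be checked in the Spark UI rather than computed up front.
val rdd = sc.textFile("hdfs:///data/events.txt")      // placeholder input
val repartitioned = rdd.repartition(math.max(lowerBound, rdd.getNumPartitions))
```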
Rockie's answer is right, but it misses the point of your question.
When you cache an RDD, all of its partitions are persisted (according to the storage level), respecting the spark.memory.fraction and spark.memory.storageFraction properties.
Besides that, at a certain moment Spark can automatically drop some partitions out of memory (or you can do this manually for the entire RDD with RDD.unpersist()), according to the documentation.
Thus, when you have more (smaller) partitions, Spark keeps only as many of them in the LRU cache as fit, so they do not cause OOM (this may have a negative impact too, such as the need to re-cache partitions).
Another important point is that when you write the result back to HDFS using X partitions, you have X tasks for all your data: take the total data size and divide it by X, and that is roughly the memory for each task, each of which is executed on a (virtual) core. So it is not difficult to see why X = 64 led to OOM but X = 1024 did not.
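A hedged sketch of the caching behaviour described above (the paths and the MEMORY_AND_DISK storage level are illustrative choices, not necessarily what the original job used):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder().appName("cache-sketch").getOrCreate()

val table = spark.read.parquet("hdfs:///data/input")    // placeholder input

// Partitions are cached individually within the storage pool governed by
// spark.memory.fraction / spark.memory.storageFraction; with this storage
// level, partitions that do not fit are spilled to disk instead of failing.
table.persist(StorageLevel.MEMORY_AND_DISK)
table.createOrReplaceTempView("events")

val result = spark.sql("SELECT count(*) AS cnt FROM events")
result.write.parquet("hdfs:///data/output")             // placeholder output

// Release the cached partitions once they are no longer needed.
table.unpersist()
```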
I am planning to use Spark to process data where each individual element/row in the RDD or DataFrame may occasionally be large (up to several GB).
The data will probably be stored in Avro files in HDFS.
Obviously, each executor must have enough RAM to hold one of these "fat rows" in memory, and some to spare.
But are there other limitations on row size for Spark/HDFS or for the common serialisation formats (Avro, Parquet, Sequence File...)? For example, can individual entries/rows in these formats be much larger than the HDFS block size?
I am aware of published limitations for HBase and Cassandra, but not Spark...
There are currently some fundamental limitations related to block size, both for partitions in use and for shuffle blocks - both are limited to 2GB, which is the maximum size of a ByteBuffer (because it takes an int index, so is limited to Integer.MAX_VALUE bytes).
The maximum size of an individual row will normally need to be much smaller than the maximum block size, because each partition will normally contain many rows, and the largest rows might not be evenly distributed among partitions - if by chance a partition contains an unusually large number of big rows, this may push it over the 2GB limit, crashing the job.
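As a back-of-the-envelope sketch (the 500 GB input and 4096 partitions are assumed numbers, not from the question), you can check how close an average partition would get to that Integer.MAX_VALUE ceiling, keeping in mind that skew and single huge rows are what actually break it:

```scala
// The ceiling: a ByteBuffer is indexed by an Int, so a block cannot exceed
// Integer.MAX_VALUE bytes (~2 GB).
val maxBlockBytes = Int.MaxValue.toLong                 // 2147483647 bytes

// Assumed job shape (not from the question): ~500 GB total input.
val totalBytes    = 500L * 1024 * 1024 * 1024
val numPartitions = 4096
val avgPartitionBytes = totalBytes / numPartitions      // 125 MB on average

// Skew matters more than the average: a single multi-GB row already blows
// past the ceiling on its own, regardless of how many partitions there are.
require(avgPartitionBytes < maxBlockBytes / 4, "leave headroom for skew")
```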
See:
Why does Spark RDD partition has 2GB limit for HDFS?
Related Jira tickets for these Spark issues:
https://issues.apache.org/jira/browse/SPARK-1476
https://issues.apache.org/jira/browse/SPARK-5928
https://issues.apache.org/jira/browse/SPARK-6235