Suppose we have a PySpark dataframe with data spread evenly across 2048 partitions, and we want to coalesce to 32 partitions to write the data back to HDFS. Using coalesce is nice for this because it does not require an expensive shuffle.
But one of the downsides of coalesce is that it typically results in an uneven distribution of data across the new partitions. I assume that this is because the original partition IDs are hashed to the new partition ID space, and the number of collisions is random.
However, in principle it should be possible to coalesce evenly, so that the first 64 partitions from the original dataframe are sent to the first partition of the new dataframe, the next 64 are sent to the second partition, and so on, resulting in an even distribution of partitions. The resulting dataframe would often be more suitable for further computations.
Is this possible, while preventing a shuffle?
I can force the relationship I would like between initial and final partitions using a trick like in this question, but Spark doesn't know that everything from each original partition is going to a particular new partition. Thus it can't optimize away the shuffle, and it runs much slower than coalesce.
In your case you can safely coalesce the 2048 partitions into 32 and assume that Spark is going to evenly assign the upstream partitions to the coalesced ones (64 for each in your case).
Here is an extract from the Scaladoc of RDD#coalesce:
This results in a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not be a shuffle, instead each of the 100 new partitions will claim 10 of the current partitions.
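As a quick sanity check, here is a minimal PySpark sketch (entirely synthetic data) that coalesces 2048 evenly filled partitions into 32 and counts the rows landing in each output partition; with no locality information, each output partition should claim 64 parents and therefore hold the same number of rows:

from pyspark.sql import SparkSession
from pyspark.sql.functions import spark_partition_id

spark = SparkSession.builder.getOrCreate()

# Synthetic dataframe: 2048 partitions with 1000 rows each.
df = spark.range(0, 2048 * 1000, numPartitions=2048)

coalesced = df.coalesce(32)  # narrow dependency, no shuffle

# Count how many rows end up in each of the 32 output partitions.
(coalesced
    .groupBy(spark_partition_id().alias("partition_id"))
    .count()
    .orderBy("partition_id")
    .show(32))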
Consider also that how your partitions are physically spread across the cluster influences the way in which coalescing happens. The following is an extract from CoalescedRDD's ScalaDoc:
If there is no locality information (no preferredLocations) in the parent, then the coalescing is very simple: chunk parents that are close in the Array in chunks.
If there is locality information, it proceeds to pack them with the following four goals:
(1) Balance the groups so they roughly have the same number of parent partitions
(2) Achieve locality per partition, i.e. find one machine which most parent partitions prefer
(3) Be efficient, i.e. O(n) algorithm for n parent partitions (problem is likely NP-hard)
(4) Balance preferred machines, i.e. avoid as much as possible picking the same preferred machine
What I understand is:
When we repartition any dataframe with value n, the data will remain in those n partitions until we hit a shuffle stage or another repartition or coalesce.
Shuffling only comes into play when we hit a shuffle stage, and the data then remains in those partitions until we hit a coalesce or repartition.
Am I right?
If yes, can anyone point out a striking difference?
TLDR - Repartition is invoked as per the developer's need, but a shuffle is done when the operation logically demands it.
I assume you're talking about config property spark.sql.shuffle.partitions and method .repartition.
Data distribution is an important aspect in any distributed environment: it not only governs parallelism but can also create adverse impacts if the distribution is uneven. However, repartitioning itself is a costly operation because it involves heavy movement of data (i.e. shuffling). The .repartition method is used to explicitly repartition the data into new partitions, meaning to increase or decrease the number of partitions in the program based on your need. You can invoke it whenever you want.
As opposed to this, spark.sql.shuffle.partitions is a configuration property that governs the number of partitions created when a data movement happens as a result of operations like aggregations and joins.
Configures the number of partitions to use when shuffling data for joins or aggregations.
When you're performing transformations other than join or aggregation, the above configuration won't have any impact on the number of partitions the new Dataframe will have.
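To make the distinction concrete, here is a rough PySpark sketch (arbitrary numbers, synthetic dataframe): the configuration is only consulted when a shuffle actually happens, while .repartition requests one explicitly.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "64")  # consulted only when a shuffle happens

df = spark.range(0, 100000, numPartitions=8)

# Narrow transformation: no shuffle, so the config has no effect on partitioning.
print(df.select(col("id") * 2).rdd.getNumPartitions())  # still 8

# Explicit repartition: the developer asks for 16 partitions, which causes a shuffle.
print(df.repartition(16).rdd.getNumPartitions())  # 16, not 64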
Your confusion between the two is because both operations involve shuffling. While that is true, the former (i.e. repartition) is an explicit operation where the user instructs the framework to increase or decrease the number of partitions, which in turn causes shuffling, while in the case of joins/aggregations the shuffling is caused by the operation itself.
Basically -
Joins/aggregations cause shuffling, which causes repartitioning
repartition is asked for, thus shuffling has to be done
Another method, coalesce, makes the difference clearer.
For reference, coalesce is a variant of repartition which can only lower the number of partitions, and the resulting partitions are not necessarily equal in size. Since it already knows the number of partitions is only to be decreased, it can do so with minimal shuffling (it simply merges adjacent partitions until the target number is met).
Consider that your dataframe has 4 partitions but data in only 2 of them, so you decide to reduce the number of partitions to 2. When using coalesce, Spark tries to achieve this without shuffling, or with minimal shuffling.
df.rdd().getNumPartitions(); // Returns 4 with size 0, 0, 2, 4
df=df.coalesce(2); // Decrease partitions to 2
df.rdd().getNumPartitions(); // Returns 2 now with size 2, 4
So there was no shuffling involved. Whereas in the following:
df1.rdd().getNumPartitions() // Returns 4
df2.rdd().getNumPartitions() // Returns 8
df1.join(df2).rdd().getNumPartitions() // Returns 200
As you've performed a join, it will always return the number of partitions based on spark.sql.shuffle.partitions.
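The same check in PySpark, as a rough sketch with synthetic dataframes; note that on recent Spark versions adaptive query execution may merge those shuffle partitions and a small table may be broadcast instead of shuffled, so both features are disabled here to reproduce the classic 200:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.adaptive.enabled", "false")         # AQE can merge shuffle partitions
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")  # force a shuffle join, not a broadcast

df1 = spark.range(0, 1000, numPartitions=4).withColumnRenamed("id", "k")
df2 = spark.range(0, 1000, numPartitions=8).withColumnRenamed("id", "k")

print(df1.join(df2, "k").rdd.getNumPartitions())  # 200, the default spark.sql.shuffle.partitions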
What is spark.sql.shuffle.partitions in a more technical sense? I have seen answers like here, which say: "configures the number of partitions that are used when shuffling data for joins or aggregations."
What does that actually mean? How does shuffling work from node to node differently when this number is higher or lower?
Thanks!
Partitions define where data resides in your cluster. A single partition can contain many rows, but all of them will be processed together in a single task on one node.
Looking at edge cases, if we re-partition our data into a single partition, even if you have 100 executors, it will be only processed by one.
On the other hand, if you have a single executor, but multiple partitions, they will be all (obviously) processed on the same machine.
Shuffles happen when one executor needs data from another - a basic example is the groupBy aggregation operation, as we need all related rows to calculate the result. Irrespective of how many partitions we had before the groupBy, after it Spark will split the results into spark.sql.shuffle.partitions partitions.
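For example, a small PySpark sketch with a toy dataframe (adaptive query execution is disabled so the raw shuffle partitioning stays visible):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
spark.conf.set("spark.sql.adaptive.enabled", "false")
spark.conf.set("spark.sql.shuffle.partitions", "8")

df = spark.range(0, 10000, numPartitions=100)  # 100 input partitions
grouped = df.groupBy((col("id") % 10).alias("bucket")).count()

print(df.rdd.getNumPartitions())       # 100 before the aggregation
print(grouped.rdd.getNumPartitions())  # 8, i.e. spark.sql.shuffle.partitions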
Quoting after "Spark - the definitive guide" by Bill Chambers and Matei Zaharia:
A good rule of thumb is that the number of partitions should be larger than the number of executors on your cluster, potentially by multiple factors depending on the workload. If you are running code on your local machine, it would behoove you to set this value lower because your local machine is unlikely to be able to execute that number of tasks in parallel.
So, to sum up, if you set this number lower than your cluster's capacity to run tasks, you won't be able to use all of its resources. On the other hand, since each task runs on a single partition, having thousands of small partitions would (I expect) add some overhead.
spark.sql.shuffle.partitions is the parameter which determines how many blocks your shuffle will be performed in.
Say you had 40 GB of data and spark.sql.shuffle.partitions set to 400; then your data will be shuffled in blocks of roughly 40 GB / 400 = 100 MB each (assuming your data is evenly distributed).
By changing the spark.sql.shuffle.partitions you change the size of blocks being shuffled and the number of blocks for each shuffle stage.
As Daniel says, a rule of thumb is to never have spark.sql.shuffle.partitions set lower than the number of cores for a job.
I have an rdd which I re-partition by one field
rdd = rdd.repartition(new Column("block_id"));
and save it to hdfs.
I would expect that if there are 20 different block_id's, the repartitioning would produce 20 new partitions each holding a different block_id.
But in fact after repartitioning there are 19 partitions, each holding exactly one block_id and one partition holding two block_id's.
This means that the core writing the partition with the two block_id's to disk takes twice the time compared to the other cores and therefore doubling the overall time.
Spark Dataset uses hash partitioning. There is no guarantee that there will be no hash collisions, so you cannot expect:
that if there are 20 different block_id's, the repartitioning would produce 20 new partitions each holding a different block_id
You can try to increase the number of partitions, but using a number which offers good guarantees is rather impractical.
With RDDs you can define your own partitioner; see How to Define Custom partitioner for Spark RDDs of equally sized partition where each partition has equal number of elements?
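As a rough PySpark illustration of that approach (the block_ids and data here are invented), a partitioner that maps each distinct block_id to its own partition index avoids hash collisions entirely:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Invented data: 20 distinct block_ids, 100 rows each, as (block_id, payload) pairs.
block_ids = ["block_%d" % i for i in range(20)]
data = [(b, x) for b in block_ids for x in range(100)]
rdd = sc.parallelize(data)

# Give each block_id its own partition index so no two ids can collide.
index = {b: i for i, b in enumerate(block_ids)}
partitioned = rdd.partitionBy(len(block_ids), lambda key: index[key])

print(partitioned.glom().map(len).collect())  # 20 partitions of 100 rows each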
When does an RDD get its preferred location? How is the preferred location determined?
I've seen some weird behaviors in repartition and coalesce I could not quite make sense of:
1. When coalescing from n to n-1 partitions, I see Spark just coalesce one partition into another single partition. (I think the ideal behavior would be to distribute evenly across all n-1 nodes.)
2. When running repartition, I see Spark repartition such that one node has multiple partitions of the RDD.
Does the above behavior have something to do with preferredLocations?
Note that rdd.repartition(n) just calls rdd.coalesce(n, shuffle = true), so we're just comparing shuffle true vs false.
shuffle = false
In this mode, Spark constructs a new RDD whose partitions contain one or more partitions of the parent RDD -- if you coalesce from n partitions to n/2 partitions, then each partition consists of the elements of two semi-random partitions in the parent. This mode is appropriate when you want to reduce partitioning and the partitions are already balanced, for example after a filter that affects elements in each partition roughly equally. The overhead is very low. Also, note that it's impossible to increase the number of partitions with this mode.
shuffle = true
For some background, I recommend this blog post for learning a bit more about how and why we shuffle. The fundamental differences in this execution mode are:
higher overhead (all data is transmitted over network)
good for rebalancing partitions (if you perform a filter that drops out either all elements in a partition or none, then shuffle=false will produce imbalanced partitions, but shuffle=true will resolve the issue)
can increase the number of partitions
Preferred locations don't have much to do with it -- you're seeing preferred locations only in the shuffle = false mode because the locality is preserved without shuffles, but after a shuffle the original preferredLocations are irrelevant (replaced with new preferred locations about shuffle destinations).
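A quick PySpark illustration of the rebalancing point in the list above (a toy RDD; the exact counts after the shuffle will vary slightly): a skewed filter leaves shuffle=false imbalanced, while shuffle=true spreads the surviving rows back out.

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# 8 partitions of 1000 elements each; the filter empties 6 of them entirely.
rdd = sc.parallelize(range(8000), 8)
filtered = rdd.filter(lambda x: x < 2000)

narrow = filtered.coalesce(4)                    # shuffle=False: just glues adjacent parents together
rebalanced = filtered.coalesce(4, shuffle=True)  # equivalent to filtered.repartition(4)

print(narrow.glom().map(len).collect())      # imbalanced, e.g. [2000, 0, 0, 0]
print(rebalanced.glom().map(len).collect())  # roughly 500 elements in each partition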
Say I have lots of data in a couple of S3 files, about 5 GB each, which I read in using sc.textFile.
I need to join the data from the two files, so I opt for the HashPartitioner technique and set a partition count of 20. The job, submitted to 8 worker nodes, fails without any meaningful messages. Now I am thinking maybe I need to pick a proper number of partitions.
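In PySpark terms the setup looks roughly like this (the paths and the join key extraction are placeholders, not my actual code):

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Placeholder paths; the join key is assumed to be the first comma-separated field.
left = sc.textFile("s3://bucket/file_a").map(lambda line: (line.split(",")[0], line))
right = sc.textFile("s3://bucket/file_b").map(lambda line: (line.split(",")[0], line))

# join() hash-partitions both sides into the requested 20 partitions.
joined = left.join(right, numPartitions=20)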
Obviously, the idea is for Spark to partition all the data based on a chosen key. In order to load it into 20 partitions, I imagine Spark will have to read through every line of data, compute the hash of its key, and load it into the memory of the matching partition, which resides on one of the 8 worker nodes. If there is enough collective memory in the worker nodes, I assume this goes smoothly. At the end of the read, all the data is in the proper partition, in the right node's memory. Am I right so far?
However, if the total memory cannot fit all the data, I imagine Spark will work on certain partitions first. After processing these first partitions, it flushes them and reads from the source files again, loading the remaining data into new partitions. This would mean reading the same file as many times as necessary to process all partitions using the available memory. Is this also correct?
Should I calculate the number of partitions so that at least one full partition fits into a single node's memory? Are there other guidelines to follow?