In many posts, in answer to some question on shuffling, partitioning, JOINs, aggregations, etc., there is a statement in some form like the one below:
... In general whenever you do a spark sql aggregation or join which shuffles data this is the number of resulting partitions = 200.
This is set by spark.sql.shuffle.partitions. ...
So, my question is:
Do we mean that if we have set partitioning at 765 for a DF, for example,
the processing occurs against 765 partitions, but the output is then coalesced / re-partitioned to the standard 200 (which is what the word resulting would refer to)?
Or does it coalesce / re-partition to 200 partitions first, and then do the JOIN or aggregation processing against those 200 partitions?
I ask as I never see a clear viewpoint.
I did the following test:
// genned a DS of some 20M short rows
df0.count
val ds1 = df0.repartition(765)
ds1.count
val ds2 = df0.repartition(765)
ds2.count
sqlContext.setConf("spark.sql.shuffle.partitions", "765")
// The line above was left out on the 1st run and included on the 2nd run.
ds1.rdd.partitions.size
ds2.rdd.partitions.size
val joined = ds1.join(ds2, ds1("time_asc") === ds2("time_asc"), "outer")
joined.rdd.partitions.size
joined.count
joined.rdd.partitions.size
On the 1st test - not setting sqlContext.setConf("spark.sql.shuffle.partitions", "765") - the processing ran with, and resulted in, 200 partitions, even though SO post 45704156 states the setting may not apply to DFs (and this is a DS).
On the 2nd test - setting sqlContext.setConf("spark.sql.shuffle.partitions", "765") - the processing ran with, and resulted in, 765 partitions; again, SO post 45704156 states it may not apply to DFs, but this is a DS.
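(For reference, on Spark 2.x+ the same setting can also be made through the SparkSession; something like:)
spark.conf.set("spark.sql.shuffle.partitions", "765")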
It is a combination of both your guesses.
Assume you have a set of input data with M partitions and you set shuffle partitions to N.
When executing a join, Spark reads your input data in all M partitions and re-shuffles the data, based on the key, into N partitions. Imagine a trivial hash partitioner: the hash function applied to the key looks pretty much like A = hashcode(key) % N, and the data is then re-allocated to the node in charge of handling the Ath partition. Each node can be in charge of handling multiple partitions.
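A minimal Scala sketch of that hash arithmetic (illustrative only, not Spark's internal partitioner code):
def targetPartition(key: Any, numShufflePartitions: Int): Int = {
  // hashcode(key) % N, kept non-negative so it is a valid partition index
  val a = key.hashCode % numShufflePartitions
  if (a < 0) a + numShufflePartitions else a
}
targetPartition("some_join_key", 200) // index of the shuffle partition records with this key are sent to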
After shuffling, the nodes will work to aggregate the data in partitions they are in charge of. As no additional shuffling needs to be done here, the nodes can produce the output directly.
So in summary, your output will be coalesced to N partitions; however, it ends up in N partitions because it is processed in N partitions, not because Spark applies an additional shuffle stage specifically to repartition your output data to N.
spark.sql.shuffle.partitions is the parameter that decides the number of partitions when doing shuffles such as joins or aggregations, i.e. where data moves across the nodes. The other setting, spark.default.parallelism, is calculated on the basis of your data size and the max block size (in HDFS it's 128 MB). So if your job does not do any shuffle, it will use the default parallelism value, or, if you are using an RDD, you can set it yourself. When a shuffle happens, it will use 200 (the default).
val df = sc.parallelize(List(1,2,3,4,5), 4).toDF()
df.count() // this will use 4 partitions
val df1 = df
df1.except(df).count // except needs a shuffle, so this will generate 200 partitions, having 2 stages
What I understand is:
When we repartition any dataframe with value n, data will continue to remain in those n partitions until you hit any shuffle stage or another repartition or coalesce.
As for shuffle partitions, they only come into play when you hit a shuffle stage, and data will continue to remain in those partitions until you hit a coalesce or repartition.
Am I right?
If yes, can anyone point out a striking difference?
TLDR - Repartition is invoked as per the developer's need, but a shuffle is done when there is a logical demand for it.
I assume you're talking about config property spark.sql.shuffle.partitions and method .repartition.
Data distribution is an important aspect in any distributed environment: it not only governs parallelism but can also create adverse impacts if the distribution is uneven. However, repartitioning itself is a costly operation, as it involves heavy movement of data (i.e. shuffling). The .repartition method is used to explicitly repartition the data into new partitions, meaning to increase or decrease the number of partitions in the program based on your need. You can invoke this whenever you want.
As opposed to this, spark.sql.shuffle.partitions is a configuration property that governs the number of partitions created when a data movement happens as a result of operations like aggregations and joins.
Configures the number of partitions to use when shuffling data for
joins or aggregations.
When you're performing transformations other than join or aggregation, the above configuration won't have any impact on the number of partitions the new Dataframe will have.
Your confusion between the two likely comes from both operations involving shuffling. While that is true, the former (i.e. repartition) is an explicit operation where the user dictates that the framework increase or decrease the number of partitions, which in turn causes shuffling, while in the case of joins/aggregations the shuffling is caused by the operation itself.
Basically -
Joins/aggregations cause shuffling, which causes repartitioning
A repartition is asked for explicitly, thus a shuffle has to be done
Both cases are sketched in the code just below.
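A hedged Scala sketch of both cases, assuming a spark-shell session with the default spark.sql.shuffle.partitions of 200:
import spark.implicits._
val df = spark.range(0, 1000).withColumn("key", $"id" % 10)
// Explicit: the developer asks for 50 partitions, so Spark shuffles to exactly 50.
df.repartition(50).rdd.getNumPartitions          // 50
// Implicit: the aggregation itself needs a shuffle, so the result lands in
// spark.sql.shuffle.partitions partitions.
df.groupBy("key").count().rdd.getNumPartitions   // 200 by default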
Another method, coalesce, makes the difference clearer.
For reference, coalesce is a variant of repartition which can only lower the number of partitions, and the resulting partitions are not necessarily equal in size. As it already knows the number of partitions is only to be decreased, it can perform this with minimal data movement (it simply merges existing partitions until the target number is met).
Consider that your dataframe has 4 partitions but has data in only 2 of them, so you decide to reduce the number of partitions to 2. When using coalesce, Spark tries to achieve this without shuffling, or with minimal shuffling.
df.rdd().getNumPartitions(); // Returns 4 with size 0, 0, 2, 4
df=df.coalesce(2); // Decrease partitions to 2
df.rdd().getNumPartitions(); // Returns 2 now with size 2, 4
So there was no shuffling involved. Compare that with the following:
df1.rdd().getNumPartitions() // Returns 4
df2.rdd().getNumPartitions() // Returns 8
df1.join(df2).rdd().getNumPartitions() // Returns 200
As you've performed a join, it will always return the number of partitions based on spark.sql.shuffle.partitions.
Of course I know the spark.sql.shuffle.partitions config,
but, for example, when I set this config to 300 on a small dataset which has just 200 rows, the config does not take effect; the actual partition number is just 2.
Another example: I set this config to 3000 on a dataset which has 30 billion rows, and the config does not take effect either; the actual partition number is just 600.
We see that when we set a big partitions value on a small dataset, the config does not take effect.
So I just want to know: how does Spark decide the number of partitions of the next stage when shuffling in Spark SQL? And how can I force this config to take effect?
My Spark SQL is just like below:
set spark.sql.shuffle.partitions=3000;
with base_data as (
select
device_id
from
table_name
where
dt = '20210621'
distribute by
rand()
)
select count(1) from base_data
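For reference, a hedged DataFrame equivalent of the SQL above (the table and column names are taken from the query), which lets you check the actual post-shuffle partition count directly:
import org.apache.spark.sql.functions.rand
spark.conf.set("spark.sql.shuffle.partitions", "3000")
val baseData = spark.table("table_name")
  .where("dt = '20210621'")
  .select("device_id")
  .repartition(rand())          // DataFrame counterpart of DISTRIBUTE BY rand()
baseData.rdd.getNumPartitions   // the number of partitions the shuffle actually produced
baseData.count()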
In general, a narrow transformation does not change the number of partitions.
A wide transformation, on the other hand, does change the number of partitions.
Narrow transformation - In a narrow transformation, all the elements that are required to compute the records in a single partition live in a single partition of the parent RDD. A limited subset of partitions is used to calculate the result. Narrow transformations are the result of map() and filter().
Wide transformation - In a wide transformation, all the elements that are required to compute the records in a single partition may live in many partitions of the parent RDD. Wide transformations are the result of groupByKey() and reduceByKey().
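A short RDD sketch of the difference, assuming a spark-shell session with SparkContext sc:
val rdd = sc.parallelize(1 to 100, 8)
// Narrow: map and filter keep the 8 input partitions, no shuffle.
rdd.map(_ * 2).filter(_ % 3 == 0).getNumPartitions              // 8
// Wide: reduceByKey shuffles; here 4 partitions are requested explicitly.
rdd.map(x => (x % 5, x)).reduceByKey(_ + _, 4).getNumPartitions // 4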
Update after question change:
You can think of "spark.sql.shuffle.partitions" as a query hint where we force the executors to create that number of partitions for joins or aggregations. In my view we should not play with this value unless we are very sure of what the number of grouping keys will be.
Otherwise it will cause unnecessary shuffling of data over the network.
I'm trying to do the following crossJoin on two dataframes with 5 rows each, but Spark spawns 40000 tasks on my machine and it took 30 seconds to complete. Any idea why that is happening?
df = spark.createDataFrame([['1','1'],['2','2'],['3','3'],['4','4'],['5','5']]).toDF('a','b')
df = df.repartition(1)
df.select('a').distinct().crossJoin(df.select('b').distinct()).count()
You call .distinct before the join; it requires a shuffle, so it repartitions the data based on the spark.sql.shuffle.partitions property value. Thus, df.select('a').distinct() and df.select('b').distinct() result in new DataFrames with 200 partitions each, and 200 x 200 = 40000.
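A hedged Scala version of the same experiment, showing where the 40000 comes from:
import spark.implicits._
val df = Seq(("1","1"),("2","2"),("3","3"),("4","4"),("5","5")).toDF("a","b").repartition(1)
val left = df.select("a").distinct()
val right = df.select("b").distinct()
left.rdd.getNumPartitions     // 200 (the spark.sql.shuffle.partitions default)
right.rdd.getNumPartitions    // 200
left.crossJoin(right).count() // as observed in the question: 200 x 200 = 40000 tasks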
Two things - it looks like you cannot directly control the number of partitions a DF is created with, so we can first create an RDD instead (where you can specify the number of partitions) and convert it to a DF. Also, you can set the shuffle partitions to '1' as well. Both of these ensure you will have just 1 partition during the whole execution and should speed things up.
Just note that this shouldn't be an issue at all for larger datasets, for which Spark is designed (it would be faster to achieve the same result on a dataset of this size without using Spark at all). So in the general case you won't really need to do stuff like this; rather, tune the number of partitions to your resources/data.
spark.conf.set("spark.default.parallelism", "1")
spark.conf.set("spark.sql.shuffle.partitions", "1")
df = sc.parallelize([['1','1'],['2','2'],['3','3'],['4','4'],['5','5']], 1).toDF(['a','b'])
df.select('a').distinct().crossJoin(df.select('b').distinct()).count()
spark.conf.set sets the configuration for the current session only; if you want more permanent changes, make them in the actual Spark conf file.
Let's say I have a dataset with 20 partitions when I go to read some data. Then I do an aggregate operation on that dataset, which would make the number of partitions 200 (because of the default shuffle partitions setting). Now, without calling any action on that dataset so far, I apply coalesce on that same dataset, giving 30 partitions in the coalesce operation, and then call some Spark action on that dataset.
So my question is: how many partitions will be in play while that dataset has its aggregate operation performed? Will it be 30 partitions (because that was the coalesce value given), or the 200 shuffle partitions?
Editing to provide more clarification on my question:
I understand that the coalesce operation in itself will not shuffle unless we drastically change the number of partitions. I also understand that the final dataset will have numPartitions partitions only; but my question is, if I change the number of partitions before calling any action on that dataframe, will the resulting action operate on the final number of partitions we gave (in my case 30), or will it also honor the intermediate partition count we gave in the aggregate operation? So, in all, I am mainly asking whether the aggregation will be done with 200 partitions and then the coalesce applied, or whether the aggregation will also be performed with 30 (in my case) partitions only.
Yes, your final action will operate on the partitions generated by coalesce, which in your case is 30.
As we know, there are two types of transformations: narrow and wide.
Narrow transformations don't shuffle and don't repartition the data, but wide transformations shuffle the data between nodes and generate new partitions.
Coalesce (with its default shuffle = false) is a narrow dependency, so it does not add a shuffle stage of its own; instead it is folded into the stage that reads the shuffle output of your aggregation, and that stage then runs with the coalesced number of partitions.
So yes, your action is going to work on 30 partitions.
https://www.google.com/amp/s/data-flair.training/blogs/spark-rdd-operations-transformations-actions/amp/
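A minimal sketch of the scenario from the question (assuming a spark-shell session): aggregate, then coalesce(30), then act. Since coalesce is a narrow dependency, the stage that reads the aggregation's shuffle output runs as 30 tasks:
import spark.implicits._
val ds = spark.range(0, 1000000).withColumn("key", $"id" % 100)
val agg = ds.groupBy("key").count()   // shuffles to spark.sql.shuffle.partitions
val coalesced = agg.coalesce(30)
coalesced.rdd.getNumPartitions        // 30
coalesced.count()                     // the post-shuffle aggregation stage executes as 30 tasks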
Coalesce
Returns a new SparkDataFrame that has exactly numPartitions
partitions. This operation results in a narrow dependency, e.g. if you
go from 1000 partitions to 100 partitions, there will not be a
shuffle, instead each of the 100 new partitions will claim 10 of the
current partitions. If a larger number of partitions is requested, it
will stay at the current number of partitions.
However, if you're doing a drastic coalesce on a SparkDataFrame, e.g.
to numPartitions = 1, this may result in your computation taking place
on fewer nodes than you like (e.g. one node in the case of
numPartitions = 1). To avoid this, call repartition. This will add a
shuffle step, but means the current upstream partitions will be
executed in parallel (per whatever the current partitioning is).
https://spark.apache.org/docs/2.2.1/api/R/coalesce.html
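A small sketch of the trade-off that quote describes (assuming a spark-shell session): coalesce(1) pulls all upstream work into a single task, while repartition(1) keeps the upstream map parallel and only adds a shuffle at the end:
val big = sc.parallelize(1 to 1000000, 100)
val mapped = big.map(x => x * 2)  // stand-in for some expensive per-record work
mapped.coalesce(1).count()        // the map itself runs inside a single task
mapped.repartition(1).count()     // the map runs in 100 tasks, then a shuffle brings it to 1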
Coalesce: merges the data into an existing, smaller number of partitions without a full shuffle.
https://medium.com/@mrpowers/managing-spark-partitions-with-coalesce-and-repartition-4050c57ad5c4#.36o8a7b5j
I want to even out the partition size of rdds/dataframes in Spark to get rid of straggler tasks that slow my job down. I can do so using repartition(n_partition), which creates partitions of quite uniform size. However, that involves an expensive shuffle.
I know that coalesce(n_desired_partitions) is a cheaper alternative that avoids shuffling, and instead merges partitions on the same executor. However, it's not clear to me whether this function tries to create partitions of roughly uniform size, or simply merges input partitions without regard to their sizes.
For example, let's say we have an RDD of the integers in the range [1,12] in three partitions as follows: [(1,2,3,4,5,6,7,8),(9,10),(11,12)]. Let's say these are all on the same executor.
Now I call rdd.coalesce(2). Will the algorithm that powers coalesce know to merge the two small partitions (because they're smaller and we want balanced partition sizes), rather than just merging two arbitrary partitions?
Discussion of this topic elsewhere
According to this presentation (skip to 7:27) the Netflix big data team needed to implement a custom coalesce function to balance partition sizes. See also SPARK-14042.
Why this question's not a duplicate
There is a more general question about the differences between repartition and coalesce here, but none of the answers there explains whether the algorithm that powers coalesce tries to balance partition sizes.
So actually repartition is nothing special; its definition looks like this:
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
coalesce(numPartitions, shuffle = true)
}
So it's simply coalesce with shuffle = true, whereas when you call coalesce directly, shuffle is false by default, so it will not shuffle the data unless it is needed.
For example, say you have 2 cluster nodes and each holds 2 partitions. If you call rdd.coalesce(2), it will merge the local partitions on each node; if you call coalesce(1), all the data has to end up in one partition, so data from the other node has to be moved as well. So in your case it may merge the partitions that are local to a node, and if that node holds fewer elements, your partition sizes will not be uniform.
OK, following your edit to the question, I also tried the same, as follows:
val data = sc.parallelize(List(1,2,3,4,5,6,7,8,9,10,11,12))
data.getNumPartitions
res2: Int = 4
data.mapPartitionsWithIndex{case (a,b)=>println("partitionssss"+a);b.map(y=>println("dataaaaaaaaaaaa"+y))}.count
The output of the above code shows how the elements are spread across the 4 partitions.
Now I coalesce the 4 partitions to 2 (code below) and run the same code on that RDD to check how Spark coalesces the data; the output shows the new distribution.
You can easily see that Spark distributed the data equally across both partitions, 6 and 6, even though before the coalesce the number of elements was not the same in all partitions.
val coal=data.coalesce(2)
coal.getNumPartitions
res4: Int = 2
coal.mapPartitionsWithIndex{case (a,b)=>println("partitionssss"+a);b.map(y=>println("dataaaaaaaaaaaa"+y))}.count