Related
I'm new to Spark/Scala and I'm having trouble with a refactor of some of my code. I'm running Scala 2.11 with PySpark in a Spark/YARN setup. The following works, but I'd like to clean it up and get the maximum performance out of it. I've read elsewhere that PySpark UDFs and lambdas can cause a huge performance impact, so I'm trying to reduce or remove them where possible.
# Reduce ingest df1 data by joining on allowed table df2
to_process = df2\
.join(
sf.broadcast(df1),
df2.secondary_id == df1.secondary_id,
how="inner")\
.rdd\
.map(lambda r: Row(tag=r['tag_id'], user_uuid=r['user_uuid']))
# Type column fixed to type=2, and tag==key
ready_to_join = to_process.map(lambda r: (r[0], 2, r[1]))
# Join with cassandra table to find matches
exists_in_cass = ready_to_join\
.joinWithCassandraTable(keyspace, table3)\
.on("user_uuid", "type")\
.select("user_uuid")
log.error(f"TEST PRINT - [{exists_in_cass.count()}]")
The Cassandra table is defined as:
CREATE TABLE keyspace.table3 (
user_uuid uuid,
type int,
key text,
value text,
PRIMARY KEY (user_uuid, type, key)
) WITH CLUSTERING ORDER BY (type ASC, key ASC)
Currently I've got:
to_process = df2\
.join(
sf.broadcast(df1),
df2.secondary_id == df1.secondary_id,
how="inner")\
.select(col("user_uuid"), col("tag_id").alias("tag"))
ready_to_join = to_process\
.withColumn("type", sf.lit(2))\
.select('user_uuid', 'type', col('tag').alias("key"))\
.rdd\
.map(lambda x: Row(x))
# planning on using repartitionByCassandraReplica here after I get it logically working
exists_in_cass = ready_to_join\
.joinWithCassandraTable(keyspace, table3)\
.on("user_uuid", "type")\
.select("user_uuid")
log.error(f"TEST PRINT - [{exists_in_cass.count()}]")
but I'm getting errors like:
2020-10-30 15:10:42 WARN TaskSetManager:66 - Lost task 148.0 in stage 22.0 (TID ----, ---, executor 9): net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
I'm looking for help from any Spark gurus out there to point out anything stupid I'm doing here.
Update
Thanks to Alex's suggestion, spark-cassandra-connector v2.5+ gives DataFrames the ability to join directly with Cassandra. I updated my code to use this instead.
to_process = df2\
.join(
sf.broadcast(df1),
df2.secondary_id == df1.secondary_id,
how="inner")\
.select(col("user_uuid"), col("tag_id").alias("tag"))
ready_to_join = to_process\
.withColumn("type", sf.lit(2))\
.select(col('user_uuid').alias('c1_user_uuid'), 'type', col('tag').alias("key"))
cass_table = spark_session \
.read \
.format("org.apache.spark.sql.cassandra") \
.options(table=config.table, keyspace=config.keyspace) \
.load()
exists_in_cass = ready_to_join\
.join(
cass_table,
[(cass_table["user_uuid"] == ready_to_join["c1_user_uuid"]) &
(cass_table["key"] == ready_to_join["key"]) &
(cass_table["type"] == ready_to_join["type"])])\
.select(col("c1_user_uuid").alias("user_uuid"))
exists_in_cass.explain()
log.error(f"TEST PRINT - [{exists_in_cass.count()}]")
As far as I know, in theory this should be a lot faster! But I'm getting errors at runtime with the database timing out.
WARN TaskSetManager:66 - Lost task 827.0 in stage 12.0 (TID 9946, , executor 4): java.io.IOException: Exception during execution of SELECT "user_uuid", "key" FROM "keyspace"."table3" WHERE token("user_uuid") > ? AND token("user_uuid") <= ? AND "type" = ? ALLOW FILTERING: Query timed out after PT2M
TaskSetManager:66 - Lost task 125.0 in stage 12.0 (TID 9215, , executor 7): com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out after PT2M
etc
I have the Spark config set up to allow the Spark extensions:
--packages mysql:mysql-connector-java:5.1.47,com.datastax.spark:spark-cassandra-connector_2.11:2.5.1 \
--conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions \
The DAG from Spark shows all nodes completely maxed out. Should I be partitioning my data before running my join here?
The explain output for this also doesn't show a direct join (the explain covers more code than the snippet above):
== Physical Plan ==
*(6) Project [c1_user_uuid#124 AS user_uuid#158]
+- *(6) SortMergeJoin [c1_user_uuid#124, key#125L], [user_uuid#129, cast(key#131 as bigint)], Inner
:- *(3) Sort [c1_user_uuid#124 ASC NULLS FIRST, key#125L ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(c1_user_uuid#124, key#125L, 200)
: +- *(2) Project [id#0 AS c1_user_uuid#124, tag_id#101L AS key#125L]
: +- *(2) BroadcastHashJoin [secondary_id#60], [secondary_id#100], Inner, BuildRight
: :- *(2) Filter (isnotnull(secondary_id#60) && isnotnull(id#0))
: : +- InMemoryTableScan [secondary_id#60, id#0], [isnotnull(secondary_id#60), isnotnull(id#0)]
: : +- InMemoryRelation [secondary_id#60, id#0], StorageLevel(disk, memory, deserialized, 1 replicas)
: : +- *(7) Project [secondary_id#60, id#0]
: : +- Generate explode(split(secondary_ids#1, \|)), [id#0], false, [secondary_id#60]
: : +- *(6) Project [id#0, secondary_ids#1]
: : +- *(6) SortMergeJoin [id#0], [guid#46], Inner
: : :- *(2) Sort [id#0 ASC NULLS FIRST], false, 0
: : : +- Exchange hashpartitioning(id#0, 200)
: : : +- *(1) Filter (isnotnull(id#0) && id#0 RLIKE [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12})
: : : +- InMemoryTableScan [id#0, secondary_ids#1], [isnotnull(id#0), id#0 RLIKE [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}]
: : : +- InMemoryRelation [id#0, secondary_ids#1], StorageLevel(disk, memory, deserialized, 1 replicas)
: : : +- Exchange RoundRobinPartitioning(3840)
: : : +- *(1) Filter AtLeastNNulls(n, id#0,secondary_ids#1)
: : : +- *(1) FileScan csv [id#0,secondary_ids#1] Batched: false, Format: CSV, Location: InMemoryFileIndex[inputdata_file, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,secondary_ids:string>
: : +- *(5) Sort [guid#46 ASC NULLS FIRST], false, 0
: : +- Exchange hashpartitioning(guid#46, 200)
: : +- *(4) Filter (guid#46 RLIKE [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12} && isnotnull(guid#46))
: : +- Generate explode(set_guid#36), false, [guid#46]
: : +- *(3) Project [set_guid#36]
: : +- *(3) Filter (isnotnull(allowed#39) && (allowed#39 = 1))
: : +- *(3) FileScan orc whitelist.whitelist1[set_guid#36,region#39,timestamp#43] Batched: false, Format: ORC, Location: PrunedInMemoryFileIndex[hdfs://file, PartitionCount: 1, PartitionFilters: [isnotnull(timestamp#43), (timestamp#43 = 18567)], PushedFilters: [IsNotNull(region), EqualTo(region,1)], ReadSchema: struct<set_guid:array<string>,region:int>
: +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true]))
FROM TAG as T
JOIN MAP as M
ON T.tag_id = M.tag_id
WHERE (expire >= NOW() OR expire IS NULL)
ORDER BY T.tag_id) AS subset) [numPartitions=1] [secondary_id#100,tag_id#101L] PushedFilters: [*IsNotNull(secondary_id), *IsNotNull(tag_id)], ReadSchema: struct<secondary_id:string,tag_id:bigint>
+- *(5) Sort [user_uuid#129 ASC NULLS FIRST, cast(key#131 as bigint) ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(user_uuid#129, cast(key#131 as bigint), 200)
+- *(4) Project [user_uuid#129, key#131]
+- *(4) Scan org.apache.spark.sql.cassandra.CassandraSourceRelation [user_uuid#129,key#131] PushedFilters: [*EqualTo(type,2)], ReadSchema: struct<user_uuid:string,key:string>
I'm not getting the direct joins working, which is causing the timeouts.
Update 2
I think this isn't resolving to direct joins because the datatypes in my DataFrames are off, specifically the uuid type.
Instead of using the RDD API with PySpark, I suggest taking Spark Cassandra Connector (SCC) 2.5.x or 3.0.x (release announcement), which contains an implementation of the join of a DataFrame with Cassandra. In this case you won't need to go down to RDDs; just use normal DataFrame API joins.
Please note that this is not enabled by default, so you will need to start your pyspark or spark-submit with special configuration, like this:
pyspark --packages com.datastax.spark:spark-cassandra-connector_2.11:2.5.1 \
--conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
You can find more about joins with Cassandra in my recent blog post on this topic (although it uses Scala, the DataFrame part should translate almost one to one to PySpark).
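For reference, a rough, untested Scala sketch of that DataFrame-level join against the question's keyspace.table3 (the readyToJoin stand-in and the Cassandra connection settings such as spark.cassandra.connection.host are assumptions; the asker's own Update shows the PySpark version):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("cassandra-df-join")
  // Needed for SCC 2.5+ optimizations such as the direct join:
  .config("spark.sql.extensions", "com.datastax.spark.connector.CassandraSparkExtensions")
  .getOrCreate()
import spark.implicits._

// Cassandra table3 (schema from the question) exposed as a DataFrame
val cassTable = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "keyspace", "table" -> "table3"))
  .load()

// Stand-in for the (user_uuid, type, key) DataFrame built from df1/df2 in the question
val readyToJoin = Seq(("123e4567-e89b-12d3-a456-426614174000", 2, "some_tag"))
  .toDF("user_uuid", "type", "key")

// Plain DataFrame join. With the extensions enabled, SCC 2.5+ can rewrite this into a
// direct join against Cassandra, provided the join covers the partition key and the
// column types match (as Update 2 notes, a bigint key joined to a text key forces a
// cast in the plan and can prevent the rewrite).
val existsInCass = readyToJoin.join(
    cassTable,
    readyToJoin("user_uuid") === cassTable("user_uuid") &&
      readyToJoin("type") === cassTable("type") &&
      readyToJoin("key") === cassTable("key"))
  .select(readyToJoin("user_uuid"))

existsInCass.explain()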
I have multiple large DataFrames (around 30 GB) called as and bs, and a relatively small DataFrame (around 500 MB ~ 1 GB) called spp.
I tried to cache spp in memory in order to avoid reading the data from the database or files multiple times.
But I find that if I cache spp, the physical plan shows it won't use a broadcast join, even though spp is wrapped in the broadcast function.
However, if I unpersist spp, the plan shows it uses a broadcast join.
Is anyone familiar with this?
scala> spp.cache
res38: spp.type = [id: bigint, idPartner: int ... 41 more fields]
scala> val as = acs.join(broadcast(spp), $"idsegment" === $"idAdnetProductSegment")
as: org.apache.spark.sql.DataFrame = [idsegmentpartner: bigint, ssegmentsource: string ... 44 more fields]
scala> as.explain
== Physical Plan ==
*SortMergeJoin [idsegment#286L], [idAdnetProductSegment#91L], Inner
:- *Sort [idsegment#286L ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(idsegment#286L, 200)
: +- *Filter isnotnull(idsegment#286L)
: +- HiveTableScan [idsegmentpartner#282L, ssegmentsource#287, idsegment#286L], CatalogRelation `default`.`tblcustomsegmentcore`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [idcustomsegment#281L, idsegmentpartner#282L, ssegmentpartner#283, skey#284, svalue#285, idsegment#286L, ssegmentsource#287, datecreate#288]
+- *Sort [idAdnetProductSegment#91L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(idAdnetProductSegment#91L, 200)
+- *Filter isnotnull(idAdnetProductSegment#91L)
+- InMemoryTableScan [id#87L, idPartner#88, idSegmentPartner#89, sSegmentSourceArray#90, idAdnetProductSegment#91L, idPartnerProduct#92L, idFeed#93, idGlobalProduct#94, sBrand#95, sSku#96, sOnlineID#97, sGTIN#98, sProductCategory#99, sAvailability#100, sCondition#101, sDescription#102, sImageLink#103, sLink#104, sTitle#105, sMPN#106, sPrice#107, sAgeGroup#108, sColor#109, dateExpiration#110, sGender#111, sItemGroupId#112, sGoogleProductCategory#113, sMaterial#114, sPattern#115, sProductType#116, sSalePrice#117, sSalePriceEffectiveDate#118, sShipping#119, sShippingWeight#120, sShippingSize#121, sUnmappedAttributeList#122, sStatus#123, createdBy#124, updatedBy#125, dateCreate#126, dateUpdated#127, sProductKeyName#128, sProductKeyValue#129], [isnotnull(idAdnetProductSegment#91L)]
+- InMemoryRelation [id#87L, idPartner#88, idSegmentPartner#89, sSegmentSourceArray#90, idAdnetProductSegment#91L, idPartnerProduct#92L, idFeed#93, idGlobalProduct#94, sBrand#95, sSku#96, sOnlineID#97, sGTIN#98, sProductCategory#99, sAvailability#100, sCondition#101, sDescription#102, sImageLink#103, sLink#104, sTitle#105, sMPN#106, sPrice#107, sAgeGroup#108, sColor#109, dateExpiration#110, sGender#111, sItemGroupId#112, sGoogleProductCategory#113, sMaterial#114, sPattern#115, sProductType#116, sSalePrice#117, sSalePriceEffectiveDate#118, sShipping#119, sShippingWeight#120, sShippingSize#121, sUnmappedAttributeList#122, sStatus#123, createdBy#124, updatedBy#125, dateCreate#126, dateUpdated#127, sProductKeyName#128, sProductKeyValue#129], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
+- *Scan JDBCRelation(tblSegmentPartnerProduct) [numPartitions=1] [id#87L,idPartner#88,idSegmentPartner#89,sSegmentSourceArray#90,idAdnetProductSegment#91L,idPartnerProduct#92L,idFeed#93,idGlobalProduct#94,sBrand#95,sSku#96,sOnlineID#97,sGTIN#98,sProductCategory#99,sAvailability#100,sCondition#101,sDescription#102,sImageLink#103,sLink#104,sTitle#105,sMPN#106,sPrice#107,sAgeGroup#108,sColor#109,dateExpiration#110,sGender#111,sItemGroupId#112,sGoogleProductCategory#113,sMaterial#114,sPattern#115,sProductType#116,sSalePrice#117,sSalePriceEffectiveDate#118,sShipping#119,sShippingWeight#120,sShippingSize#121,sUnmappedAttributeList#122,sStatus#123,createdBy#124,updatedBy#125,dateCreate#126,dateUpdated#127,sProductKeyName#128,sProductKeyValue#129] ReadSchema: struct<id:bigint,idPartner:int,idSegmentPartner:int,sSegmentSourceArray:string,idAdnetProductSegm...
scala> spp.unpersist
res40: spp.type = [id: bigint, idPartner: int ... 41 more fields]
scala> as.explain
== Physical Plan ==
*BroadcastHashJoin [idsegment#286L], [idAdnetProductSegment#91L], Inner, BuildRight
:- *Filter isnotnull(idsegment#286L)
: +- HiveTableScan [idsegmentpartner#282L, ssegmentsource#287, idsegment#286L], CatalogRelation `default`.`tblcustomsegmentcore`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [idcustomsegment#281L, idsegmentpartner#282L, ssegmentpartner#283, skey#284, svalue#285, idsegment#286L, ssegmentsource#287, datecreate#288]
+- BroadcastExchange HashedRelationBroadcastMode(List(input[4, bigint, true]))
+- *Scan JDBCRelation(tblSegmentPartnerProduct) [numPartitions=1] [id#87L,idPartner#88,idSegmentPartner#89,sSegmentSourceArray#90,idAdnetProductSegment#91L,idPartnerProduct#92L,idFeed#93,idGlobalProduct#94,sBrand#95,sSku#96,sOnlineID#97,sGTIN#98,sProductCategory#99,sAvailability#100,sCondition#101,sDescription#102,sImageLink#103,sLink#104,sTitle#105,sMPN#106,sPrice#107,sAgeGroup#108,sColor#109,dateExpiration#110,sGender#111,sItemGroupId#112,sGoogleProductCategory#113,sMaterial#114,sPattern#115,sProductType#116,sSalePrice#117,sSalePriceEffectiveDate#118,sShipping#119,sShippingWeight#120,sShippingSize#121,sUnmappedAttributeList#122,sStatus#123,createdBy#124,updatedBy#125,dateCreate#126,dateUpdated#127,sProductKeyName#128,sProductKeyValue#129] PushedFilters: [*IsNotNull(idAdnetProductSegment)], ReadSchema: struct<id:bigint,idPartner:int,idSegmentPartner:int,sSegmentSourceArray:string,idAdnetProductSegm...
This happens when the analyzed plan tries to use the cached data: it swallows the ResolvedHint information supplied by the user (code).
If we do a df.explain(true), we will see that the hint is lost between the analyzed and optimized plans, which is where Spark tries to use the cached data.
This issue has been fixed in the latest versions of Spark (over multiple attempts).
Latest JIRA: https://issues.apache.org/jira/browse/SPARK-27674 .
Code where the fix lives (to consider the hint when using cached tables): https://github.com/apache/spark/blame/master/sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala#L219
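On versions that still carry this bug, a couple of hedged workarounds (a sketch only; the table and column names are taken from the question, and the threshold value is illustrative, not tested against the original setup):
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder().appName("broadcast-vs-cache").getOrCreate()
import spark.implicits._

// Stand-ins for the question's tables; in the original, spp comes from JDBC and acs from Hive.
val spp = spark.table("tblSegmentPartnerProduct")
val acs = spark.table("tblcustomsegmentcore")

// Option 1: apply the hint on an uncached copy, as the question's unpersist test demonstrates.
val joined1 = acs.join(broadcast(spp), $"idsegment" === $"idAdnetProductSegment")

// Option 2: keep the cache but let the planner broadcast from size statistics rather than
// the swallowed hint, by raising the threshold above the cached relation's ~1 GB estimate.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", (1200L * 1024 * 1024).toString)
spp.cache()
val joined2 = acs.join(spp, $"idsegment" === $"idAdnetProductSegment")
joined2.explain()  // check whether BroadcastHashJoin now appears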
I have a question about Spark DataFrame partitioning. I'm currently using Spark 1.6 due to project requirements. This is my code excerpt:
sqlContext.getConf("spark.sql.shuffle.partitions") // 6
val df = sc.parallelize(List(("A",1),("A",4),("A",2),("B",5),("C",2),("D",2),("E",2),("B",7),("C",9),("D",1))).toDF("id_1","val_1")
df.rdd.getNumPartitions // 4
val df2 = sc.parallelize(List(("B",1),("E",4),("H",2),("J",5),("C",2),("D",2),("F",2))).toDF("id_2","val_2")
df2.rdd.getNumPartitions // 4
val df3 = df.join(df2,$"id_1" === $"id_2")
df3.rdd.getNumPartitions // 6
val df4 = df3.repartition(3,$"id_1")
df4.rdd.getNumPartitions // 3
df4.explain(true)
The following is the explain plan that was created:
== Parsed Logical Plan ==
'RepartitionByExpression ['id_1], Some(3)
+- Join Inner, Some((id_1#42 = id_2#46))
:- Project [_1#40 AS id_1#42,_2#41 AS val_1#43]
: +- LogicalRDD [_1#40,_2#41], MapPartitionsRDD[169] at rddToDataFrameHolder at <console>:26
+- Project [_1#44 AS id_2#46,_2#45 AS val_2#47]
+- LogicalRDD [_1#44,_2#45], MapPartitionsRDD[173] at rddToDataFrameHolder at <console>:26
== Analyzed Logical Plan ==
id_1: string, val_1: int, id_2: string, val_2: int
RepartitionByExpression [id_1#42], Some(3)
+- Join Inner, Some((id_1#42 = id_2#46))
:- Project [_1#40 AS id_1#42,_2#41 AS val_1#43]
: +- LogicalRDD [_1#40,_2#41], MapPartitionsRDD[169] at rddToDataFrameHolder at <console>:26
+- Project [_1#44 AS id_2#46,_2#45 AS val_2#47]
+- LogicalRDD [_1#44,_2#45], MapPartitionsRDD[173] at rddToDataFrameHolder at <console>:26
== Optimized Logical Plan ==
RepartitionByExpression [id_1#42], Some(3)
+- Join Inner, Some((id_1#42 = id_2#46))
:- Project [_1#40 AS id_1#42,_2#41 AS val_1#43]
: +- LogicalRDD [_1#40,_2#41], MapPartitionsRDD[169] at rddToDataFrameHolder at <console>:26
+- Project [_1#44 AS id_2#46,_2#45 AS val_2#47]
+- LogicalRDD [_1#44,_2#45], MapPartitionsRDD[173] at rddToDataFrameHolder at <console>:26
== Physical Plan ==
TungstenExchange hashpartitioning(id_1#42,3), None
+- SortMergeJoin [id_1#42], [id_2#46]
:- Sort [id_1#42 ASC], false, 0
: +- TungstenExchange hashpartitioning(id_1#42,6), None
: +- Project [_1#40 AS id_1#42,_2#41 AS val_1#43]
: +- Scan ExistingRDD[_1#40,_2#41]
+- Sort [id_2#46 ASC], false, 0
+- TungstenExchange hashpartitioning(id_2#46,6), None
+- Project [_1#44 AS id_2#46,_2#45 AS val_2#47]
+- Scan ExistingRDD[_1#44,_2#45]
As far as I know, a DataFrame represents an abstraction interface over RDDs, so partitioning should be delegated to the Catalyst optimizer.
In fact, compared to RDDs, where many transformations accept a number-of-partitions parameter in order to optimize co-partitioning and co-location whenever possible, with DataFrames the only way to alter partitioning is to invoke the method repartition; otherwise the number of partitions for joins and aggregations is inferred from the configuration parameter spark.sql.shuffle.partitions.
From what I can see and understand from the explain plan above, it seems there is a useless repartition (so a shuffle, indeed) to 6 (the default value), followed by another repartition to the final value imposed by the method repartition.
I believe the optimizer could change the number of partitions of the join directly to the final value of 3.
Could someone help me clarify that point? Maybe I'm missing something.
If you use Spark SQL, your number of shuffle partitions is always equal to spark.sql.shuffle.partitions. But if you enable spark.sql.adaptive.enabled, it adds an ExchangeCoordinator. Right now, the work of this coordinator is to determine the number of post-shuffle partitions for a stage that needs to fetch shuffle data from one or multiple stages.
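For reference, a minimal sketch of the settings referred to above, assuming a Spark 2.x-era build (the property names are from that era and may differ in other versions):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("shuffle-partitions-demo")
  .config("spark.sql.shuffle.partitions", "6")        // fixed post-shuffle partition count
  .config("spark.sql.adaptive.enabled", "true")       // let the ExchangeCoordinator decide instead
  .config("spark.sql.adaptive.shuffle.targetPostShuffleInputSize", "67108864") // ~64 MB target per partition
  .config("spark.sql.autoBroadcastJoinThreshold", "-1") // force a shuffle join so the exchange is visible
  .getOrCreate()
import spark.implicits._

val df  = Seq(("A", 1), ("B", 5), ("C", 2)).toDF("id_1", "val_1")
val df2 = Seq(("B", 1), ("C", 2), ("D", 2)).toDF("id_2", "val_2")

// With adaptive execution enabled, the post-shuffle partition count of this join is chosen
// at runtime from the actual shuffle sizes instead of being read verbatim from
// spark.sql.shuffle.partitions.
df.join(df2, $"id_1" === $"id_2").explain()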
I have written an explicitJoin API which renames the columns in a Dataset with either an l_ or r_ prefix to disambiguate them and to solve problems with Spark lineage, i.e. columnName1#77 not found in columnName1#123, columnName2#55....
Part of the code is shown below:
def explicitJoin(other: Dataset[_], joinExpr: Column, joinType: String): ExplicitJoinExt = {
val left = dataset.toDF(dataset.columns.map("l_" + _): _*)
val right = other.toDF(other.columns.map("r_" + _): _*)
new ExplicitJoinExt(left.join(right, joinExpr, joinType))
}
Users may then pass a join expression such as $"l_columnName1" === $"r_columnName1" && ... so that they are 100% explicit about which columns they are joining on.
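For context, a self-contained sketch of how that snippet might be wired up; the ExplicitJoinExt class, the implicit conversion, and selectLeft are assumptions reconstructed from the usage described in this question, not the author's actual code:
import org.apache.spark.sql.{Column, DataFrame, Dataset}

class ExplicitJoinExt(val dataset: DataFrame) {

  // Rename both sides with l_/r_ prefixes, then join on a fully explicit expression.
  def explicitJoin(other: Dataset[_], joinExpr: Column, joinType: String): ExplicitJoinExt = {
    val left  = dataset.toDF(dataset.columns.map("l_" + _): _*)
    val right = other.toDF(other.columns.map("r_" + _): _*)
    new ExplicitJoinExt(left.join(right, joinExpr, joinType))
  }

  // Keep only the left-hand columns and strip the l_ prefix again.
  def selectLeft: ExplicitJoinExt = {
    val leftCols = dataset.columns.filter(_.startsWith("l_"))
    val trimmed  = dataset
      .select(leftCols.map(dataset(_)): _*)
      .toDF(leftCols.map(_.stripPrefix("l_")): _*)
    new ExplicitJoinExt(trimmed)
  }

  def toDF: DataFrame = dataset
}

object ExplicitJoinSyntax {
  import scala.language.implicitConversions
  // Lets plain Datasets pick up explicitJoin, as in ds1.explicitJoin(ds2, ..., "inner")
  implicit def datasetToExplicitJoinExt(ds: Dataset[_]): ExplicitJoinExt =
    new ExplicitJoinExt(ds.toDF)
}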
I am experiencing a new issue where partitions are too large to load into memory (org.apache.spark.shuffle.FetchFailedException: Too large frame....) yet there was no problem reading the input (partitioned) Datasets.
Can renaming columns affect the underlying partitioning of the input Datasets/DataFrames?
EDIT
Example 1 - regular join
case class A(a: Int, b: String)
val l = (0 to 1000000).map(i => A(i, i.toString))
val r = (0 to 1000000).map(i => A(i, i.toString))
val ds1 = l.toDF.as[A].repartition(100, $"a")
val ds2 = r.toDF.as[A].repartition(100, $"a")
val joined = ds1.join(ds2, Seq("a"), "inner")
joined.explain
== Physical Plan ==
*Project [a#2, b#3, b#15]
+- *SortMergeJoin [a#2], [a#14], Inner
:- *Sort [a#2 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(a#2, 100)
: +- LocalTableScan [a#2, b#3]
+- *Sort [a#14 ASC NULLS FIRST], false, 0
+- ReusedExchange [a#14, b#15], Exchange hashpartitioning(a#2, 100)
Example 2 - Using my (possibly misguided) ExplicitJoinExt involving renames
val joined = ds1
.explicitJoin(ds2, $"l_a" === $"r_a", "inner") // Pimped on conversion to ExplicitJoin type, columns prefixed by l_ or r_. DS joined by expr and join type
.selectLeft // Select just left prefixed columns
.toDF // Convert back from ExplicitJoinExpr to DF
.as[A]
joined.explain
== Physical Plan ==
*Project [l_a#24 AS a#53, l_b#25 AS b#54]
+- *BroadcastHashJoin [l_a#24], [r_a#29], Inner, BuildRight
:- *Project [a#2 AS l_a#24, b#3 AS l_b#25]
: +- Exchange hashpartitioning(a#2, 100)
: +- LocalTableScan [a#2, b#3]
+- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
+- *Project [a#14 AS r_a#29]
+- Exchange hashpartitioning(a#14, 100)
+- LocalTableScan [a#14]
So, for the second join it would appear that we are repartitioning again, correct?
No, I checked in Spark 2.3.1. Renaming does not affect partitioning, at least not with this approach:
val ds11 = ds1.repartition(4)
nor with this one, which I also checked:
val ds11 = ds1.repartition(2, $"cityid")
EXPLAIN Output for:
val j = left.join(right, $"l_personid" === $"r_personid", "inner").explain
reveals, in my case, 2 and 4 as the numbers of partitions:
== Physical Plan ==
*(2) BroadcastHashJoin [l_personid#641], [r_personid#647], Inner,
BuildRight, false
:- *(2) Project [personid#612 AS l_personid#641, personname#613 AS
l_personname#642, cityid#614 AS l_cityid#643]
: +- Exchange hashpartitioning(cityid#614, 2)
: +- LocalTableScan [personid#612, personname#613, cityid#614]
+- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
+- *(1) Project [personid#612 AS r_personid#647, personname#613 AS r_personname#648, cityid#614 AS r_cityid#649]
+- Exchange hashpartitioning(personid#612, 4)
+- LocalTableScan [personid#612, personname#613, cityid#614]
One can see that the renamed columns are mapped back to their original names.
In a test on another post we were able to ascertain that new actions relying on aggregations or joins will default to 200 shuffle partitions unless
sqlContext.setConf("spark.sql.shuffle.partitions", "some val")
is issued in the code to set this to the required value. If it is a small set of data being joined, etc., then the results may differ.
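A quick way to confirm that default in a spark-shell session (a sketch; the column names are illustrative, broadcast is disabled so the shuffle actually happens, and adaptive execution is switched off so newer versions don't coalesce the counts):
import spark.implicits._
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
spark.conf.set("spark.sql.adaptive.enabled", "false")

val left  = Seq((1, "a"), (2, "b")).toDF("id", "v1")
val right = Seq((1, "x"), (2, "y")).toDF("id", "v2")

println(left.join(right, "id").rdd.getNumPartitions)  // 200 with the default setting

spark.conf.set("spark.sql.shuffle.partitions", "8")
println(left.join(right, "id").rdd.getNumPartitions)  // 8 after overriding it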
For those still encountering this issue: renaming columns does affect partitioning in Spark < 3.0.
Seq((1, 2))
.toDF("a", "b")
.repartition($"b")
.withColumnRenamed("b", "c")
.repartition($"c")
.explain()
Gives the following plan:
== Physical Plan ==
Exchange hashpartitioning(c#40, 10)
+- *(1) Project [a#36, b#37 AS c#40]
+- Exchange hashpartitioning(b#37, 10)
+- LocalTableScan [a#36, b#37]
This was fixed in this PR.
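On affected versions (< 3.0), one workaround suggested by the example above is to rename first and repartition only by the final column name, so that a single Exchange is planned (a spark-shell sketch):
import spark.implicits._

Seq((1, 2))
  .toDF("a", "b")
  .withColumnRenamed("b", "c")
  .repartition($"c")
  .explain()
// Expect a single Exchange hashpartitioning on c instead of the two exchanges shown above.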
When I run
spark.sql("select bill_no, count(icode) from bigmart.o_sales group by bill_no").explain(true);
I get only this much explanation:
== Parsed Logical Plan ==
'Aggregate ['bill_no], ['bill_no AS bill#0, 'count('icode) AS icode#1]
+- 'UnresolvedRelation `bigmart`.`o_sales`
== Analyzed Logical Plan ==
bill: string, icode: bigint
Aggregate [bill_no#15], [bill_no#15 AS bill#0, count(icode#12) AS icode#1L]
+- MetastoreRelation bigmart, o_sales
== Optimized Logical Plan ==
Aggregate [bill_no#15], [bill_no#15 AS bill#0, count(icode#12) AS icode#1L]
+- Project [icode#12, bill_no#15]
+- MetastoreRelation bigmart, o_sales
== Physical Plan ==
*HashAggregate(keys=[bill_no#15], functions=[count(icode#12)], output=[bill#0, icode#1L])
+- Exchange hashpartitioning(bill_no#15, 200)
+- *HashAggregate(keys=[bill_no#15], functions=[partial_count(icode#12)], output=[bill_no#15, count#19L])
+- HiveTableScan [icode#12, bill_no#15], MetastoreRelation bigmart, o_sales
Is that all explain() can offer, or are there other methods that give more details? I want to learn how the map and reduce are done behind the scenes by Spark.
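For what it's worth, a couple of hooks that can expose more than the four plan sections above (a sketch using the Scala API; exact availability varies by Spark version):
import org.apache.spark.sql.execution.debug._

val q = spark.sql("select bill_no, count(icode) from bigmart.o_sales group by bill_no")

q.explain(true)                     // parsed, analyzed, optimized and physical plans
println(q.queryExecution.toString)  // the same plans as a programmatic QueryExecution object
q.debugCodegen()                    // generated Java code for each whole-stage-codegen subtree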