Efficiently caching data frames in Spark SQL - apache-spark

The use-case is to self-join a table multiple times.
// Hive Table
val network_file = spark.sqlContext.sql("SELECT * FROM test.network_file")
// Cache
network_file.cache()
network_file.createOrReplaceTempView("network_design")
The following query then self-joins the table multiple times.
val res = spark.sqlContext.sql("""select
one.sourcehub as source,
one.mappedhub as first_leg,
two.mappedhub as second_leg,
one.destinationhub as dest
from
(select * from network_design) one JOIN
(select * from network_design) two JOIN
(select * from network_design) three
ON (two.sourcehub = one.mappedhub )
AND (three.sourcehub = two.mappedhub)
AND (one.destinationhub = two.destinationhub )
AND (two.destinationhub = three.destinationhub)
group by source, first_leg, second_leg, dest
""")
The problem is that the physical plan of the above query suggests the table is being read three times:
== Physical Plan ==
*HashAggregate(keys=[sourcehub#83, mappedhub#85, mappedhub#109, destinationhub#84], functions=[])
+- Exchange hashpartitioning(sourcehub#83, mappedhub#85, mappedhub#109, destinationhub#84, 200)
+- *HashAggregate(keys=[sourcehub#83, mappedhub#85, mappedhub#109, destinationhub#84], functions=[])
+- *Project [sourcehub#83, destinationhub#84, mappedhub#85, mappedhub#109]
+- *BroadcastHashJoin [mappedhub#109, destinationhub#108], [sourcehub#110, destinationhub#111], Inner, BuildRight
:- *Project [sourcehub#83, destinationhub#84, mappedhub#85, destinationhub#108, mappedhub#109]
: +- *BroadcastHashJoin [mappedhub#85, destinationhub#84], [sourcehub#107, destinationhub#108], Inner, BuildRight
: :- *Filter (isnotnull(destinationhub#84) && isnotnull(mappedhub#85))
: : +- InMemoryTableScan [sourcehub#83, destinationhub#84, mappedhub#85], [isnotnull(destinationhub#84), isnotnull(mappedhub#85)]
: : +- InMemoryRelation [sourcehub#83, destinationhub#84, mappedhub#85], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
: : +- HiveTableScan [sourcehub#0, destinationhub#1, mappedhub#2], HiveTableRelation `test`.`network_file`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [sourcehub#0, destinationhub#1, mappedhub#2]
: +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, false], input[1, string, false]))
: +- *Filter ((isnotnull(sourcehub#107) && isnotnull(destinationhub#108)) && isnotnull(mappedhub#109))
: +- InMemoryTableScan [sourcehub#107, destinationhub#108, mappedhub#109], [isnotnull(sourcehub#107), isnotnull(destinationhub#108), isnotnull(mappedhub#109)]
: +- InMemoryRelation [sourcehub#107, destinationhub#108, mappedhub#109], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
: +- HiveTableScan [sourcehub#0, destinationhub#1, mappedhub#2], HiveTableRelation `test`.`network_file`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [sourcehub#0, destinationhub#1, mappedhub#2]
+- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, false], input[1, string, false]))
+- *Filter (isnotnull(sourcehub#110) && isnotnull(destinationhub#111))
+- InMemoryTableScan [sourcehub#110, destinationhub#111], [isnotnull(sourcehub#110), isnotnull(destinationhub#111)]
+- InMemoryRelation [sourcehub#110, destinationhub#111, mappedhub#112], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
+- HiveTableScan [sourcehub#0, destinationhub#1, mappedhub#2], HiveTableRelation `test`.`network_file`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [sourcehub#0, destinationhub#1, mappedhub#2]
Shouldn't Spark cache the table once instead of reading it multiple times?
How can we efficiently cache tables in Spark for self-join cases like this?
Spark version: 2.2
Hive ORC is the downstream store.

This sequence of statements ignores the data frame that is to be cached:
network_file.cache() // the result of this call is not used at all
network_file.createOrReplaceTempView("network_design") // does not have the cached DF in its lineage
You should either assign the returned data frame to a new variable or register the temp view on it directly:
val cached_network_file = network_file.cache()
cached_network_file.createOrReplaceTempView("network_design")
Or:
network_file.cache().createOrReplaceTempView("network_design")
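Alternatively, you can register the view first and then cache it through the catalog by name; a minimal sketch, assuming Spark 2.2 and the names used above:
// Register the temp view, then cache it by name
network_file.createOrReplaceTempView("network_design")
spark.catalog.cacheTable("network_design")
// Optional sanity check that the view is backed by the cache
assert(spark.catalog.isCached("network_design"))
Either way, the cached relation should be materialized only once on first use; the InMemoryTableScan branches in the plan then read from the same cached data instead of re-scanning the Hive table.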

Related

pyspark joinWithCassandraTable refactor without maps

I'm new to using Spark/Scala here and I'm having trouble with a refactor of some of my code. I'm running Scala 2.11 with PySpark in a Spark/YARN setup. The following works, but I'd like to clean it up and get the maximum performance out of it. I read elsewhere that PySpark UDFs and lambdas can cause a huge performance impact, so I'm trying to reduce or remove them where possible.
import pyspark.sql.functions as sf
from pyspark.sql import Row

# Reduce ingested df1 data by joining on the allowed table df2
to_process = df2\
    .join(
        sf.broadcast(df1),
        df2.secondary_id == df1.secondary_id,
        how="inner")\
    .rdd\
    .map(lambda r: Row(tag=r['tag_id'], user_uuid=r['user_uuid']))
# Type column fixed to type=2, and tag == key
ready_to_join = to_process.map(lambda r: (r[0], 2, r[1]))
# Join with the Cassandra table to find matches
exists_in_cass = ready_to_join\
    .joinWithCassandraTable(keyspace, table3)\
    .on("user_uuid", "type")\
    .select("user_uuid")
log.error(f"TEST PRINT - [{exists_in_cass.count()}]")
The Cassandra table is defined as:
CREATE TABLE keyspace.table3 (
user_uuid uuid,
type int,
key text,
value text,
PRIMARY KEY (user_uuid, type, key)
) WITH CLUSTERING ORDER BY (type ASC, key ASC)
Currently I've got:
to_process = df2\
    .join(
        sf.broadcast(df1),
        df2.secondary_id == df1.secondary_id,
        how="inner")\
    .select(col("user_uuid"), col("tag_id").alias("tag"))
ready_to_join = to_process\
    .withColumn("type", sf.lit(2))\
    .select('user_uuid', 'type', col('tag').alias("key"))\
    .rdd\
    .map(lambda x: Row(x))
# planning on using repartitionByCassandraReplica here after I get it logically working
exists_in_cass = ready_to_join\
    .joinWithCassandraTable(keyspace, table3)\
    .on("user_uuid", "type")\
    .select("user_uuid")
log.error(f"TEST PRINT - [{exists_in_cass.count()}]")
but I'm getting errors like:
2020-10-30 15:10:42 WARN TaskSetManager:66 - Lost task 148.0 in stage 22.0 (TID ----, ---, executor 9): net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
I'm looking for help from any Spark gurus out there to point out anything silly I'm doing here.
Update
Thanks to Alex's suggestion: the spark-cassandra-connector v2.5+ gives DataFrames the ability to join directly with Cassandra. I updated my code to use this instead.
to_process = df2\
    .join(
        sf.broadcast(df1),
        df2.secondary_id == df1.secondary_id,
        how="inner")\
    .select(col("user_uuid"), col("tag_id").alias("tag"))
ready_to_join = to_process\
    .withColumn("type", sf.lit(2))\
    .select(col('user_uuid').alias('c1_user_uuid'), 'type', col('tag').alias("key"))
cass_table = spark_session\
    .read\
    .format("org.apache.spark.sql.cassandra")\
    .options(table=config.table, keyspace=config.keyspace)\
    .load()
exists_in_cass = ready_to_join\
    .join(
        cass_table,
        [(cass_table["user_uuid"] == ready_to_join["c1_user_uuid"]) &
         (cass_table["key"] == ready_to_join["key"]) &
         (cass_table["type"] == ready_to_join["type"])])\
    .select(col("c1_user_uuid").alias("user_uuid"))
exists_in_cass.explain()
log.error(f"TEST PRINT - [{exists_in_cass.count()}]")
As far as I know, in theory this should be a lot faster, but I'm getting runtime errors with the database timing out:
WARN TaskSetManager:66 - Lost task 827.0 in stage 12.0 (TID 9946, , executor 4): java.io.IOException: Exception during execution of SELECT "user_uuid", "key" FROM "keyspace"."table3" WHERE token("user_uuid") > ? AND token("user_uuid") <= ? AND "type" = ? ALLOW FILTERING: Query timed out after PT2M
TaskSetManager:66 - Lost task 125.0 in stage 12.0 (TID 9215, , executor 7): com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out after PT2M
etc
I have Spark configured to load the Cassandra extensions:
--packages mysql:mysql-connector-java:5.1.47,com.datastax.spark:spark-cassandra-connector_2.11:2.5.1 \
--conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions \
The Spark DAG shows all nodes completely maxed out. Should I be partitioning my data before running the join here?
The explain output also doesn't show a direct join (the explain covers more code than the snippet above):
== Physical Plan ==
*(6) Project [c1_user_uuid#124 AS user_uuid#158]
+- *(6) SortMergeJoin [c1_user_uuid#124, key#125L], [user_uuid#129, cast(key#131 as bigint)], Inner
:- *(3) Sort [c1_user_uuid#124 ASC NULLS FIRST, key#125L ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(c1_user_uuid#124, key#125L, 200)
: +- *(2) Project [id#0 AS c1_user_uuid#124, tag_id#101L AS key#125L]
: +- *(2) BroadcastHashJoin [secondary_id#60], [secondary_id#100], Inner, BuildRight
: :- *(2) Filter (isnotnull(secondary_id#60) && isnotnull(id#0))
: : +- InMemoryTableScan [secondary_id#60, id#0], [isnotnull(secondary_id#60), isnotnull(id#0)]
: : +- InMemoryRelation [secondary_id#60, id#0], StorageLevel(disk, memory, deserialized, 1 replicas)
: : +- *(7) Project [secondary_id#60, id#0]
: : +- Generate explode(split(secondary_ids#1, \|)), [id#0], false, [secondary_id#60]
: : +- *(6) Project [id#0, secondary_ids#1]
: : +- *(6) SortMergeJoin [id#0], [guid#46], Inner
: : :- *(2) Sort [id#0 ASC NULLS FIRST], false, 0
: : : +- Exchange hashpartitioning(id#0, 200)
: : : +- *(1) Filter (isnotnull(id#0) && id#0 RLIKE [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12})
: : : +- InMemoryTableScan [id#0, secondary_ids#1], [isnotnull(id#0), id#0 RLIKE [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}]
: : : +- InMemoryRelation [id#0, secondary_ids#1], StorageLevel(disk, memory, deserialized, 1 replicas)
: : : +- Exchange RoundRobinPartitioning(3840)
: : : +- *(1) Filter AtLeastNNulls(n, id#0,secondary_ids#1)
: : : +- *(1) FileScan csv [id#0,secondary_ids#1] Batched: false, Format: CSV, Location: InMemoryFileIndex[inputdata_file, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,secondary_ids:string>
: : +- *(5) Sort [guid#46 ASC NULLS FIRST], false, 0
: : +- Exchange hashpartitioning(guid#46, 200)
: : +- *(4) Filter (guid#46 RLIKE [0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12} && isnotnull(guid#46))
: : +- Generate explode(set_guid#36), false, [guid#46]
: : +- *(3) Project [set_guid#36]
: : +- *(3) Filter (isnotnull(allowed#39) && (allowed#39 = 1))
: : +- *(3) FileScan orc whitelist.whitelist1[set_guid#36,region#39,timestamp#43] Batched: false, Format: ORC, Location: PrunedInMemoryFileIndex[hdfs://file, PartitionCount: 1, PartitionFilters: [isnotnull(timestamp#43), (timestamp#43 = 18567)], PushedFilters: [IsNotNull(region), EqualTo(region,1)], ReadSchema: struct<set_guid:array<string>,region:int>
: +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true]))
FROM TAG as T
JOIN MAP as M
ON T.tag_id = M.tag_id
WHERE (expire >= NOW() OR expire IS NULL)
ORDER BY T.tag_id) AS subset) [numPartitions=1] [secondary_id#100,tag_id#101L] PushedFilters: [*IsNotNull(secondary_id), *IsNotNull(tag_id)], ReadSchema: struct<secondary_id:string,tag_id:bigint>
+- *(5) Sort [user_uuid#129 ASC NULLS FIRST, cast(key#131 as bigint) ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(user_uuid#129, cast(key#131 as bigint), 200)
+- *(4) Project [user_uuid#129, key#131]
+- *(4) Scan org.apache.spark.sql.cassandra.CassandraSourceRelation [user_uuid#129,key#131] PushedFilters: [*EqualTo(type,2)], ReadSchema: struct<user_uuid:string,key:string>
I'm not getting the direct join working, which is causing the timeouts.
Update 2
I think this isn't resolving to a direct join because the data types in my DataFrames are off, specifically the uuid type.
Instead of using the RDD API with PySpark, I suggest taking Spark Cassandra Connector (SCC) 2.5.x or 3.0.x (release announcement), which contains an implementation of joining a DataFrame with Cassandra. In this case you won't need to go down to RDDs; just use normal DataFrame API joins.
Please note that this is not enabled by default, so you will need to start your pyspark or spark-submit with special configuration, like this:
pyspark --packages com.datastax.spark:spark-cassandra-connector_2.11:2.5.1 \
--conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
You can find more about joins with Cassandra in my recent blog post on this topic (although it uses Scala, the DataFrame part should translate almost one to one to PySpark).
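For reference, here is a minimal Scala sketch of such a DataFrame-level join (the keyspace, table, and column names are taken from the question; whether the optimizer actually turns it into a direct join depends on CassandraSparkExtensions being enabled as shown above):
// Read the Cassandra table as a DataFrame (Spark Cassandra Connector 2.5+)
val cassTable = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "keyspace", "table" -> "table3"))
  .load()
// readyToJoin is assumed to already have user_uuid, type and key columns
val existsInCass = readyToJoin
  .join(cassTable, Seq("user_uuid", "type", "key"))
  .select("user_uuid")
existsInCass.explain()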

How to filter rows where key is not present in a large dataframe

Suppose I have a streaming dataframe A and a large static dataframe B. Assume that typically A is of size < 10000 records. However, B is a much larger dataframe with size in the range of millions.
Let's assume both A and B have a 'key' column. I want to filter rows in A where A.key is not present in B. What is the best way to achieve this?
Right now, I have tried A.join(B, Seq("key"), "left_anti"). However, the performance is not up to the mark. Is there any way I can speed up the process?
Physical plan:
== Physical Plan ==
SortMergeJoin [domainName#461], [domain#147], LeftAnti
:- *(5) Sort [domainName#461 ASC NULLS FIRST], false, 0
: +- StreamingDeduplicate [domainName#461], state info [ checkpoint = hdfs://MTPrime-CO4-fed/MTPrime-CO4-0/projects/BingAdsAdQuality/Test/WhoIs/WhoIsStream/checkPoint/state, runId = 9d09398b-efda-41cb-ab77-1b5550cd5da9, opId = 0, ver = 63, numPartitions = 400], 0
: +- Exchange hashpartitioning(domainName#461, 400)
: +- Union
: :- *(2) Project [value#460 AS domainName#461]
: : +- *(2) Filter isnotnull(value#460)
: : +- *(2) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#460]
: : +- MapPartitions <function1>, obj#459: java.lang.String
: : +- MapPartitions <function1>, obj#436: MTInterfaces.Fraud.RiskEntity
: : +- DeserializeToObject newInstance(class scala.Tuple3), obj#435: scala.Tuple3
: : +- Exchange RoundRobinPartitioning(600)
: : +- *(1) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple3, true])._1, true, false) AS _1#142, staticinvoke(class org.apache.spark.sql.catalyst.util.DateTimeUtils$, TimestampType, fromJavaTimestamp, assertnotnull(input[0, scala.Tuple3, true])._2, true, false) AS _2#143, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple3, true])._3, true, false) AS _3#144]
: : +- *(1) MapElements <function1>, obj#141: scala.Tuple3
: : +- *(1) MapElements <function1>, obj#132: scala.Tuple3
: : +- *(1) DeserializeToObject createexternalrow(Body#60.toString, staticinvoke(class org.apache.spark.sql.catalyst.util.DateTimeUtils$, ObjectType(class java.sql.Timestamp), toJavaTimestamp, EventTime#37, true, false), Timestamp#48L, Offset#27L, Partition#72.toString, PartitionKey#84.toString, Publisher#96.toString, SequenceNumber#108L, StructField(Body,StringType,true), StructField(EventTime,TimestampType,true), StructField(Timestamp,LongType,true), StructField(Offset,LongType,true), StructField(Partition,StringType,true), StructField(PartitionKey,StringType,true), StructField(Publisher,StringType,true), StructField(SequenceNumber,LongType,true)), obj#131: org.apache.spark.sql.Row
: : +- *(1) Project [cast(body#608 as string) AS Body#60, enqueuedTime#612 AS EventTime#37, cast(enqueuedTime#612 as bigint) AS Timestamp#48L, cast(offset#610 as bigint) AS Offset#27L, partition#609 AS Partition#72, partitionKey#614 AS PartitionKey#84, publisher#613 AS Publisher#96, sequenceNumber#611L AS SequenceNumber#108L]
: : +- Scan ExistingRDD[body#608,partition#609,offset#610,sequenceNumber#611L,enqueuedTime#612,publisher#613,partitionKey#614,properties#615,systemProperties#616]
: +- *(4) Project [value#453 AS domainName#455]
: +- *(4) Filter isnotnull(value#453)
: +- *(4) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#453]
: +- *(4) MapElements <function1>, obj#452: java.lang.String
: +- MapPartitions <function1>, obj#436: MTInterfaces.Fraud.RiskEntity
: +- DeserializeToObject newInstance(class scala.Tuple3), obj#435: scala.Tuple3
: +- ReusedExchange [_1#142, _2#143, _3#144], Exchange RoundRobinPartitioning(600)
+- *(8) Project [domain#147]
+- *(8) Filter (isnotnull(rank#284) && (rank#284 = 1))
+- Window [row_number() windowspecdefinition(domain#147, timestamp#151 DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS rank#284], [domain#147], [timestamp#151 DESC NULLS LAST]
+- *(7) Sort [domain#147 ASC NULLS FIRST, timestamp#151 DESC NULLS LAST], false, 0
+- Exchange hashpartitioning(domain#147, 400)
+- *(6) Project [domain#147, timestamp#151]
+- *(6) Filter isnotnull(domain#147)
+- *(6) FileScan csv [domain#147,timestamp#151] Batched: false, Format: CSV, Location: InMemoryFileIndex[hdfs://MTPrime-CO4-fed/MTPrime-CO4-0/projects/BingAdsAdQuality/Test/WhoIs], PartitionFilters: [], PushedFilters: [IsNotNull(domain)], ReadSchema: struct<domain:string,timestamp:string>
Snapshots of query graph:
EDIT
Right now I have moved the lookup data to a Cosmos DB store and created a temp view on top of it (say lookupdata). Now I need to keep only the rows whose key is not present in the store. I am exploring the following options:
1. Create a temp view on top of the streaming data as well and query:
spark.sql("SELECT * FROM streamingdata s LEFT ANTI JOIN lookupdata l ON s.key = l.key")
2. Same as 1, but use a NOT IN sub-query instead of a left anti join:
spark.sql("SELECT s.* FROM streamingdata s WHERE s.key NOT IN (SELECT key FROM lookupdata l)")
3. Retain the streaming df as it is and do a filter op:
df.filter(x => {
  val key = x.getAs[String]("key")
  spark.sql("SELECT * FROM lookupdata l WHERE l.key = '" + key + "'").isEmpty
})
Which one would work better?
Please try:
import org.apache.spark.sql.functions.broadcast
A.join(broadcast(B), Seq("key"), "left_anti")
This is not the recommended approach with (Structured) Streaming. Imagine you are a Chinese company with 100M customers: how do you see that working on a B with 100M rows?
From my last assignment: if a large reference dataset is evident, use HBase or some other key-value store such as Cassandra, with mapPartitions, whether the data is volatile or non-volatile. This is more difficult, though; it was no easy task, the data engineer/designer told me. Indeed, it is not that easy, but it is the way to go.
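A minimal sketch of that mapPartitions pattern, assuming a hypothetical KeyValueClient wrapper around your HBase/Cassandra driver and a batch DataFrame A (for the streaming case this would typically run inside foreachBatch):
// Hypothetical wrapper around an HBase/Cassandra client; not a real library API.
trait KeyValueClient { def exists(key: String): Boolean; def close(): Unit }
object KeyValueClient { def connect(): KeyValueClient = ??? }

// Keep only rows whose key is NOT present in the external store,
// opening one client connection per partition instead of one per row.
val kept = A.rdd.mapPartitions { rows =>
  val client = KeyValueClient.connect()
  val remaining = rows.filter(row => !client.exists(row.getAs[String]("key"))).toList
  client.close()
  remaining.iterator
}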

Apache Spark 2.2: broadcast join not working when you already cache the dataframe which you want to broadcast

I have multiple large dataframes (around 30 GB) called as and bs, and a relatively small dataframe (around 500 MB to 1 GB) called spp.
I tried to cache spp in memory in order to avoid reading the data from the database or files multiple times.
But I find that if I cache spp, the physical plan shows it won't use a broadcast join, even though spp is wrapped in the broadcast function.
However, if I unpersist spp, the plan shows a broadcast join.
Is anyone familiar with this?
scala> spp.cache
res38: spp.type = [id: bigint, idPartner: int ... 41 more fields]
scala> val as = acs.join(broadcast(spp), $"idsegment" === $"idAdnetProductSegment")
as: org.apache.spark.sql.DataFrame = [idsegmentpartner: bigint, ssegmentsource: string ... 44 more fields]
scala> as.explain
== Physical Plan ==
*SortMergeJoin [idsegment#286L], [idAdnetProductSegment#91L], Inner
:- *Sort [idsegment#286L ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(idsegment#286L, 200)
: +- *Filter isnotnull(idsegment#286L)
: +- HiveTableScan [idsegmentpartner#282L, ssegmentsource#287, idsegment#286L], CatalogRelation `default`.`tblcustomsegmentcore`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [idcustomsegment#281L, idsegmentpartner#282L, ssegmentpartner#283, skey#284, svalue#285, idsegment#286L, ssegmentsource#287, datecreate#288]
+- *Sort [idAdnetProductSegment#91L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(idAdnetProductSegment#91L, 200)
+- *Filter isnotnull(idAdnetProductSegment#91L)
+- InMemoryTableScan [id#87L, idPartner#88, idSegmentPartner#89, sSegmentSourceArray#90, idAdnetProductSegment#91L, idPartnerProduct#92L, idFeed#93, idGlobalProduct#94, sBrand#95, sSku#96, sOnlineID#97, sGTIN#98, sProductCategory#99, sAvailability#100, sCondition#101, sDescription#102, sImageLink#103, sLink#104, sTitle#105, sMPN#106, sPrice#107, sAgeGroup#108, sColor#109, dateExpiration#110, sGender#111, sItemGroupId#112, sGoogleProductCategory#113, sMaterial#114, sPattern#115, sProductType#116, sSalePrice#117, sSalePriceEffectiveDate#118, sShipping#119, sShippingWeight#120, sShippingSize#121, sUnmappedAttributeList#122, sStatus#123, createdBy#124, updatedBy#125, dateCreate#126, dateUpdated#127, sProductKeyName#128, sProductKeyValue#129], [isnotnull(idAdnetProductSegment#91L)]
+- InMemoryRelation [id#87L, idPartner#88, idSegmentPartner#89, sSegmentSourceArray#90, idAdnetProductSegment#91L, idPartnerProduct#92L, idFeed#93, idGlobalProduct#94, sBrand#95, sSku#96, sOnlineID#97, sGTIN#98, sProductCategory#99, sAvailability#100, sCondition#101, sDescription#102, sImageLink#103, sLink#104, sTitle#105, sMPN#106, sPrice#107, sAgeGroup#108, sColor#109, dateExpiration#110, sGender#111, sItemGroupId#112, sGoogleProductCategory#113, sMaterial#114, sPattern#115, sProductType#116, sSalePrice#117, sSalePriceEffectiveDate#118, sShipping#119, sShippingWeight#120, sShippingSize#121, sUnmappedAttributeList#122, sStatus#123, createdBy#124, updatedBy#125, dateCreate#126, dateUpdated#127, sProductKeyName#128, sProductKeyValue#129], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
+- *Scan JDBCRelation(tblSegmentPartnerProduct) [numPartitions=1] [id#87L,idPartner#88,idSegmentPartner#89,sSegmentSourceArray#90,idAdnetProductSegment#91L,idPartnerProduct#92L,idFeed#93,idGlobalProduct#94,sBrand#95,sSku#96,sOnlineID#97,sGTIN#98,sProductCategory#99,sAvailability#100,sCondition#101,sDescription#102,sImageLink#103,sLink#104,sTitle#105,sMPN#106,sPrice#107,sAgeGroup#108,sColor#109,dateExpiration#110,sGender#111,sItemGroupId#112,sGoogleProductCategory#113,sMaterial#114,sPattern#115,sProductType#116,sSalePrice#117,sSalePriceEffectiveDate#118,sShipping#119,sShippingWeight#120,sShippingSize#121,sUnmappedAttributeList#122,sStatus#123,createdBy#124,updatedBy#125,dateCreate#126,dateUpdated#127,sProductKeyName#128,sProductKeyValue#129] ReadSchema: struct<id:bigint,idPartner:int,idSegmentPartner:int,sSegmentSourceArray:string,idAdnetProductSegm...
scala> spp.unpersist
res40: spp.type = [id: bigint, idPartner: int ... 41 more fields]
scala> as.explain
== Physical Plan ==
*BroadcastHashJoin [idsegment#286L], [idAdnetProductSegment#91L], Inner, BuildRight
:- *Filter isnotnull(idsegment#286L)
: +- HiveTableScan [idsegmentpartner#282L, ssegmentsource#287, idsegment#286L], CatalogRelation `default`.`tblcustomsegmentcore`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [idcustomsegment#281L, idsegmentpartner#282L, ssegmentpartner#283, skey#284, svalue#285, idsegment#286L, ssegmentsource#287, datecreate#288]
+- BroadcastExchange HashedRelationBroadcastMode(List(input[4, bigint, true]))
+- *Scan JDBCRelation(tblSegmentPartnerProduct) [numPartitions=1] [id#87L,idPartner#88,idSegmentPartner#89,sSegmentSourceArray#90,idAdnetProductSegment#91L,idPartnerProduct#92L,idFeed#93,idGlobalProduct#94,sBrand#95,sSku#96,sOnlineID#97,sGTIN#98,sProductCategory#99,sAvailability#100,sCondition#101,sDescription#102,sImageLink#103,sLink#104,sTitle#105,sMPN#106,sPrice#107,sAgeGroup#108,sColor#109,dateExpiration#110,sGender#111,sItemGroupId#112,sGoogleProductCategory#113,sMaterial#114,sPattern#115,sProductType#116,sSalePrice#117,sSalePriceEffectiveDate#118,sShipping#119,sShippingWeight#120,sShippingSize#121,sUnmappedAttributeList#122,sStatus#123,createdBy#124,updatedBy#125,dateCreate#126,dateUpdated#127,sProductKeyName#128,sProductKeyValue#129] PushedFilters: [*IsNotNull(idAdnetProductSegment)], ReadSchema: struct<id:bigint,idPartner:int,idSegmentPartner:int,sSegmentSourceArray:string,idAdnetProductSegm...
This happens when the analyzed plan tries to use the cached data: it swallows the ResolvedHint information supplied by the user code.
If we do a df.explain(true), we can see that the hint is lost between the analyzed and the optimized plan, which is where Spark substitutes the cached data.
This issue has been fixed in later versions of Spark (in multiple attempts).
Latest JIRA: https://issues.apache.org/jira/browse/SPARK-27674
Code with the fix (to consider the hint when using cached tables): https://github.com/apache/spark/blame/master/sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala#L219
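On Spark 2.2, one workaround is to let the optimizer choose the broadcast from size statistics instead of the (swallowed) hint, by raising spark.sql.autoBroadcastJoinThreshold above the size of spp. A hedged sketch; the 2 GB threshold is just an illustrative value:
// Raise the auto-broadcast threshold above the cached table's size (illustrative value)
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 2L * 1024 * 1024 * 1024)

spp.cache()
spp.count() // materialize the cache so the InMemoryRelation carries size statistics

val as = acs.join(spp, $"idsegment" === $"idAdnetProductSegment")
as.explain() // should now show BroadcastHashJoin over the InMemoryTableScan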

spark data frame left outer join is taking lot time

I have two dataframes, ipwithCounryName (12 MB) and ipLogs (1 GB). I would like to join the two data frames based on a common column, ipRange. I broadcast the ipwithCounryName df. Below is my code.
val ipwithCounryName_df = Init.iptoCountryBC.value
ipwithCounryName_df .createOrReplaceTempView("inputTable")
ipLogs.createOrReplaceTempView("ipTable")
val joined_table= Init.getSparkSession.sql("SELECT hostname,date,path,status,content_size,inputTable.countryName FROM ipasLong Left JOIN inputTable ON ipasLongValue >= StartingRange AND ipasLongValue <= Endingrange")
== Physical Plan ==
*Project [hostname#34, date#98, path#36, status#37, content_size#105L,
countryName#5]
+- BroadcastNestedLoopJoin BuildRight, Inner, ((ipasLongValue#354L >=
StartingRange#2L) && (ipasLongValue#354L <= Endingrange#3L))
:- *Project [UDF:IpToInt(hostname#34) AS IpasLongValue#354L, hostname#34,
date#98, path#36, status#37, content_size#105L]
: +- *Filter ((isnotnull(isIp#112) && isIp#112) &&
isnotnull(UDF:IpToInt(hostname#34)))
: +- InMemoryTableScan [path#36, content_size#105L, isIp#112,
hostname#34, date#98, status#37], [isnotnull(isIp#112), isIp#112,
isnotnull(UDF:IpToInt(hostname#34))]
: +- InMemoryRelation [hostname#34, date#98, path#36, status#37,
content_size#105L, isIp#112], true, 10000, StorageLevel(disk, memory,
deserialized, 1 replicas)
: +- *Project [hostname#34, cast(unix_timestamp(date#35,
dd/MMM/yyyy:HH:mm:ss ZZZZ, Some(Asia/Calcutta)) as timestamp) AS date#98,
path#36, status#37, CASE WHEN isnull(content_size#38L) THEN 0 ELSE
content_size#38L END AS content_size#105L, UDF(hostname#34) AS isIp#112]
: +- *Filter (isnotnull(isBadData#45) && NOT isBadData#45)
: +- InMemoryTableScan [isBadData#45, hostname#34,
status#37, path#36, date#35, content_size#38L], [isnotnull(isBadData#45), NOT
isBadData#45]
: +- InMemoryRelation [hostname#34, date#35,
path#36, status#37, content_size#38L, isBadData#45], true, 10000,
StorageLevel(disk, memory, deserialized, 1 replicas)
: +- *Project [regexp_extract(val#26,
^([^\s]+\s), 1) AS hostname#34, regexp_extract(val#26, ^.*
(\d\d/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4}), 1) AS date#35,
regexp_extract(val#26, ^.*"\w+\s+([^\s]+)\s*[(HTTP)]*.*", 1) AS path#36,
cast(regexp_extract(val#26, ^.*"\s+([^\s]+), 1) as int) AS status#37,
cast(regexp_extract(val#26, ^.*\s+(\d+)$, 1) as bigint) AS content_size#38L,
UDF(named_struct(hostname, regexp_extract(val#26, ^([^\s]+\s), 1), date,
regexp_extract(val#26, ^.*(\d\d/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4}), 1),
path, regexp_extract(val#26, ^.*"\w+\s+([^\s]+)\s*[(HTTP)]*.*", 1), status,
cast(regexp_extract(val#26, ^.*"\s+([^\s]+), 1) as int), content_size,
cast(regexp_extract(val#26, ^.*\s+(\d+)$, 1) as bigint))) AS isBadData#45]
: +- *FileScan csv [val#26] Batched:
false, Format: CSV, Location:
InMemoryFileIndex[file:/C:/Users/M1047320/Desktop/access_log_Jul95],
PartitionFilters: [], PushedFilters: [], ReadSchema: struct<val:string>
+- BroadcastExchange IdentityBroadcastMode
+- *Project [StartingRange#2L, Endingrange#3L, CountryName#5]
+- *Filter (isnotnull(StartingRange#2L) && isnotnull(Endingrange#3L))
+- *FileScan csv [StartingRange#2L,Endingrange#3L,CountryName#5] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:/C:/Users/M1047320/Documents/Spark-301/Documents/GeoIPCountryWhois.csv], PartitionFilters: [], PushedFilters: [IsNotNull(StartingRange), IsNotNull(Endingrange)], ReadSchema: struct<StartingRange:bigint,Endingrange:bigint,CountryName:string>
The join is taking a long time (more than 30 minutes). I have one more inner join between two different dataframes of the same size where the join condition is "=", and it takes only 5 minutes. How should I improve my code? Please suggest.
Please keep the filter condition in the WHERE clause and join the tables on a common column name. I assumed countryName is common across both DFs.
val joined_table = Init.getSparkSession.sql("""SELECT hostname, date, path, status, content_size, inputTable.countryName
FROM ipasLong LEFT JOIN inputTable ON ipasLong.countryName = inputTable.countryName
WHERE ipasLongValue >= StartingRange AND ipasLongValue <= Endingrange""")
You can also join the dataframes directly:
val result = ipLogs.join(broadcast(ipwithCounryName), <join condition>, "left_outer")
  .where($"ipasLongValue" >= $"StartingRange" && $"ipasLongValue" <= $"Endingrange")
  .select(<columns to keep>)
Hope it helps you.
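For reference, here is a minimal sketch of that broadcast range join spelled out with the column names from the question (ipasLongValue, StartingRange, Endingrange); the exact DataFrame variable names and the presence of a precomputed ipasLongValue column are assumptions:
import org.apache.spark.sql.functions.broadcast

// Non-equi (range) join: each log row is matched against the broadcast IP-range table
val joined = ipLogs.join(
    broadcast(ipwithCounryName),
    ipLogs("ipasLongValue") >= ipwithCounryName("StartingRange") &&
      ipLogs("ipasLongValue") <= ipwithCounryName("Endingrange"),
    "left_outer")
  .select("hostname", "date", "path", "status", "content_size", "countryName")
Because the condition is a range rather than an equality, Spark still plans this as a BroadcastNestedLoopJoin; broadcasting the small range table is what keeps it tractable.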
You can try increasing your JVM parameters to the capacity of your system to fully utilize it like below:
spark-submit --driver-memory 12G --conf spark.driver.maxResultSize=3g --executor-cores 6 --executor-memory 16G

Understanding SparkSQL and its usage of partitioning

I am trying to evaluate Spark SQL for some data manipulation queries. The scenario I'm interested in is this:
table1: key, value1, value2
table2: key, value3, value4
create table table3 as
select * from table1 join table2 on table1.key = table2.key
It sounds like I should be able to create the table1 and table2 RDDs (but I don't see a very obvious example of that in the docs).
But the bigger question is this -- if I have successfully partitioned the 2 table RDDs by key and then go to join them with Spark SQL, will it be smart enough to take advantage of the partitioning? And if I create a new RDD as a result of that join, will it also be partitioned? In other words, will it be completely shuffle-free?
I would really appreciate pointers to documentation and or examples on these subjects.
If you mean conversions between RDDs and Datasets, then the answer to both questions is negative.
RDD partitioning is defined only for RDD[(T, U)] and is lost after the RDD is converted to a Dataset. There are some cases where you can benefit from a pre-existing data layout, but join is not one of them, especially since RDDs and Datasets use different hashing techniques (the standard hashCode and MurmurHash, respectively; you could of course mimic the latter by defining a custom RDD partitioner, but that is not really the point).
Similarly, partitioning information is lost when a Dataset is converted to an RDD.
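A quick spark-shell sketch of that loss of partitioning information (a hedged illustration; the toy key/value pairs are arbitrary):
import org.apache.spark.HashPartitioner
import spark.implicits._

// An RDD with an explicit partitioner...
val rdd = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
  .partitionBy(new HashPartitioner(4))
rdd.partitioner                    // Some(org.apache.spark.HashPartitioner@...)

// ...loses it once it round-trips through a Dataset
val df = rdd.toDF("key", "value")
df.rdd.partitioner                 // None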
You can, however, use Dataset partitioning to optimize joins. For example, if the tables have been pre-partitioned:
val n: Int = ???
val df1 = Seq(
("key1", "val1", "val2"), ("key2", "val3", "val4")
).toDF("key", "val1", "val2").repartition(n, $"key").cache
val df2 = Seq(
("key1", "val5", "val6"), ("key2", "val7", "val8")
).toDF("key", "val3", "val4").repartition(n, $"key").cache
subsequent join based on the key won't require additional exchange.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1
df1.explain
// == Physical Plan ==
// InMemoryTableScan [key#171, val1#172, val2#173]
// +- InMemoryRelation [key#171, val1#172, val2#173], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
// +- Exchange hashpartitioning(key#171, 3)
// +- LocalTableScan [key#171, val1#172, val2#173]
df2.explain
// == Physical Plan ==
// InMemoryTableScan [key#201, val3#202, val4#203]
// +- InMemoryRelation [key#201, val3#202, val4#203], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
// +- Exchange hashpartitioning(key#201, 3)
// +- LocalTableScan [key#201, val3#202, val4#203]
//
df1.join(df3, Seq("key")).explain
// == Physical Plan ==
// *Project [key#171, val1#172, val2#173, val5#232, val6#233]
// +- *SortMergeJoin [key#171], [key#231], Inner
// :- *Sort [key#171 ASC], false, 0
// : +- *Filter isnotnull(key#171)
// : +- InMemoryTableScan [key#171, val1#172, val2#173], [isnotnull(key#171)]
// : +- InMemoryRelation [key#171, val1#172, val2#173], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
// : +- Exchange hashpartitioning(key#171, 3)
// : +- LocalTableScan [key#171, val1#172, val2#173]
// +- *Sort [key#231 ASC], false, 0
// +- *Filter isnotnull(key#231)
// +- InMemoryTableScan [key#231, val5#232, val6#233], [isnotnull(key#231)]
// +- InMemoryRelation [key#231, val5#232, val6#233], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
// +- Exchange hashpartitioning(key#231, 3)
// +- LocalTableScan [key#231, val5#232, val6#233]
Obviously we don't really benefit from that on a single join. So it makes sense only if a single table is used for multiple joins.
Spark can also benefit from the partitioning created by a join, so if we wanted to perform another join:
val df3 = Seq(
("key1", "val9", "val10"), ("key2", "val11", "val12")
).toDF("key", "val5", "val6")
df1.join(df3, Seq("key")).join(df3, Seq("key"))
we would benefit from the structure created by the first operation (note ReusedExchange):
// == Physical Plan ==
// *Project [key#171, val1#172, val2#173, val5#682, val6#683, val5#712, val6#713]
// +- *SortMergeJoin [key#171], [key#711], Inner
// :- *Project [key#171, val1#172, val2#173, val5#682, val6#683]
// : +- *SortMergeJoin [key#171], [key#681], Inner
// : :- *Sort [key#171 ASC], false, 0
// : : +- Exchange hashpartitioning(key#171, 200)
// : : +- *Filter isnotnull(key#171)
// : : +- InMemoryTableScan [key#171, val1#172, val2#173], [isnotnull(key#171)]
// : : +- InMemoryRelation [key#171, val1#172, val2#173], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
// : : +- Exchange hashpartitioning(key#171, 3)
// : : +- LocalTableScan [key#171, val1#172, val2#173]
// : +- *Sort [key#681 ASC], false, 0
// : +- Exchange hashpartitioning(key#681, 200)
// : +- *Project [_1#677 AS key#681, _2#678 AS val5#682, _3#679 AS val6#683]
// : +- *Filter isnotnull(_1#677)
// : +- LocalTableScan [_1#677, _2#678, _3#679]
// +- *Sort [key#711 ASC], false, 0
// +- ReusedExchange [key#711, val5#712, val6#713], Exchange hashpartitioning(key#681, 200)
