How can I use delta cache function in local spark cluster mode? - apache-spark

I'd like to test delta-cache in local cluster mode (jupyter)
1. What I want to do:
The whole set of delta-formatted files shouldn't be re-downloaded every time; only new data should be re-downloaded
2. What I've tried
...
# cell1
spark.conf.set("spark.databricks.io.cache.enabled", "true")
# cell2
spark.sql("""
CREATE TABLE my_table2
USING DELTA
LOCATION 'MY_S3_DELTA_FORMATTED_PATH';
""")
# cell3
import time
s = time.time()
spark.sql("select * from my_table2").show()
print(time.time() - s)
But cell3 shows the same time taken every time I run the cell, which means that the table is not cached.
Did I miss something?

https://docs.databricks.com/optimizations/disk-cache.html
https://spark.apache.org/docs/3.0.0-preview/sql-ref-syntax-aux-cache-cache-table.html
Disk cache (the "delta cache") is different from Spark caching via SQL. The disk cache is a Databricks Runtime feature, so spark.databricks.io.cache.enabled has no effect on a plain local Spark cluster.
Read the links; then I think the difference will be apparent. You have missed something.
You need CACHE ... TABLE....
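A minimal sketch of the SQL-cache route the answer points at (table name reused from the question; on plain Spark this uses the standard in-memory Spark cache, not the Databricks disk cache):

```sql
-- cache the table explicitly (eager by default; add LAZY to defer until first use)
CACHE TABLE my_table2;

-- later, drop it from the cache
UNCACHE TABLE my_table2;
```

After the CACHE TABLE statement runs, a repeated `select * from my_table2` should read from the cache and the timing in cell3 should drop.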

Related

Spark goes java heap space out of memory with a small collect

I've got a problem with Spark, its driver and an OoM issue.
Currently I have a dataframe which is built from several joined sources (actually different tables in parquet format), holding thousands of tuples. Each record has a date representing its creation date, and the distinct values are few.
I do the following:
from pyspark.sql.functions import year, month
# ...
selectionRows = inputDataframe.select(year('registration_date').alias('year'), month('registration_date').alias('month')).distinct()
selectionRows.show() # correctly shows 8 tuples
selectionRows = selectionRows.collect() # goes heap space OoM
print(selectionRows)
Reading the memory consumption statistics shows that the driver does not exceed ~60%. I thought that the driver should load only the distinct subset, not the entire dataframe.
Am I missing something? Is it possible to collect those few rows in a smarter way? I need them as a pushdown predicate to load a secondary dataframe.
Thank you very much!
EDIT / SOLUTION
After reading the comments and elaborating my personal needs, I cached the dataframe at every "join/elaborate" step, so that in a timeline I do the following:
1. Join with the loaded table
2. Queue the required transformations
3. Apply the cache transformation
4. Print the count to keep track of cardinality (mainly for tracking/debugging purposes), which forces all queued transformations + the cache to run
5. Unpersist the cache of the previous sibling step, if available (tick/tock paradigm)
This reduced some complex ETL jobs down to 20% of the original time (previously each count was re-applying the transformations of every prior step).
Lesson learned :)
After reading the comments, I elaborated the solution for my use case.
As mentioned in the question, I join several tables one with each other in a "target dataframe", and at each iteration I do some transformations, like so:
# n-th table work
target = target.join(other, how='left')
target = target.filter(...)
target = target.withColumn('a', 'b')
target = target.select(...)
print(f'Target after table "other": {target.count()}')
The problem of slowness / OoM was that Spark was forced to redo all the transformations from start to finish at each table because of the ending count, making it slower and slower at each table / iteration.
The solution I found is to cache the dataframe at each iteration, like so:
cache = None  # previous cached version, if any
# ...
# n-th table work
target = target.join(other, how='left')
target = target.filter(...)
target = target.withColumn('a', 'b')
target = target.select(...)
target = target.cache()
target_count = target.count()  # actually materializes the cache
if cache is not None:
    cache.unpersist()  # free the memory held by the old cached version
cache = target
print(f'Target after table "other": {target_count}')

how to benchmark pyspark queries?

I have got a simple pyspark script and I would like to benchmark each section.
# section 1: prepare data
df = spark.read.option(...).csv(...)
df.registerTempTable("MyData")
# section 2: Dataframe API
avg_earnings = df.agg({"earnings": "avg"}).show()
# section 3: SQL
avg_earnings = spark.sql("""SELECT AVG(earnings)
FROM MyData""").show()
To generate reliable measurements one would need to run each section multiple times. My solution using the Python time module looks like this:
import time
for _ in range(iterations):
    t1 = time.time()
    df = spark.read.option(...).csv(...)
    df.registerTempTable("MyData")
    t2 = time.time()
    avg_earnings = df.agg({"earnings": "avg"}).show()
    t3 = time.time()
    avg_earnings = spark.sql("""SELECT AVG(earnings)
                                FROM MyData""").show()
    t4 = time.time()
    write_to_csv(t1, t2, t3, t4)
My question is: how would one benchmark each section? Would you use the time module as well? And how would one disable caching for pyspark?
Edit:
Plotting the first 5 iterations of the benchmark shows that pyspark is doing some form of caching.
How can I disable this behaviour ?
First, you can't benchmark using show(); it only computes and returns the top 20 rows.
Second, the PySpark DataFrame API and Spark SQL share the same Catalyst optimizer behind the scenes, so what you are doing (.agg vs AVG()) is pretty much the same and won't show much difference.
Third, benchmarking is usually only meaningful if your data is really big or your operation runs much longer than expected. Beyond that, if the runtime difference is only a couple of minutes, it doesn't really matter.
Anyway, to answer your question:
Yes, there is nothing wrong with using time.time() to measure.
You should use count() instead of show(); count() computes over your entire dataset.
You don't have to worry about caching if you don't call it: Spark won't cache unless you ask for it. In fact, you shouldn't cache at all when benchmarking.
You should also use static allocation instead of dynamic allocation. Or if you're using Databricks or EMR, use a fixed amount of workers and don't auto-scale it.
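Putting that advice together, a minimal harness one could use (the name `bench` is illustrative, not from the original; `time.perf_counter` is used instead of `time.time` because it is the recommended monotonic clock for interval timing):

```python
import time

def bench(fn, iterations=5):
    """Run fn() several times and report (best, mean) wall-clock seconds.

    fn should force a full computation, e.g. lambda: df.count(),
    not a .show(), which only materializes the top rows.
    """
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings)

# usage against Spark would look like:
# best, mean = bench(lambda: spark.sql("SELECT AVG(earnings) FROM MyData").count())
```

Reporting the best of several runs filters out one-off noise (JIT, connection setup), while the mean shows typical behaviour.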

DB Connect and workspace notebooks returns different results

I'm using DB Connect 9.1.9.
My cluster version is 9.1LTS with a single node (for test purposes).
My data is stored on a S3 as a delta table.
Running the following:
df = spark.sql("select * from <table_name> where runDate >= '2022-01-10 14:00:00' and runDate <= '2022-01-10 15:00:00'")
When I run it with DB Connect I get: 31.
When I run it on a Databricks Workspace: 462.
Of course you can't check those numbers; I just want to find out why we have a difference.
If I remove the condition on runDate, I get the same results on both platforms.
So I deduced that runDate was at fault, but I can't find why.
The schema:
StructType(List(StructField(id,StringType,False),
StructField(runDate,TimestampType,true)))
I have the same explain plan on both platforms too.
Did I miss something about Timestamp usage?
Update 1: funnily, when I put the count() inside the query directly, spark.sql("SELECT count(*) ..."), I still get 31 rows. It might be the way db-connect translates the query for the cluster.
The problem was the timezone associated to the Spark Session.
Add this after your spark session declaration (in case your dates are stored in UTC):
spark.conf.set("spark.sql.session.timeZone", "UTC")
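To see why the session time zone changes the result, here is a small illustration in plain Python (no Spark; Europe/Paris is just an assumed client zone, not taken from the question): the same wall-clock literal '2022-01-10 14:00:00' denotes a different instant under each zone, so a range filter on a TimestampType column selects a different window of rows.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

wall_clock = datetime(2022, 1, 10, 14, 0, 0)  # the naive literal from the query

# the same wall-clock time interpreted under two session time zones
as_utc = wall_clock.replace(tzinfo=ZoneInfo("UTC"))
as_paris = wall_clock.replace(tzinfo=ZoneInfo("Europe/Paris"))  # assumed client zone

# in January, Paris is UTC+1, so the two interpretations are one hour apart:
# the one-hour filter window shifts by an hour and matches a different set of rows
shift_hours = (as_utc - as_paris).total_seconds() / 3600
```

Pinning spark.sql.session.timeZone on both clients makes the two environments interpret the literals identically.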

avoid shuffling and long plans on multiple joins in pyspark

I am doing multiple joins with the same dataframe.
The dataframes I am joining with are the result of a group-by on my original dataframe:
listOfCols = ["a", "b", "c", ...]
for c in listOfCols:
    means = df.groupby(col(c)).agg(mean(target).alias(f"{c}_mean_encoding"))
    df = df.join(means, c, how="left")
This code produces more than 100,000 tasks and takes forever to finish.
I see a lot of shuffling happening in the DAG.
How can I optimize this code?
Well, after a LOT of tries and failures, I came up with the fastest solution.
Instead of 1.5 hours, the job ran for 5 minutes...
I will put it here so that if someone stumbles into this, he/she won't suffer as I did...
The solution was to use Spark SQL and express all the joins in a single statement, which avoids the ever-growing plan of the iterative dataframe approach:
df.registerTempTable("df")
left_join_string = ""
for c in listOfCols:
    means = df.groupby(F.col(c)).agg(F.mean(target).alias(f"{c}_mean_encoding"))
    means.registerTempTable(f"means_{c}")
    left_join_string += f" left join means_{c} on df.{c} = means_{c}.{c}"
df = sqlContext.sql("SELECT * FROM df" + left_join_string)
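For reference, what each means_{c} table holds and what the left join attaches back can be sketched in plain Python (toy data, no Spark; column names are illustrative):

```python
rows = [
    {"a": "x", "target": 1.0},
    {"a": "x", "target": 3.0},
    {"a": "y", "target": 5.0},
]

# per-value mean of the target column: this is what each `means_{c}` table holds
sums, counts = {}, {}
for r in rows:
    key = r["a"]
    sums[key] = sums.get(key, 0.0) + r["target"]
    counts[key] = counts.get(key, 0) + 1
means = {k: sums[k] / counts[k] for k in sums}

# the left join then attaches the encoding to every row of the original dataframe
encoded = [{**r, "a_mean_encoding": means[r["a"]]} for r in rows]
```

Since each means table is tiny (one row per distinct value), Spark can broadcast it in the join, which is part of why collapsing the joins into one plan is so much cheaper than growing the plan iteratively.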

Spark window function on dataframe with large number of columns

I have an ML dataframe which I read from csv files. It contains three types of columns:
ID Timestamp Feature1 Feature2...Feature_n
where n is ~ 500 (500 features in ML parlance). The total number of rows in the dataset is ~ 160 millions.
As this is the result of a previous full join, there are many features which do not have values set.
My aim is to run a "fill" function (fillna-style, from Python pandas), where each empty feature value is set to the previously available value for that column, per ID and date.
I am trying to achieve this with the following spark 2.2.1 code:
val rawDataset = sparkSession.read.option("header", "true").csv(inputLocation)
val window = Window.partitionBy("ID").orderBy("DATE").rowsBetween(-50000, -1)
val columns = Array(...) // first 30 columns initially, just to see it working
val rawDataSetFilled = columns.foldLeft(rawDataset) { (originalDF, columnToFill) =>
  originalDF.withColumn(columnToFill,
    coalesce(col(columnToFill), last(col(columnToFill), ignoreNulls = true).over(window)))
}
I am running this job on 4 m4.large instances on Amazon EMR, with Spark 2.2.1 and dynamic allocation enabled.
The job runs for over 2h without completing.
Am I doing something wrong, at the code level? Given the size of the data, and the instances, I would assume it should finish in a reasonable amount of time? And I haven't even tried with the full 500 columns, just with about 30!
Looking in the container logs, all I see are many logs like this:
INFO codegen.CodeGenerator: Code generated in 166.677493 ms
INFO execution.ExternalAppendOnlyUnsafeRowArray: Reached spill threshold of 4096 rows, switching to org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter
I have tried setting the parameter spark.sql.windowExec.buffer.spill.threshold to something larger, without any impact. Is there some other setting I should know about? Those two lines are the only ones I see in any container log.
In Ganglia, I see most of the CPU cores peaking around full usage, but the memory usage is lower than the maximum available. All executors are allocated and are doing work.
I have managed to rewrite the foldLeft logic without using withColumn calls. Apparently they can be very slow for a large number of columns, and I was also getting StackOverflowErrors because of that.
I would be curious to know why there is such a massive difference, and what exactly happens behind the scenes with query-plan execution that makes repeated withColumn calls so slow.
Links which proved very helpful: Spark Jira issue and this stackoverflow question
var rawDataset = sparkSession.read.option("header", "true").csv(inputLocation)
val window = Window.partitionBy("ID").orderBy("DATE")
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)
rawDataset = rawDataset.select(rawDataset.columns.map(column =>
  coalesce(col(column), last(col(column), ignoreNulls = true).over(window)).alias(column)): _*)
rawDataset.write.option("header", "true").csv(outputLocation)
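The per-partition semantics of coalesce(col, last(col, ignoreNulls = true).over(window)) with an unbounded-preceding-to-current-row window amount to a forward fill, which can be sketched in plain Python (toy list standing in for one ID partition ordered by DATE):

```python
def forward_fill(values):
    """Forward-fill None values within one ordered partition,
    mirroring last(col, ignoreNulls = true) over an
    unbounded-preceding-to-current-row window."""
    filled, last_seen = [], None
    for v in values:
        if v is not None:
            last_seen = v  # remember the most recent non-null value
        filled.append(last_seen)
    return filled
```

Leading Nones stay None (there is no earlier value to carry forward), just as the window version leaves them null.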
