Cross Join in Apache Spark with dataset is very slow - apache-spark

I have posted this question on the Spark user forum but received no response, so I am asking it here again.
We have a use case where we need to do a Cartesian join, and for some reason we are not able to get it to work with the Dataset API.
We have two datasets:
one dataset with 2 string columns, say c1 and c2. It is a small dataset with ~1 million records. Both columns are 32-character strings, so it should be less than 500 MB.
We broadcast this dataset.
the other dataset is a little bigger, with ~10 million records
val ds1 = spark.read.format("csv").option("header", "true").load(<s3-location>).select("c1", "c2")
ds1.count
val ds2 = spark.read.format("csv").load(<s3-location>).toDF("c11", "c12", "c13", "c14", "c15", "ts")
ds2.count
ds2.crossJoin(broadcast(ds1)).filter($"c1" <= $"c11" && $"c11" <= $"c2").count
If I implement it using the RDD API, where I broadcast the data in ds1 and then filter the data in ds2, it works fine.
I have confirmed the broadcast is successful.
2019-02-14 23:11:55 INFO CodeGenerator:54 - Code generated in 10.469136 ms
2019-02-14 23:11:55 INFO TorrentBroadcast:54 - Started reading broadcast variable 29
2019-02-14 23:11:55 INFO TorrentBroadcast:54 - Reading broadcast variable 29 took 6 ms
2019-02-14 23:11:56 INFO CodeGenerator:54 - Code generated in 11.280087 ms
Query Plan:
== Physical Plan ==
BroadcastNestedLoopJoin BuildRight, Cross, ((c1#68 <= c11#13) && (c11#13 <= c2#69))
:- *Project []
: +- *Filter isnotnull(_c0#0)
: +- *FileScan csv [_c0#0,_c1#1,_c2#2,_c3#3,_c4#4,_c5#5] Batched: false, Format: CSV, Location: InMemoryFileIndex[], PartitionFilters: [], PushedFilters: [IsNotNull(_c0)], ReadSchema: struct<_c0:string,_c1:string,_c2:string,_c3:string,_c4:string,_c5:string>
+- BroadcastExchange IdentityBroadcastMode
+- *Project [c1#68, c2#69]
+- *Filter (isnotnull(c1#68) && isnotnull(c2#69))
+- *FileScan csv [c1#68,c2#69] Batched: false, Format: CSV, Location: InMemoryFileIndex[], PartitionFilters: [], PushedFilters: [IsNotNull(c1), IsNotNull(c2)], ReadSchema: struct
Then the stage does not progress.
I updated the code to broadcast ds1 and then do the join in mapPartitions over ds2.
val ranges = spark.read.format("csv").option("header", "true").load(<s3-location>).select("c1", "c2").collect
val rangesBC = sc.broadcast(ranges)
then used this rangesBC in the mapPartitions method to identify the range each row in ds2 belongs to (sketched below); this job completes in 3 hrs, while the other job does not complete even after 24 hrs. This kind of implies that the query optimizer is not doing what I want it to do.
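A simplified sketch of the mapPartitions step (the exact matching logic and output columns here are illustrative and may differ from the actual job):
val matched = ds2.rdd.mapPartitions { rows =>
  val rs = rangesBC.value                     // Array[Row] of (c1, c2) ranges
  rows.flatMap { row =>
    val c11 = row.getString(0)                // assumes c11 is the first column of ds2
    // linear scan over the broadcast ranges; a sorted structure would be faster
    rs.collectFirst {
      case r if r.getString(0) <= c11 && c11 <= r.getString(1) =>
        (c11, r.getString(0), r.getString(1))
    }
  }
}
matched.count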
What am I doing wrong? Any pointers will be helpful. Thank you!

I have run into this issue recently and found that Spark has strange partitioning behavior when cross joining large dataframes. If your input dataframes contain a few million records, then the cross-joined dataframe has a number of partitions equal to the product of the input dataframes' partitions, that is
Partitions of crossJoinDF = (Partitions of ds1) * (Partitions of ds2).
If ds1 or ds2 contains a few hundred partitions, then the cross-joined dataframe would have on the order of ~10,000 partitions. These are way too many partitions, which results in excessive overhead managing many small tasks, making any computation (in your case, the filter) on the cross-joined dataframe very slow to run.
So how do you make the computation faster? First check if this is indeed the issue for your problem:
scala> val crossJoinDF = ds2.crossJoin(ds1)
// This should return immediately because of Spark's lazy evaluation
scala> val crossJoinDFPartitions = crossJoinDF.rdd.partitions.size
Check the number of partitions of the cross-joined dataframe. If crossJoinDFPartitions > 10,000, then you do indeed have the same issue, i.e. the cross-joined dataframe has way too many partitions.
To make your operations on the cross-joined dataframe faster, reduce the number of partitions of the input DataFrames. For example:
scala> val ds1 = ds1.repartition(40)
scala> ds1.rdd.partitions.size
res80: Int = 40
scala> val ds2 = ds2.repartition(40)
scala> ds2.rdd.partitions.size
res81: Int = 40
scala> val crossJoinDF = ds1.crossJoin(ds2)
scala> crossJoinDF.rdd.partitions.size
res82: Int = 1600
scala> crossJoinDF.count()
The count() action should trigger execution of the cross join, and the count should now return in a reasonable amount of time. The exact number of partitions you choose depends on the number of cores available in your cluster.
The key takeaway here is to make sure that your cross-joined dataframe has a reasonable number of partitions (<< 10,000). You might also find this post useful; it explains the issue in more detail.

I do not know if you are on bare metal, or on AWS with spot, on-demand, or dedicated instances, or on VMs with Azure, et al. My take:
Appreciate that 10M x 1M is a lot of work, even if .filter applies to the resulting cross join. It will take some time. What were your expectations?
Spark is all about scaling in a linear way in general.
Data centers with VMs are not dedicated and hence do not have the fastest performance.
Then:
I ran 10M x 100K on Databricks in a simulated set-up with 0.86 cores and 6 GB on the Driver for Community Edition. That ran in 17 mins.
I ran the 10M x 1M from your example on a 4-node, non-dedicated AWS EMR cluster (with some EMR oddities like reserving the Driver on a valuable instance!), and it took 3 hours for partial completion. See the picture below.
So, to answer your question:
- You did nothing wrong.
- You just need more resources to allow more parallelisation.
- I did add some explicit partitioning, as you can see (sketched below).
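Something along these lines, as a rough sketch of the explicit partitioning (the partition count is illustrative and should be sized to the total cores available):
import org.apache.spark.sql.functions.broadcast

// Spread the 10M-row side across more tasks before the broadcast cross join.
val ds2r = ds2.repartition(400)
ds2r.crossJoin(broadcast(ds1))
  .filter($"c1" <= $"c11" && $"c11" <= $"c2")
  .count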

Related

Is there any way to count how many partitions are reached by a Spark query on Hadoop?

I want to stop a Spark query if the query takes more than 10 mins.
But this is just for one partition.
I mean, if the query reaches 2 partitions in Hadoop, then the threshold should be 20 mins.
For example, for this query I need a 10-minute threshold:
SELECT Max(col1),
Min(col2)
FROM my_parititoned_table_on_hadoop
WHERE partitioned_column = 1
For this one I need a 20-minute threshold:
SELECT Max(col1),
Min(col2)
FROM my_parititoned_table_on_hadoop
WHERE partitioned_column IN ( 1, 2 )
Is this possible?
The answer to the question in your title ("Is there any way to count how many partitions...") is "yes" if your data is stored as parquet. You can run explain() on your query and see how many partitions will be scanned during query execution. For example:
scala> spark.sql("select * from tab where p > '1' and p <'4'").explain()
== Physical Plan ==
*(1) FileScan parquet default.tab[id#375,desc#376,p#377] Batched: true, Format: Parquet,
Location: PrunedInMemoryFileIndex[hdfs://ns1/user/hive/warehouse/tab/p=2, hdfs://ns1/user/hive/warehouse...,
PartitionCount: 2, PartitionFilters: [isnotnull(p#377), (p#377 > 1), (p#377 < 4)],
PushedFilters: [], ReadSchema: struct<id:int,desc:string>
...from which PartitionCount: x can be parsed quite easily.
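If you want that number programmatically rather than by eyeballing the output, one option (a sketch, using the same example query) is to regex it out of the plan string:
// Pull "PartitionCount: N" out of the physical plan string.
val plan = spark.sql("select * from tab where p > '1' and p < '4'")
  .queryExecution.executedPlan.toString
val partitionCount = "PartitionCount: (\\d+)".r
  .findFirstMatchIn(plan)
  .map(_.group(1).toInt)              // e.g. Some(2) for the plan above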
The second question (which is technically a statement: "I want to stop a Spark query if the query takes more than 10 mins") gets a "no", just as @thebluephantom said:
No. There is no such support in Spark.
And there is AQE, which for some queries may alter the number of partitions/tasks on the fly. What would that mean?
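For reference, AQE is governed by the following Spark 3.x settings, so the number of shuffle partitions (and hence tasks) can indeed change at runtime:
// Spark 3.x: adaptive query execution can coalesce shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")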

Spark Performance issue - Writing partitions to S3 as individual files

I'm running a Spark job whose purpose is to scan a large file and split it into smaller files. The file is in JSON Lines format and I'm trying to partition it by a certain column (id) and save each partition as a separate file to S3. The file size is about 12 GB, but there are about 500,000 distinct values of id. The query is taking almost 15 hours. What can I do to improve performance? Is Spark a poor choice for such a task? Please note that I do have the liberty of making sure that the source has a fixed number of rows per id.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from pyspark.sql import functions as F
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.sql.window import Window
from awsglue.utils import getResolvedOptions
from awsglue.transforms import *
from pyspark.sql.functions import udf, substring, instr, locate
from datetime import datetime, timedelta
sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
# Get parameters that were passed to the job
args = getResolvedOptions(sys.argv, ['INPUT_FOLDER', 'OUTPUT_FOLDER', 'ID_TYPE', 'DATASET_DATE'])
id_type = args["ID_TYPE"]
output_folder = "{}/{}/{}".format(args["OUTPUT_FOLDER"], id_type, args["DATASET_DATE"])
input_folder = "{}/{}/{}".format(args["INPUT_FOLDER"], id_type, args["DATASET_DATE"])
INS_SCHEMA = StructType([
    StructField("camera_capture_timestamp", StringType(), True),
    StructField(id_type, StringType(), True),
    StructField("image_uri", StringType(), True)
])
data = spark.read.format("json").load(input_folder, schema=INS_SCHEMA)
data = data.withColumn("fnsku_1", F.col("fnsku"))
data.coalesce(1).write.partitionBy(["fnsku_1"]).mode('append').json(output_folder)
I have tried repartition instead of coalesce too.
I'm using AWS Glue
Please consider the following as one possible option. It would be awesome to see if it helped :)
First, if you coalesce, as @Lamanus said in the comments, you reduce the number of partitions and hence the number of writer tasks, and therefore shuffle all the data to 1 task. That can be the first thing to improve.
To overcome the issue, i.e. write a file per partition while keeping the parallelization level, you can change the logic to the following:
import org.apache.spark.sql.SparkSession

object TestSoAnswer extends App {

  private val testSparkSession = SparkSession.builder()
    .appName("Demo groupBy and partitionBy").master("local[*]")
    .getOrCreate()
  import testSparkSession.implicits._

  // Input dataset with 5 partitions
  val dataset = testSparkSession.sparkContext.parallelize(Seq(
    TestData("a", 0), TestData("a", 1), TestData("b", 0), TestData("b", 1),
    TestData("c", 1), TestData("c", 2)
  ), 5).toDF("letter", "number")

  dataset.as[TestData].groupByKey(row => row.letter)
    .flatMapGroups {
      case (_, values) => values
    }.write.partitionBy("letter").mode("append").json("/tmp/test-parallel-write")

}

case class TestData(letter: String, number: Int)
How does it work?
First, the code performs a shuffle to collect all rows related to a specific key (the same key as for the partitioning) into the same partitions. That way, the write for all rows belonging to a given key happens at once. Some time ago I wrote a blog post about the partitionBy method. Roughly, internally it will sort the records of the given partition and later write them one by one into the file.
That way we get a plan like the one below, where only 1 shuffle, i.e. the processing-consuming operation, is present:
== Physical Plan ==
*(2) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, TestData, true])).letter, true, false) AS letter#22, knownnotnull(assertnotnull(input[0, TestData, true])).number AS number#23]
+- MapGroups TestSoAnswer$$$Lambda$1236/295519299#55c50f52, value#18.toString, newInstance(class TestData), [value#18], [letter#3, number#4], obj#21: TestData
+- *(1) Sort [value#18 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(value#18, 200), true, [id=#15]
+- AppendColumnsWithObject TestSoAnswer$$$Lambda$1234/1747367695#6df11e91, [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, knownnotnull(assertnotnull(input[0, TestData, true])).letter, true, false) AS letter#3, knownnotnull(assertnotnull(input[0, TestData, true])).number AS number#4], [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#18]
+- Scan[obj#2]
The output of the TestSoAnswer executed twice looks like that:
test-parallel-write % ls
_SUCCESS letter=a letter=b letter=c
test-parallel-write % ls letter=a
part-00170-68245d8b-b155-40ca-9b5c-d9fb746ac76c.c000.json part-00170-cd90d64f-43c6-4582-aae6-fe443b6617f4.c000.json
test-parallel-write % ls letter=b
part-00161-68245d8b-b155-40ca-9b5c-d9fb746ac76c.c000.json part-00161-cd90d64f-43c6-4582-aae6-fe443b6617f4.c000.json
test-parallel-write % ls letter=c
part-00122-68245d8b-b155-40ca-9b5c-d9fb746ac76c.c000.json part-00122-cd90d64f-43c6-4582-aae6-fe443b6617f4.c000.json
You can also control the number of records written per file with this configuration.
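If the configuration referred to is maxRecordsPerFile (an assumption on my part; it might be a different setting), it can be applied per write or globally, for example, reusing the dataset from the example above:
// Cap the number of records per output file; the value is illustrative.
dataset.write
  .option("maxRecordsPerFile", 100000L)
  .partitionBy("letter").mode("append")
  .json("/tmp/test-parallel-write")

// or globally:
spark.conf.set("spark.sql.files.maxRecordsPerFile", 100000L)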
Edit: I didn't see the comment of @mazaneicha, but indeed you can try with repartition("partitioning column")! It's even clearer than the grouping expression.
Best,
Bartosz.
If you're not going to use Spark for anything other than to split the file into smaller versions of itself, then I would say Spark is a poor choice. You'd be better off doing this within AWS following an approach such as the one given in this Stack Overflow post
Assuming you have an EC2 instance available, you'd run something like this:
aws s3 cp s3://input_folder/12GB.json - | split -l 1000 - output.
aws s3 cp output.* s3://output_folder/
If you're looking to do some further processing of the data in Spark, you're going to want to repartition the data into chunks between 128 MB and 1 GB. With the default (snappy) compression you typically end up with 20% of the original file size, so in your case roughly 12 GB / 5 ≈ 2.4 GB, which means between ~3 partitions (of 1 GB) and ~20 partitions (of 128 MB), so:
data = spark.read.format("json").load(input_folder, schema=INS_SCHEMA)
dataPart = data.repartition(12)
This is not actually a particularly large dataset for Spark and should not be this cumbersome to deal with.
Saving as parquet gives you a good recovery point, and re-reading the data will be very fast. The total file size will be about 2.5 GB.

Spark: does DataFrameWriter have to be a blocking step?

I have data partitioned by a column (say, id) and I have this dataset saved some place. Every now and then, I get a smaller incremental dataset with the same structure and I essentially have to upsert my existing data based on my id with a date column deciding which record is the newest. (I don't write it in the same place, I save the whole new blob some place else.)
There are two ways I've been doing this: either grouping in a window and taking the row with the highest date, or via dropDuplicates, relying on the fact that my data is ordered. (I'd rather use the former, but I've been trying various things.)
The one big issue is that each id group is not negligible (a few gigabytes), so I was hoping Spark (with n workers) would understand that since I'm reading id-partitioned data and writing id-partitioned data, it would process n ids at a time and continually write them to my storage, taking up new ids as it finishes the previous ones.
Unfortunately, what seems to be happening, is that Spark processes all my id groups in one big job (and spills to disk, naturally) before writing anything to disk. It gets really really slow.
The question is thus: Is there a way to force Spark to process these groups and write them as soon as they're ready? Again, they are partitioned, so no other task will affect my partition.
Here's a bit of code that reproduces the problem:
# generate dummy data first
import random
from typing import List
from datetime import datetime, timedelta
from pyspark.sql.functions import desc, col, row_number
from pyspark.sql.window import Window
from pyspark.sql.dataframe import DataFrame
def gen_data(n: int) -> List[tuple]:
    names = 'foo, bar, baz, bak'.split(', ')
    return [(random.randint(1, 25), random.choice(names), datetime.today() - timedelta(days=random.randint(1, 100)))
            for j in range(n)]

def get_df(n: int) -> DataFrame:
    return spark.createDataFrame(gen_data(n), ['id', 'name', 'date'])
n = 10_000
df = get_df(n)
dd = get_df(n*10)
df.write.mode('overwrite').partitionBy('id').parquet('outputs/first')
dd.write.mode('overwrite').partitionBy('id').parquet('outputs/second')
d1 and d2 are both partitioned by id and so is the resulting dataset, but it's not reflected in the plan:
w = Window().partitionBy('id').orderBy(desc('date'))
d1 = spark.read.parquet('outputs/first')
d2 = spark.read.parquet('outputs/second')
d1.union(d2).\
withColumn('rn', row_number().over(w)).filter(col('rn') == 1).drop('rn').\
write.mode('overwrite').partitionBy('id').parquet('outputs/window')
I also tried to explicitly state the partition key (otherwise the code is the same):
d1 = spark.read.parquet('outputs/first').repartition('id')
d2 = spark.read.parquet('outputs/second').repartition('id')
d1.union(d2).\
withColumn('rn', row_number().over(w)).filter(col('rn') == 1).drop('rn').\
write.mode('overwrite').partitionBy('id').parquet('outputs/window2')
Here's the same using dropDuplicates:
d1 = spark.read.parquet('outputs/first')
d2 = spark.read.parquet('outputs/second')
d1.union(d2).\
dropDuplicates(subset=['id']).\
write.mode('overwrite').partitionBy('id').parquet('outputs/window3')
I also tried emphasising that my union is still partitioned using something like this, but again to no avail:
df.union(d2).repartition('id').\
    withColumn...
I could list all partitions (ids), load them one by one while leveraging partition pruning, deduplicating and writing. But that seems like extra boilerplate that shouldn't be necessary. Or is it possible to do this via foreach?
Update (2018-03-27):
Turns out, the information about partitioning is indeed present in the window functionality in one way or another, because when I filter at the very end, partition pruning on the inputs does take place:
d1 = spark.read.parquet('outputs/first')
d2 = spark.read.parquet('outputs/second')
w = Window().partitionBy('id', 'name').orderBy(desc('date'))
d1.union(d2).withColumn('rn', row_number().over(w)).filter(col('rn') == 1).filter(col('id') == 12).explain(True)
Results in
== Physical Plan ==
*(4) Filter (isnotnull(rn#387) && (rn#387 = 1))
+- Window [row_number() windowspecdefinition(id#187, name#185, date#186 DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS rn#387], [id#187, name#185], [date#186 DESC NULLS LAST]
+- *(3) Sort [id#187 ASC NULLS FIRST, name#185 ASC NULLS FIRST, date#186 DESC NULLS LAST], false, 0
+- Exchange hashpartitioning(id#187, name#185, 200)
+- Union
:- *(1) FileScan parquet [name#185,date#186,id#187] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/.../spark_perf_partitions/outputs..., PartitionCount: 1, PartitionFilters: [isnotnull(id#187), (id#187 = 12)], PushedFilters: [], ReadSchema: struct<name:string,date:timestamp>
+- *(2) FileScan parquet [name#191,date#192,id#193] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/.../spark_perf_partitions/outputs..., PartitionCount: 1, PartitionFilters: [isnotnull(id#193), (id#193 = 12)], PushedFilters: [], ReadSchema: struct<name:string,date:timestamp>
So it indeed only reads two partitions, one per each file. So I could, instead of looping, just run the code with one filter at a time (the filter being between the window function and .write). Tedious and not very practical, but potentially faster than spilling everything to disk.
Yes, this is exactly how Spark partitioning works. It computes the whole lineage and then writes in partitioned form to disk. There are several advantages to that; one of the important ones is parallel writes: when the computation is done, Spark can write all the partitions to disk in parallel, which significantly improves performance.
If you want to write as and when the data is ready, you might as well filter the dataframe by the different ids, run the computation in a loop, and write. However, in my experience this approach requires several iterations over the same dataframe, resulting in a huge performance loss.

Spark bucketing read performance

Spark version - 2.2.1.
I've created a bucketed table with 64 buckets. I'm executing an aggregation, select t1.ifa, count(*) from $tblName t1 where t1.date_ = '2018-01-01' group by ifa. I can see 64 tasks in the Spark UI, which utilize just 4 executors (each executor has 16 cores) out of 20. Is there a way I can scale out the number of tasks, or is that how bucketed queries should run (the number of running cores equal to the number of buckets)?
Here's the create table:
sql("""CREATE TABLE level_1 (
bundle string,
date_ date,
hour SMALLINT)
USING ORC
PARTITIONED BY (date_ , hour )
CLUSTERED BY (ifa)
SORTED BY (ifa)
INTO 64 BUCKETS
LOCATION 'XXX'""")
Here's the query:
sql(s"select t1.ifa,count(*) from $tblName t1 where t1.date_ = '2018-01-01' group by ifa").show
With bucketing, the number of tasks == number of buckets, so you should be aware of the number of cores/tasks that you need/want to use and then set it as the buckets number.
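As a quick check on the table from the question (a sketch; the filter just limits the scan to a single date partition), the scan's partition count should match the bucket count:
// A bucketed scan produces one task per bucket, so expect 64 here.
val scanned = spark.table("level_1").where("date_ = '2018-01-01'")
scanned.rdd.getNumPartitions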
num of tasks = num of buckets is probably the most important and under-discussed aspect of bucketing in Spark. Buckets (by default) are historically solely useful for creating "pre-shuffled" dataframes which can optimize large joins. When you read a bucketed table, all of the file or files for each bucket are read by a single Spark task (30 buckets = 30 Spark tasks when reading the data), which allows the table to be joined to another table bucketed on the same # of columns. I find this behavior annoying and, like the user above mentioned, problematic for tables that may grow.
You might be asking yourself now, why and when in the world would I ever want to bucket, and will my real-world data grow exactly in the same way over time? (You probably partitioned your big data by date, be honest.) In my experience you probably don't have a great use case for bucketing tables in the default Spark way. BUT ALL IS NOT LOST FOR BUCKETING!
Enter "bucket pruning". Bucket pruning only works when you bucket on ONE column, but it is potentially your greatest friend in Spark since the advent of SparkSQL and Dataframes. It allows Spark to determine which files in your table contain specific values based on some filter in your query, which can MASSIVELY reduce the number of files Spark physically reads, resulting in hugely efficient and fast queries. (I've taken 2+ hr queries down to 2 minutes and 1/100th of the Spark workers.) But you probably don't care, because the number-of-buckets-to-tasks issue means your table will never "scale up" if you have too many files per bucket, per partition.
Enter Spark 3.2.0. There is a new feature coming that will allow bucket pruning to stay active even when you disable bucket-based reading, allowing you to distribute the Spark reads while keeping the bucket pruning. I also have a trick for doing this with Spark < 3.2, as follows.
(Note: the leaf scan for files with vanilla spark.read on S3 adds overhead, but if your table is big it doesn't matter, because your bucket-optimized table will be read in a distributed way across all your available Spark workers and will now be scalable.)
val table = "ex_db.ex_tbl"
val target_partition = "2021-01-01"
val bucket_target = "valuex"
val bucket_col = "bucket_col"
val partition_col = "date"
import org.apache.spark.sql.functions.{col, lit}
import org.apache.spark.sql.execution.FileSourceScanExec
import org.apache.spark.sql.execution.datasources.{FileScanRDD,FilePartition}
val df = spark.table(table).where((col(partition_col)===lit(target_partition)) && (col(bucket_col)===lit(bucket_target)))
val sparkplan = df.queryExecution.executedPlan
val scan = sparkplan.collectFirst { case exec: FileSourceScanExec => exec }.get
val rdd = scan.inputRDDs.head.asInstanceOf[FileScanRDD]
val bucket_files = for {
  FilePartition(bucketId, files) <- rdd.filePartitions
  f <- files
} yield s"$f".replaceAll("path: ", "").split(",")(0)
val format = bucket_files(0).split("\\.").last
val result_df = spark.read.option("mergeSchema", "False").format(format).load(bucket_files:_*).where(col(bucket_col) === lit(bucket_target))

Will a persisted-dataframe be calculated repeatedly many times?

I've got the following structured query:
val A = 'load somedata from HDFS'.persist(StorageLevel.MEMORY_AND_DISK_SER)
val B = A.filter('condition 1')
val C = A.filter('condition 2')
val D = A.filter('condition 3')
val E = A.filter('condition 4')
val F = A.filter('condition 5')
val G = A.filter('condition 6')
val H = A.filter('condition 7')
val I = B.union(C).union(D).union(E).union(F).union(G).union(H)
I persist the dataframe A so that when I use B/C/D/E/F/G/H, the dataframe A should be calculated only once. But the DAG of this job is below:
From the DAG above, it seems that stages 6-12 are all executed, and the dataframe A is calculated 7 times?
Why would this happen?
Maybe the DAG is just fake? I found that there are no lines on top of stages 7-12, whereas stage 6 does have two lines coming from other stages.
I didn't list all the operations. After the union operation, I save the I dataframe to HDFS. Will this action on the I dataframe really make the persist operation happen? Or must I do an action such as count on the A dataframe to trigger the persist operation before reusing the A dataframe?
Doing the following line won't persist your dataset.
val A = 'load somedata from HDFS'.persist(StorageLevel.MEMORY_AND_DISK_SER)
Caching/persistence is lazy when used with the Dataset API, so you have to trigger the caching using the count operator or similar, which in turn submits a Spark job.
After that, all the following operators, filter included, should use InMemoryTableScan with the green dot in the plan (as shown below).
In your case, even after the union, the dataset I is not cached since you have not triggered the caching (but merely marked it for caching).
After union operation, I save the I dataframe to HDFS. Will this action on the I dataframe make the persist operation be done really?
Yes. Only actions (like saving to an external storage) can trigger the persistence for future reuse.
Or must I do an action operation such as count on the A dataframe to trigger the persist operation before reuse A dataframe?
That's the point! In your case, since you want to reuse the A dataframe across the filter operators, you should persist it first, then count (to trigger the caching), followed by the filters; see the sketch below.
As the code stands, no filter will benefit from any performance increase due to persist. That persist is practically void of any impact on performance and just makes a code reviewer think otherwise.
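A minimal sketch of that ordering, reusing the names from the question (the load path and filter conditions are placeholders, just like in the question):
import org.apache.spark.storage.StorageLevel

val A = spark.read.parquet("hdfs:///somedata").persist(StorageLevel.MEMORY_AND_DISK_SER)
A.count()                         // triggers the actual caching

val B = A.filter("condition 1")   // subsequent filters read from InMemoryTableScan
val C = A.filter("condition 2")
// ... D through H likewise ...
val I = B.union(C)                // union of all the filtered datasets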
If you want to see when and if your dataset is cached, you can check out Storage tab in web UI or ask CacheManager about it.
val nums = spark.range(5).cache
nums.count
scala> spark.sharedState.cacheManager.lookupCachedData(nums)
res0: Option[org.apache.spark.sql.execution.CachedData] =
Some(CachedData(Range (0, 5, step=1, splits=Some(8))
,InMemoryRelation [id#0L], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
+- *Range (0, 5, step=1, splits=8)
))

Resources