Why is Spark dataframe repartition faster than coalesce when reducing the number of partitions? - apache-spark

I have a df with 100 partitions, and before saving to HDFS as .parquet I want to reduce the number of partitions because the parquet files would be too small (<1MB).
I've added coalesce before writing:
df.coalesce(3).write.mode("append").parquet(OUTPUT_LOC)
It works but slows down the process from 2-3s per file to 10-20s per file.
When I try repartition:
df.repartition(3).write.mode("append").parquet(OUTPUT_LOC)
The process does not slow down at all, 2-3s per file.
Why? Shouldn't coalesce always be faster when reducing the number of partitions because it avoids a full shuffle?
Background:
I'm importing files from local storage into the Spark cluster and saving the resulting dataframes as parquet files. Each file is approx. 100-200 MB.
The files are located on the "spark-driver" machine, and I'm running spark-submit in client deploy mode.
I'm reading files one by one in driver:
data = read_lines(file_name)
rdd = sc.parallelize(data,100)
rdd2 = rdd.flatMap(lambda j: myfunc(j))
df = rdd2.toDF(mySchema)
df.repartition(3).write.mode("append").parquet(OUTPUT_LOC)
The Spark version is 3.1.1.
The Spark/HDFS cluster has 5 workers with 8 CPUs and 32 GB RAM each.
Each executor has 4 cores and 15 GB RAM, which makes 10 executors in total.
EDIT:
When I use coalesce(1) I get a "spark.rpc.message.maxSize limit breached" error, but not when I use repartition(1). Could that be a clue?
Attaching the DAG visualizations... It looks like the WholeStageCodegen part is taking too long in the coalesce DAGs?

This can happen when your data is not evenly distributed. coalesce reduces the number of partitions by merging existing small partitions, which avoids a full shuffle, but the merge can still leave significant skew: one of the resulting partitions may hold most of the data, and the single task processing it ends up taking most of the time.
With repartition, the data is fully shuffled and spread almost evenly across all partitions, so all the tasks finish at roughly the same time.
You can use the Spark UI to see what is happening at the task level while the coalesce job runs, and check whether a single task is running much longer than the rest.
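As a quick sanity check outside the UI, a minimal PySpark sketch like the one below (assuming df is the dataframe from the question) prints how many rows end up in each partition after coalesce versus repartition, which makes any skew obvious:

# Sketch: compare per-partition row counts; `df` is assumed to be the dataframe from the question.
coalesced = df.coalesce(3)
repartitioned = df.repartition(3)
# glom() turns each partition into a list, so map(len) yields the row count per partition.
print("coalesce(3):   ", coalesced.rdd.glom().map(len).collect())
print("repartition(3):", repartitioned.rdd.glom().map(len).collect())

If the coalesce counts are very uneven, the long-running task in the UI should correspond to the largest partition.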

Related

Is Spark partition size equal to the HDFS block size or does it depend on the number of cores available on all executors?

I am reading about Spark partitioning and I see different answers to this question.
Is the Spark partition size equal to the HDFS block size, or does it depend on the number of cores available on all executors? And does performance improve by repartitioning the data in the skewed-data case? (I assume the data for the same join key is shuffled back to a single executor during the join.) Please help me understand this. Thanks!
It really depends on where you are reading your data from. If you are reading from HDFS, then one block will be one partition. But if you are reading a parquet file, then one parquet file is one partition since it is not splittable, so the number of partitions is driven by the block count in the HDFS case and by the file count in the parquet case.
Regarding skewed data: the more data one partition has, the longer it takes to finish its execution. The other tasks finish quickly because they have less data, so the resources are not utilized properly. It is therefore always better to repartition skewed data properly, so that all executors share the work evenly.
You can look here at all the available RDDs and how they create their partitions:
https://github.com/apache/spark/tree/master/core/src/main/scala/org/apache/spark/rdd
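As a quick way to see this in practice, here is a hedged PySpark sketch (the path is a placeholder) that checks the partition count you get from a read and after an explicit repartition:

# Sketch: inspect partitioning after a read and after a full-shuffle repartition (path is hypothetical).
df = spark.read.parquet("hdfs:///data/events")
print("partitions after read:", df.rdd.getNumPartitions())
balanced = df.repartition(200)   # full shuffle, produces roughly even partitions
print("partitions after repartition:", balanced.rdd.getNumPartitions())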

Breaking lineage of an RDD without relying on HDFS

I'm running a spark application on Amazon spot instances. In the end, I'm exporting my results to parquet files on S3. The tasks are memory intensive, so I have to run the initial calculations using a large number of partitions (hundreds of thousands). In the end, I would like to coalesce the partitions to a few large partitions and save them to big parquet files. And this is where I get into trouble:
- If I'm using .coalesce(), which is a narrow transformation, the entire lineage that precedes the coalesce will be executed on a small number of partitions, which will cause OOMs.
- If I'm using .repartition(), I rely on HDFS for the shuffle files.
This is a problem when using spot instances, which may be decommissioned, leaving corrupt/missing HDFS blocks.
- checkpointing also relies on HDFS so I can't use that.
- converting to a Dataframe and back didn't actually break the lineage (rdd.toDF.rdd, am I missing something?).
To conclude, I'm looking for a way to coalesce to a smaller number of partitions only when persisting the data to S3 - I would like the calculation itself to run on the original partitions.
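For reference, a small PySpark sketch (with made-up sizes, assuming sc is the SparkContext) shows the structural difference: coalesce keeps the upstream work in the same stage, so it runs on the reduced partition count, while repartition inserts a shuffle boundary so the upstream work keeps its original partitions:

# Sketch with hypothetical sizes: compare the lineage of coalesce vs repartition.
rdd = sc.parallelize(range(1_000_000), 1000).map(lambda x: x * 2)
print(rdd.coalesce(10).toDebugString().decode())     # no ShuffledRDD: the map runs in only 10 tasks
print(rdd.repartition(10).toDebugString().decode())  # ShuffledRDD appears: the map keeps 1000 tasks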

Spark behavior on native file system

We are experimenting with running Spark in our project without Hadoop and without distributed storage such as HDFS. Spark is installed on a single node with 10 cores and 16 GB RAM, and this node is not part of any cluster. Assume the Spark driver takes 2 cores and the rest are consumed by executors (2 each) at the time of execution.
If we process a big CSV file (1 GB in size) stored on local disk in Spark as an RDD and repartition it into 4 different partitions, will the executors process each partition in parallel?
What would the executors do if we didn't repartition the RDD into 4 different partitions?
Do we lose the power of distributed computing and parallelism if we don't use HDFS?
Spark caps the maximum size of a partition at 2G, so you should be able to process all the data with minimal partitioning and a quicker processing time. You can set spark.executor.cores to 8 so as to utilize all your resources.
Ideally, you should set the number of partitions based on the size of your data, and you are better off making it a multiple of your cores/executors.
To answer your question, setting the number of partitions to 4 in your case will probably result in each partition being sent to an executor. So yes, each partition will be processed in parallel.
If you don't repartition, then Spark will decide the partitioning for you based on the data and split the load between the executors.
Spark works perfectly fine without Hadoop. You might see a negligible performance drop since your files are on the local filesystem and not on HDFS, but for a file of size 1GB it really doesn't matter.
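As a hedged sketch of the setup described (file path and options are placeholders), the read-and-repartition flow and a check of the resulting parallelism could look like this:

# Sketch: read a local CSV and repartition it to 4 partitions (path is hypothetical).
df = spark.read.option("header", "true").csv("file:///data/big_file.csv")
print("partitions from read:", df.rdd.getNumPartitions())
df4 = df.repartition(4)          # 4 partitions -> 4 tasks, processed in parallel by the executors
print("partitions after repartition:", df4.rdd.getNumPartitions())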

Why does Spark create fewer partitions than the number of files when reading from S3?

I'm using Spark 2.3.1.
I have a job that reads 5,000 small parquet files from S3.
When I do a mapPartitions followed by a collect, only 278 tasks are used (I would have expected 5,000). Why?
Spark is grouping multiple files into each partition due to their small size. You should see as much when you print out the partitions.
Example (Scala):
val df = spark.read.parquet("/path/to/files")
df.rdd.partitions.foreach(println)
If you want to use 5,000 tasks, you could apply a repartition transformation.
Quote from the docs about repartition:
Reshuffle the data in the RDD randomly to create either more or fewer
partitions and balance it across them. This always shuffles all data
over the network.
I recommend you take a look at the RDD Programming Guide. Remember that shuffle is an expensive operation.
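For completeness, a PySpark sketch of the suggested fix (the bucket path is hypothetical) would be:

# Sketch: force the desired parallelism with a repartition before the per-partition work.
df = spark.read.parquet("s3a://bucket/small-files/")     # hypothetical path
print(df.rdd.getNumPartitions())                         # grouped: fewer partitions than files
print(df.repartition(5000).rdd.getNumPartitions())       # 5000, at the cost of a full shuffle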

Why is a join in Spark local mode so slow?

I am using Spark in local mode and a simple join is taking too long. I have fetched two dataframes: A (8 columns, 2.3 million rows) and B (8 columns, 1.2 million rows), joined them using A.join(B, condition, 'left'), and called an action at the end. It creates a single job with three stages: one for extracting each dataframe and one for the join. Surprisingly, the stage extracting dataframe A takes around 8 minutes, the one for dataframe B takes 1 minute, and the join itself happens within seconds. My important configuration settings are:
spark.master local[*]
spark.driver.cores 8
spark.executor.memory 30g
spark.driver.memory 30g
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.sql.shuffle.partitions 16
The only executor is the driver itself. While extracting the dataframes, I partitioned them into 32 parts (I also tried 16, 64, 50, 100, and 200). I have seen about 100 MB of shuffle write for the stage that extracts dataframe A. So, to avoid the shuffle, I created 16 initial partitions for both dataframes and broadcasted dataframe B (the smaller one), but it is not helping; there is still shuffle write. I used the broadcast(B) syntax for this. Am I doing something wrong? Why is there still shuffling? Also, the event timeline shows only four cores processing at any point in time, although I have a 2-core x 4-processor machine. Why is that?
In short, "Join"<=>Shuffling, the big question here is how uniformly are your data distributed over partitions (see for example https://0x0fff.com/spark-architecture-shuffle/ , https://www.slideshare.net/SparkSummit/handling-data-skew-adaptively-in-spark-using-dynamic-repartitioning and just Google the problem).
Few possibilities to improve efficiency:
think more about your data (A and B) and partition data wisely;
analyze, are your data skewed?;
go into UI and look at the tasks timing;
choose such keys for partitions that during "join" only few partitions from dataset A shuffle with few partitions of B;
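Since the question already mentions broadcasting the smaller dataframe, here is a minimal PySpark sketch of that route (A, B, and the 'left' join come from the question; the join condition on "id" is a hypothetical placeholder). With a broadcast hash join, B is shipped to every executor, so the large dataframe A does not need to be shuffled for the join itself:

from pyspark.sql.functions import broadcast

# Sketch: broadcast the smaller dataframe B; the join key "id" is hypothetical.
joined = A.join(broadcast(B), A["id"] == B["id"], "left")
joined.write.mode("overwrite").parquet("/tmp/joined")    # hypothetical output path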
