Spark generates different execution plans for reading Orc table - apache-spark

We have a PySpark script that basically joins 3 large tables and aggregates the results. When running the script from a Jupyter notebook, everything is fine and fast - it completes without errors in a reasonable amount of time.
Then we try to launch the script as part of our pipeline (it runs in Airflow, but I don't think that matters), and it fails most of the time. We see a lot of lost executors and retries, and it usually fails near the end, although a couple of times it did succeed.
I've noticed in the Spark UI that the amount of data the 'pipeline' script reads in one of the stages is larger than what the 'notebook' one reads (something like 700GB instead of 500GB), and that it writes more shuffle data. That surprised me, as the scripts are absolutely the same (loaded from Git) and they use the same tables and the same dates. I've checked the execution plans and found only one difference - for one of the tables, the 'notebook' script generates a 'Scan hive' node, whereas the 'pipeline' one generates 'FileScan orc'. The other 2 tables are accessed in the same way by both environments - one with 'Scan hive' and the other with 'FileScan orc'. That also puzzles me a bit, but as long as the 'notebook' version runs fine, I don't mind.
The 'pipeline' environment is more powerful than the 'notebook' one: it has more executors with more memory. They run in the same Kubernetes cluster. The Spark version is 3.0.1 for both of them. All the tables are read using spark.read(table_name). The tables are in ORC format and are stored in HDFS on-prem.
I'm not sure if this is the source of the problem, but apparently there are some differences in configuration between those environments, so I would like to know why the operation is different. I would be grateful for any clue on how Spark decides which method to use.
We tried increasing the executor memory, increasing the number of executors, playing with the number of partitions, setting spark.sql.hive.convertMetastoreOrc, and so on - nothing helps; it still fails most of the time.
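For what it's worth, a minimal sketch of how one might compare the setting that usually decides between the two scan nodes: spark.sql.hive.convertMetastoreOrc controls whether Spark reads a metastore ORC table with its native reader (shown as 'FileScan orc') or through the Hive SerDe (shown as 'Scan hive'). The table name below is a placeholder.

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Compare the effective values in the 'notebook' and 'pipeline' sessions.
for key in ("spark.sql.hive.convertMetastoreOrc", "spark.sql.orc.impl"):
    print(key, "=", spark.conf.get(key, "<not set>"))

# Pin the behaviour so both environments produce the same scan node.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")    # native reader -> 'FileScan orc'
# spark.conf.set("spark.sql.hive.convertMetastoreOrc", "false")  # Hive SerDe -> 'Scan hive'

df = spark.table("some_orc_table")   # placeholder table name
df.explain()                         # confirm which scan node appears in the physical plan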

Related

HBase batch loading with speed control because of a slow consumer

We need to load a large amount of data from HBase using Spark.
Then we put it into Kafka, where it is read by a consumer. But the consumer is too slow,
and at the same time Kafka does not have enough memory to keep the whole scan result.
Our row key contains ...yyyy.MM.dd, and we currently load 30 days in one Spark job using a filter operator.
But we can't split the job into many jobs (30 jobs, each filtering one day), because then each job would have to scan all of HBase, which would make the overall scan too slow.
We currently launch the Spark job with 100 threads, but we can't slow it down by using fewer threads (for example 7), because Kafka is used by third-party developers and is sometimes too busy to accept any data. So we need to control the HBase scan speed, constantly checking whether Kafka has memory available to store our data.
We tried saving the scan result somewhere before loading it into Kafka, for example as ORC files in HDFS, but the scan produces many little files, it is a problem to group them by size (or is there a way? if you know one, please tell me how), and storing lots of little files in HDFS is bad. Merging such files is a very expensive operation and takes so much time that it makes the total time too slow.
Suggested solutions:
Maybe it is possible to store the scan result in HDFS with Spark, by setting some special flag in the filter operator, and then run 30 Spark jobs that select data from the saved result and push each result to Kafka when possible (a rough sketch of this idea follows the question).
Maybe there is some existing mechanism in Spark to pause and resume launched jobs.
Maybe there is some existing mechanism in Spark to split the result into batches (without the ability to pause and resume loading).
Maybe there is some existing mechanism in Spark to split the result into batches (with the ability to pause and resume loading based on an external condition).
Maybe when Kafka throws an exception (saying there is no room to store data), there is some backpressure mechanism in Spark that will pause the scan for a while when exceptions appear during execution (but I guess the number of retries of the operator is limited - is it possible to make it retry forever, if this is a real solution?). It would be better, though, to keep some free space in Kafka and not wait until it is overloaded.
Should we use PageFilter in HBase (though I guess it is hard to implement), or are there other solution variants? I also guess there are too many objects in memory to use PageFilter.
P.S
This https://github.com/hortonworks-spark/shc/issues/108 will not help; we already use a filter.
Any ideas would be helpful
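A rough sketch of the first suggestion (staging the scan result in HDFS partitioned by day, then pushing one day at a time to Kafka when there is room). scan_df, the day column, the staging path, the key/value columns, and the Kafka broker/topic are all hypothetical placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stage 1: persist the full scan once, grouped by day, so each day ends up as a few files
# instead of thousands of tiny ones. scan_df is assumed to be the DataFrame built from the
# HBase scan, with a 'day' column derived from the ...yyyy.MM.dd part of the key.
(scan_df
 .repartition("day")
 .write
 .partitionBy("day")
 .mode("overwrite")
 .orc("hdfs:///staging/hbase_scan"))                    # hypothetical staging path

# Stage 2: later, and only when the Kafka side has capacity, load a single day and push it.
day_df = spark.read.orc("hdfs:///staging/hbase_scan").where("day = '2020.01.15'")
(day_df
 .selectExpr("CAST(rowkey AS STRING) AS key", "CAST(value AS STRING) AS value")
 .write
 .format("kafka")                                       # needs the spark-sql-kafka package
 .option("kafka.bootstrap.servers", "broker:9092")      # hypothetical brokers
 .option("topic", "scan-output")                        # hypothetical topic
 .save())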

Apache Spark: is it possible to get dataset counts in a spark job?

Sometimes a Spark job running in our cluster takes too long not because of bad optimization, but because of flawed logic in the algorithm. In most cases this is a consequence of unnecessary joins that produce too many rows. Normally we spot such jobs by looking at the Spark execution plan, where we can find such joins by the "number of output rows: xxx" shown in the blue stage labels.
I want to understand - is it possible to optimize this procedure and somehow automatically notify the programmer that the job has too many rows in some dataset (after execution)?
Maybe we can print this in logs (without manually counting dataset's size in code)?
Maybe after running the job we can get the output of the execution plan somehow and save it for further investigations?
No, it's not an option. Spark will do its best to optimize the query plan, so manual interaction with the lower execution level is quite limited. However, you can "control" the number of rows per job/task by changing some configurations (like spark.sql.shuffle.partitions or spark.sql.files.maxPartitionBytes), or by repartitioning the data, which will shuffle it and redistribute it nearly equally between the executors.
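For the question's last point (saving the execution plan for later investigation), a minimal sketch; it relies on the internal _jdf/queryExecution accessor, so treat it as an unofficial workaround, and the paths are hypothetical. The runtime "number of output rows" metrics themselves remain visible in the Spark UI.

# df is assumed to be the final DataFrame of the job; run the action first
df.write.parquet("hdfs:///output/result")              # hypothetical output path

# Dump the parsed/analyzed/optimized/physical plans to a text file for later review.
plan_text = df._jdf.queryExecution().toString()        # internal API, may change between versions
with open("/tmp/last_job_plan.txt", "w") as f:         # hypothetical local path on the driver
    f.write(plan_text)

# Alternatively, print the formatted physical plan to stdout (Spark 3.0+).
df.explain(mode="formatted")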

Dilemma about Spark partitions

I am working on a project where I have to read S3 files (each about 3MB, zipped) using boto3. I have a small pyspark script that runs every hour to process the file and generate 2 types of output data, which are written back to S3. The pyspark script uses the 'xmltodict' python library to read some static data into a dictionary object needed for file processing. I have a small Amazon EMR cluster v5.28 running with 1 Master and 1 Core. This might be excessive but is not my main concern right now.
Questions:
1. How do I know 'IF' I should partition the data? I have read articles on how many partitions to create, etc., but couldn't find anything on IF and WHEN. What are the criteria that drive partitioning - the number of rows, columns, data types, actions taken in the script, etc. in the source data file? I read the source file into an RDD, convert it to a DF, and perform various operations by adding columns, grouping data, counting data, etc. How does Spark handle partitioning behind the scenes?
2. Currently, I manually execute the pyspark script as follows:
spark-submit --master spark://x.x.x.x:7077 --deploy-mode client test.py
on the master node, as I have decided to stick with the Standalone cluster manager. The 'xmltodict' library is installed on this node, but is not installed on the Core node. It doesn't seem like it needs to be installed, or even python3 configured, on the Core node, since I am not seeing any errors. Is that correct, and can somebody shed some light on this confusion? I tried to install the python libraries via a shell file as a bootstrap action when I created the cluster, but it failed, and quite frankly after trying it a few times, I gave up.
3. Related to partitioning, I think I am slightly confused about whether or not to use coalesce() or collect(). Again, the question is when to use them and when not to?
Sorry, too many questions. Now that I have the pyspark script written, I am trying to work on its efficiency.
Thanks
Partitioning is the mechanism by which data is divided into optimally sized chunks, and based on that multiple tasks are run, each processing one piece of data. As you can see, this is the core of parallelism, and without it there is no significant benefit to Spark (or any big data processing framework). Most file formats are splittable, and some remain splittable even when compressed, like Avro, Parquet, ORC, etc. Some file formats are not splittable when compressed, like zip, gzip, etc. Based on the size of the files being processed and their ability to be split, Spark automatically creates multiple partitions and processes the data in parallel. In your case, since the data is zipped, one file will be one partition, and no more than 1 CPU can work on it at a time. If this zip is small then it's OK, but if it is big then its processing will be slow.
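As a small illustration of the point above, a sketch that checks how many partitions a DataFrame actually has and re-splits it before the heavy work; the DataFrame and the target partition count are stand-ins:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Stand-in for the DataFrame built from the parsed XML; in the real script this would be
# the result of the RDD -> DF conversion described in the question.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

# A single non-splittable (zipped) input typically ends up as a single partition.
print("partitions before:", df.rdd.getNumPartitions())

# Spread the rows out before grouping/counting so multiple cores can participate.
df = df.repartition(8)        # hypothetical target, roughly 2-3x the available cores
print("partitions after:", df.rdd.getNumPartitions())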

spark repartition / executor inconsistencies commandline vs jupyter

I wasn't really sure what to title this question -- happy for a suggested better summary
I'm beating my head trying to figure out why a dead simple spark job works fine from Jupyter, but from the command line is left with insufficient executors to progress.
What I'm trying to do: I have a large amount of data (<1TB) from which I need to extract a small amount of data (~1GB) and save as parquet.
Problem I have: when my dead-simple code is run from the command line, I only get as many executors as I have final partitions, which is ideally one given the output is small. The exact same code works just fine in Jupyter, on the same cluster, where it spreads >10k tasks across my entire cluster. The command-line version never progresses. Since it doesn't produce any logs beyond reporting lack of progress, I'm not sure where else to dig.
I have tried both python3 mycode.py and spark-submit mycode.py with lots of variations to no avail. My cluster has dynamicAllocation configured.
import findspark
findspark.init('/usr/lib/spark/')
from pyspark.sql import SparkSession
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
data = spark.read.parquet(<datapath>).select(<fields>)
subset = [<list of items>]
spark.sparkContext.broadcast(subset)
data.filter(data[<field>].isin(subset)).coalesce(1).write.parquet("output")
** edit: original version mistakenly had repartition(1) instead of coalesce.
In this case, run from the command line, my process will get one executor.
In my logs, the only real hint I get is
WARN TaskSetManager: Stage 1 contains a task of very large size (330 KB). The maximum recommended task size is 100 KB.
which makes sense given the lack of resources being allocated.
I have tried to manually force the number of executors using spark-submit runtime settings. In that case, it will start with my initial settings and then immediately start bringing them down until there is only one and nothing progresses.
Any ideas? thanks.
I ended up phoning a friend on this one...
the code that ran fine in JupyterHub, but not via the command line, was essentially:
read parquet,
filter on some small field,
coalesce(1)
write parquet
I had assumed that coalesce(1) and repartition(1) should have the same results -- even though coalesce(N) and repartition(N) do not -- given that they all go to one partition.
According to my friend, coalesce(1) avoids a shuffle, so Spark can collapse the whole upstream read and filter into a single task, which was the behavior I saw. By changing it to repartition(1), which does insert a shuffle, everything works fine.
I still have no idea why it works fine in JupyterHub -- having done >20 experiments -- and never on the command line -- also >20 experiments.
But if you want to shrink your data lake into a data puddle this way, use repartition(1), or repartition(n) where n is small, instead of coalesce.
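A minimal sketch of the difference, reusing the placeholders from the code in the question:

# coalesce(1): no shuffle is inserted, so Spark collapses the whole read + filter
# into a single task working on one partition.
data.filter(data[<field>].isin(subset)).coalesce(1).write.parquet("output_coalesce")

# repartition(1): a shuffle is inserted after the filter, so the read and filter still
# run in parallel and only the (small) filtered result is moved to one partition.
data.filter(data[<field>].isin(subset)).repartition(1).write.parquet("output_repartition")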

Spark task duration difference

I'm running an application that loads data (.csv) from S3 into DataFrames and then registers those DataFrames as temp tables. After that, I use Spark SQL to join those tables and finally write the result into a DB. The issue that is currently a bottleneck for me is that the tasks do not seem to be evenly split, so I get no benefit from parallelization across the multiple nodes in the cluster. More precisely, this is the distribution of task duration in the problematic stage:
(screenshot: task duration distribution)
Is there a way for me to enforce a more balanced distribution? Maybe by manually writing map/reduce functions?
Unfortunately, this stage has 6 more tasks that are still running (1.7 hours at the moment), which will show an even greater deviation.
There are two likely possibilities: one is under your control, and unfortunately the other likely is not.
Skewed data. Check that the partitions are of relatively similar size - say within a factor of three or four (a quick way to check is sketched after this answer).
Inherent variability of Spark task runtimes. I have seen large, unexplained delays from straggler tasks on Spark Standalone, YARN, and Mesos. The symptoms are:
extended periods (minutes) with little or no CPU or disk activity on the nodes hosting the straggler tasks
no apparent correlation of data size to the stragglers
different nodes/workers may experience the delays on subsequent runs of the same job
One thing to check: run hdfs dfsadmin -report and hdfs fsck to see whether HDFS is healthy.
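For the skew check in the first point, a small sketch that counts how many rows each partition of the joined DataFrame holds; joined_df is a placeholder for the result of the Spark SQL join:

from pyspark.sql.functions import spark_partition_id

# Count rows per partition; heavily uneven counts (beyond a factor of ~3-4 between
# partitions) point to skew in the join keys.
(joined_df
 .groupBy(spark_partition_id().alias("partition_id"))
 .count()
 .orderBy("count", ascending=False)
 .show(20))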
