How does Hive on Spark determine the reducer number? - apache-spark

I enabled Hive on Spark according to the Cloudera documentation (1 and 2). I now find that the reducer number behaves unexpectedly. I wish someone could provide detailed documentation or an explanation of how it is determined.
As far as I know, Hive on MR calculates the reducer number from the data volume and hive.exec.reducers.bytes.per.reducer, i.e. the number of bytes each reducer processes, so job parallelism adjusts itself automatically. But Hive on Spark seems to treat this parameter differently. Although setting it to a very low value (<1K) does increase the reducer number, no common rule seems to apply across different jobs.
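For reference, a rough sketch of the MR-side heuristic as I understand it (the numbers are made up, and caps such as hive.exec.reducers.max are ignored):
import math
total_input_bytes = 10 * 1024**3           # e.g. a 10 GB input
bytes_per_reducer = 256 * 1024**2          # hive.exec.reducers.bytes.per.reducer, e.g. 256 MB
num_reducers = max(1, math.ceil(total_input_bytes / bytes_per_reducer))  # 40 reducers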
Below is the relevant segment from the Cloudera tuning documentation on parallelism:
Adjust hive.exec.reducers.bytes.per.reducer to control how much data each reducer processes, and Hive determines an optimal number of partitions, based on the available executors, executor memory settings, the value you set for the property, and other factors. Experiments show that Spark is less sensitive than MapReduce to the value you specify for hive.exec.reducers.bytes.per.reducer, as long as enough tasks are generated to keep all available executors busy.
Also, I understand that Spark RDDs spill data to disk when memory is not sufficient. If so, the following error message from Hive on Spark jobs really confuses me:
Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 146, fuxi-luoge-105, executor 34): ExecutorLostFailure (executor 34 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 6.2 GB of 6.0 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
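For context on the numbers in that message, my understanding is that the YARN container limit is roughly spark.executor.memory plus spark.yarn.executor.memoryOverhead, and the overhead defaults to about 10% of the executor memory with a 384 MB floor. A hypothetical calculation that lines up with the log above:
executor_memory_gb = 5.5                               # assumed spark.executor.memory
overhead_gb = max(0.384, 0.10 * executor_memory_gb)    # default overhead, ~0.55 GB
container_limit_gb = executor_memory_gb + overhead_gb  # ~6.0 GB, the limit quoted in the log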

Related

Databricks Checksum error while writing to a file

I am running a job on 9 nodes.
All of them write some information to files, doing simple writes like the one below:
dfLogging.coalesce(1).write.format('delta').mode('append').save('/dbfs/' + loggingLocation)
However, I am receiving this exception:
py4j.protocol.Py4JJavaError: An error occurred while calling o106.save.
: java.util.concurrent.ExecutionException: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 1 in stage 14.0 failed 1 times, most recent failure: Lost task 1.0 in stage 14.0 (TID 259, localhost, executor driver):
org.apache.hadoop.fs.ChecksumException: Checksum error: file:/dbfs/delta/Logging/_delta_log/00000000000000000063.json at 0
exp: 1179219224 got: -1020415797
It looks to me that, because of concurrency, Spark is somehow failing and generating checksum errors.
Is there any known scenario that may be causing this?
There are a couple of things going on here, and they should explain why coalesce may not work.
What coalesce does is essentially combine the partitions on each worker. For example, if you have three workers, you can perform coalesce(3), which consolidates the partitions on each worker.
What repartition does is shuffle the data to increase or decrease the total number of partitions. In your case, if you have more than one worker and need a single output, you would have to use repartition(1), since you want the data on a single partition before writing it out.
Why would coalesce not work?
Spark limits shuffling during coalesce. You cannot perform a full shuffle (across different workers) when you are using coalesce, whereas you can when you are using repartition, although a full shuffle is an expensive operation.
Here is the code that would work:
dfLogging.repartition(1).write.format('delta').mode('append').save('/dbfs/' + loggingLocation)
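As a hedged illustration (reusing the dfLogging DataFrame from the question), comparing the physical plans shows the difference: repartition(1) inserts a shuffle (Exchange) stage, while coalesce(1) only narrows the existing partitions without a full shuffle.
dfLogging.coalesce(1).explain()     # no Exchange node in the plan
dfLogging.repartition(1).explain()  # plan contains an Exchange (full shuffle) into one partition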

Spark Sql Job optimization

I have a job which consists of around 9 SQL statements that pull data from Hive and write it back to a Hive DB. It currently runs for 3 hours, which seems too long considering Spark's ability to process data. The application launches 11 stages in total.
I did some analysis using the Spark UI and found the following grey areas which can be improved:
Stage 8 in Job 5 has shuffle output of 1.5 TB.
The time gap between job 4 and job 5 is 20 minutes. I read about this time gap and found that Spark performs I/O outside of the Spark jobs, which shows up as a gap between two jobs and can be seen in the driver logs.
We have a cluster of 800 nodes with restricted resources for each queue, and I am using the configuration below to submit the job:
--num-executors 200
--executor-cores 1
--executor-memory 6G
--deploy-mode client
I am attaching an image of the UI as well.
Now my questions are:
Where can I find driver log for this job?
In the image, I see a long list of "Executor added" events which together sum to more than 200, but in the Executors tab the number is exactly 200. Any explanation for this?
Out of all the stages, only one stage has around 35,000 tasks; the rest have only 200 tasks. Should I increase the number of executors, or should I use Spark's dynamic allocation feature?
Below are some thoughts that may guide you to some extent (a configuration sketch follows this list):
Is it necessary to have one core per executor? You can have more cores per executor; it is a trade-off between creating slim and fat executors.
Configure the shuffle partition parameter spark.sql.shuffle.partitions.
Ensure that when reading data from Hive you use a SparkSession (essentially a HiveContext). This pulls the data into Spark memory from HDFS and the schema information from the Hive Metastore.
Yes, dynamic allocation of resources is a feature that helps allocate the right set of resources. It is better than a fixed allocation.
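A minimal sketch of how these suggestions might be combined; the property values are illustrative assumptions, not recommendations, and need tuning for your queue and data sizes:
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-etl")                                # hypothetical app name
         .config("spark.sql.shuffle.partitions", "400")      # shuffle partition count to tune
         .config("spark.dynamicAllocation.enabled", "true")  # let Spark scale executors
         .config("spark.shuffle.service.enabled", "true")    # needed for dynamic allocation on YARN
         .enableHiveSupport()                                # Hive tables + Metastore schemas
         .getOrCreate())

df = spark.sql("SELECT * FROM some_db.some_table")           # hypothetical Hive table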

Spark 2.x - Shuffle on "small" data crashes "big" executors

My (Py)Spark 2.1.1 app consists of two executors with 5 cores and a 30G heap (spark.executor.memory) each. I have 3.2 GB of data persisted in memory (deserialized), spread over a dozen partitions and shared between my two executors (1.9 GB + 1.3 GB). I then want to repartition this data by calling repartition('myCol') on my persisted DataFrame, with myCol having only three keys in a 60-20-20 distribution. I then want to write the repartitioned data to (3) .parquet files. As expected, this transformation triggers a full shuffle of the data:
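For reference, a minimal sketch of the pipeline described above (df is a placeholder for my persisted DataFrame, myCol is the column from the question, and the output path is hypothetical):
df_cached = df.persist()                                         # ~3.2 GB deserialized across ~12 partitions
df_cached.repartition('myCol').write.parquet('/path/to/output')  # hypothetical output path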
First question: in the Spark UI, Shuffle Write amounts to 5.9 GB. Why is this amount much higher than the size of the persisted data? Is it the format Spark uses to write shuffle files to disk (text strings?)? Replication?
Second question: my executors keep dying with error messages such as org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle or ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 32.0 GB of 32 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. spark.yarn.executor.memoryOverhead is already set to 2g, but I must confess I don't really understand how this parameter should help in this context. The main question, though, is: how can shuffling 3 GB of data OOM a 30 GB executor?
I changed a few parameters based on the understanding I have of Spark (with limited success, obviously): I set spark.memory.fraction to 0.9 and spark.memory.storageFraction to 0.0.
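For context, these are the settings involved, shown here as they would be passed at submit time (the values are simply the ones mentioned above, not recommendations; the overhead is expressed in MB):
--conf spark.executor.memory=30g
--conf spark.yarn.executor.memoryOverhead=2048
--conf spark.memory.fraction=0.9
--conf spark.memory.storageFraction=0.0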
Many thanks in advance for any help; this situation is really frustrating.
PS: Maybe once the issue is solved I can redesign my app with less memory per executor. It currently feels like a terrible waste of resources to me.

Max Executor failures in Spark dynamic allocation

I am using the dynamic allocation feature of Spark to run my Spark job. It allocates around 50-100 executors. For some reason a few executors are lost, which results in the job shutting down. The log shows that this happened because the maximum number of executor failures was reached, which is set to 3 by default. So when 3 executors are lost, the job gets killed even if 40-50 other executors are still running.
I know that I can raise the max executor failure limit, but that seems like a workaround. Is there something else I can try? All suggestions are welcome.
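For reference, the limit I am referring to is spark.yarn.max.executor.failures, which (if I did go the workaround route) could be raised at submit time with something like the illustrative value below:
--conf spark.yarn.max.executor.failures=30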

SPARK: YARN kills containers for exceeding memory limits

We're currently encountering an issue where Spark jobs are seeing a number of containers being killed for exceeding memory limits when running on YARN.
16/11/18 17:58:52 WARN TaskSetManager: Lost task 53.0 in stage 49.0 (TID 32715, XXXXXXXXXX):
ExecutorLostFailure (executor 23 exited caused by one of the running tasks)
Reason: Container killed by YARN for exceeding memory limits. 12.4 GB of 12 GB physical memory used.
Consider boosting spark.yarn.executor.memoryOverhead.
The following arguments are being passed via spark-submit:
--executor-memory=6G
--driver-memory=4G
--conf "spark.yarn.executor.memoryOverhead=6G"`
I am using Spark 2.0.1.
We have increased the memoryOverhead to this value after reading several posts about YARN killing containers (e.g. How to avoid Spark executor from getting lost and yarn container killing it due to memory limit?).
Given my parameters and the log message, it does seem that "YARN kills executors when their memory usage is larger than (executor-memory + executor.memoryOverhead)".
It is not practical to keep increasing this overhead in the hope that we eventually find a value at which these errors stop occurring. We are seeing this issue on several different jobs. I would appreciate any suggestions as to which parameters I should change, what I should check, and where I should start looking to debug this. I am able to provide further config options, etc.
You can reduce the memory usage with the following configurations in spark-defaults.conf:
spark.default.parallelism
spark.sql.shuffle.partitions
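For example, in spark-defaults.conf these could look like the lines below; the values are illustrative and depend heavily on your workload:
spark.default.parallelism      400
spark.sql.shuffle.partitions   400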
There is also a difference when you use more than 2000 partitions for spark.sql.shuffle.partitions. You can see it in the Spark source code on GitHub:
private[spark] object MapStatus {
  def apply(loc: BlockManagerId, uncompressedSizes: Array[Long]): MapStatus = {
    if (uncompressedSizes.length > 2000) {
      HighlyCompressedMapStatus(loc, uncompressedSizes)
    } else {
      new CompressedMapStatus(loc, uncompressedSizes)
    }
  }
}
I recommend trying more than 2000 partitions as a test. It can sometimes be faster when you work with very large datasets. According to this, your tasks can be as short as 200 ms. The correct configuration is not easy to find, but depending on your workload it can make a difference of hours.
