Spark: use of driver-memory parameter - apache-spark

When I submit this command, my job fails with the error "Container is running beyond physical memory limits":
spark-submit --master yarn --deploy-mode cluster --executor-memory 5G --total-executor-cores 30 --num-executors 15 --conf spark.yarn.executor.memoryOverhead=1000
But when I add the parameter --driver-memory set to 5GB (or higher), the job finishes without error:
spark-submit --master yarn --deploy-mode cluster --executor-memory 5G --total-executor-cores 30 --num-executors 15 --driver-memory 5G --conf spark.yarn.executor.memoryOverhead=1000
Cluster info: 6 nodes with 120GB of Memory. YARN Container Memory Minimum: 1GB
The question is: what difference does setting this parameter make?

If increasing the driver memory helps your job complete successfully, it means that a lot of data is coming back to the driver from the executors. Typically, the driver program is responsible for collecting results back from each executor after the tasks are executed. So, in your case, increasing the driver memory allowed it to hold more of those results.
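For instance, the driver allocation can be made explicit as below (a minimal sketch; the class and jar names are placeholders, since your command omits them):
spark-submit --master yarn --deploy-mode cluster \
  --driver-memory 5G \
  --executor-memory 5G \
  --num-executors 15 \
  --conf spark.yarn.executor.memoryOverhead=1000 \
  --class com.example.MyJob \
  my-job.jar
# In cluster mode the driver runs inside the YARN ApplicationMaster container,
# so results brought back by actions such as collect(), as well as broadcast
# variables, have to fit in this 5G driver heap.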
If you read up on executor memory, driver memory, and the way the driver interacts with executors, you will get better clarity on the situation you are in.
Hope it helps to some extent.

Related

Spark client mode - YARN allocates a container for driver?

I am running Spark on YARN in client mode, so I expect that YARN will allocate containers only for the executors. Yet, from what I am seeing, it seems like a container is also allocated for the driver, and I don't get as many executors as I was expecting.
I am running spark-submit on the master node. The parameters are as follows:
sudo spark-submit --class ... \
--conf spark.master=yarn \
--conf spark.submit.deployMode=client \
--conf spark.yarn.am.cores=2 \
--conf spark.yarn.am.memory=8G \
--conf spark.executor.instances=5 \
--conf spark.executor.cores=3 \
--conf spark.executor.memory=10G \
--conf spark.dynamicAllocation.enabled=false \
While running this application, Spark UI's Executors page shows 1 driver and 4 executors (5 entries in total). I would expect 5, not 4 executors.
At the same time, YARN UI's Nodes tab shows that on the node that isn't actually used (at least according to Spark UI's Executors page...) there's a container allocated, using 9GB of memory. The rest of the nodes have containers running on them, 11GB of memory each.
Because in my spark-submit the driver has 2GB less memory than the executors, I think that the 9GB container allocated by YARN is for the driver.
Why is this extra container allocated? How can I prevent this?
(Spark UI and YARN UI screenshots)
Update after answer by Igor Dvorzhak
I was falsely assuming that the AM would run on the master node and that it would contain the driver app (so that the spark.yarn.am.* settings would relate to the driver process).
So I've made the following changes:
set the spark.yarn.am.* settings to defaults (512m of memory, 1 core)
set the driver memory through spark.driver.memory to 8g
did not try to set driver cores at all, since spark.driver.cores is only valid in cluster mode
Because the AM with default settings takes up 512m plus 384m of overhead (896m in total), its container fits into the spare 1GB of free memory on a worker node.
Spark gets the 5 executors it requested, and the driver memory matches the 8g setting. Everything works as expected now.
(Spark UI and YARN UI screenshots)
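For reference, the adjusted submit looked roughly like this (a sketch: the class and jar names are placeholders, and the spark.yarn.am.* settings are simply left at their 512m / 1 core defaults):
sudo spark-submit --class com.example.MyApp \
  --conf spark.master=yarn \
  --conf spark.submit.deployMode=client \
  --conf spark.driver.memory=8g \
  --conf spark.executor.instances=5 \
  --conf spark.executor.cores=3 \
  --conf spark.executor.memory=10G \
  --conf spark.dynamicAllocation.enabled=false \
  my-app.jar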
The extra container is allocated for the YARN application master:
In client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
Even though in client mode the driver runs in the client process, the YARN application master still runs on YARN and requires a container allocation.
There is no way to prevent container allocation for the YARN application master.
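The application master's container can, however, be kept small in client mode, since it only negotiates resources there. A sketch using the standard spark.yarn.am.* properties (the class and jar names are placeholders):
spark-submit --master yarn --deploy-mode client \
  --conf spark.yarn.am.memory=512m \
  --conf spark.yarn.am.cores=1 \
  --class com.example.MyApp \
  my-app.jar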
For reference, a similar question was asked some time ago: Resource Allocation with Spark and Yarn.
You can specify the driver memory and number of executors in spark-submit as below:
spark-submit --jars..... --master yarn --deploy-mode cluster --driver-memory 2g --driver-cores 4 --num-executors 5 --executor-memory 10G --executor-cores 3
Hope it helps you.

Spark on YARN: only one executor on one node works and the allocation is random

I run spark-shell with this command:
./bin/spark-shell --master yarn --num-executors 16 \
  --executor-memory 14G --executor-cores 8
I have four nodes; every node has 16G of memory and 4 cores.
After I changed num-executors, the Spark web UI tells me it worked,
but only one executor, on the node named "slave", is actually running.
We can see that TRANSFER1 and TRANSFER2 are empty.
How can I solve this? When I submit a job the situation does not change.
Should I change worker-instances, num-executors, or something else?

Spark - Capping the number of CPU cores or memory of slave servers

I am using Spark 2.1. This question is for use cases where some of the Spark slave servers run other apps as well. Is there a way to tell the Spark master to use only a certain number of CPU cores or a certain amount of memory on a slave server?
Thanks.
To limit the number of cores used by a Spark job, add the --total-executor-cores option to your spark-submit command. To limit the amount of memory used by each executor, use the --executor-memory option. For example:
spark-submit --total-executor-cores 10 \
--executor-memory 8g \
--class com.example.SparkJob \
SparkJob.jar
This also works with spark-shell:
spark-shell --total-executor-cores 10 \
--executor-memory 8g
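The same caps can also be expressed as configuration properties rather than command-line flags (a sketch; spark.cores.max is the property that --total-executor-cores sets on a standalone cluster, and spark.executor.memory corresponds to --executor-memory):
spark-submit --conf spark.cores.max=10 \
  --conf spark.executor.memory=8g \
  --class com.example.SparkJob \
  SparkJob.jar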

Yarn Spark HBase - ExecutorLostFailure Container killed by YARN for exceeding memory limits

I am trying to read a big HBase table (~100GB in size) in Spark.
Spark Version : 1.6
Spark submit parameters:
spark-submit --master yarn-client --num-executors 10 --executor-memory 4G \
  --executor-cores 4 \
  --conf spark.yarn.executor.memoryOverhead=2048
Error: ExecutorLostFailure Reason: Container killed by YARN for exceeding memory limits. 4.5GB of 3GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
I have tried setting spark.yarn.executor.memoryOverhead to 100000 and still get a similar error.
I don't understand why Spark doesn't spill to disk if the memory is insufficient, or whether YARN is causing the problem here.
Please share the code you use to read the table, and also your cluster architecture.
Container killed by YARN for exceeding memory limits. 4.5GB of 3GB physical memory used.
Try the following if you have 128GB of RAM:
spark-submit \
  --master yarn-client \
  --num-executors 4 \
  --executor-memory 100G \
  --executor-cores 4 \
  --conf spark.yarn.executor.memoryOverhead=20480
The situation is clear: you are running out of RAM. Try to rewrite your code in a more disk-friendly way.
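For context, YARN sizes each executor container as the executor memory plus the memory overhead, so the numbers above work out roughly as follows (a sketch of the arithmetic, not cluster-specific advice):
# container size requested from YARN ≈ spark.executor.memory + spark.yarn.executor.memoryOverhead
# original submit:    4G  + 2048MB  ≈ 6GB   per executor container
# suggested submit:   100G + 20480MB ≈ 120GB per executor container
# YARN kills a container as soon as its physical memory use exceeds this limit.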

Spark executor GC taking long

I am running a Spark job on a standalone cluster and I noticed that after some time GC starts taking long and the scary red color begins to show.
Here are the resources available:
Cores in use: 80 Total, 76 Used
Memory in use: 312.8 GB Total, 292.0 GB Used
Job details:
spark-submit --class com.mavencode.spark.MonthlyReports \
  --master spark://192.168.12.14:7077 \
  --deploy-mode cluster --supervise \
  --executor-memory 16G --executor-cores 4 \
  --num-executors 18 --driver-cores 8 \
  --driver-memory 20G montly-reports-assembly-1.0.jar
How do I fix the GC time taking so long?
I had the same problem and could resolve it by using Parallel GC instead of G1GC. You may add the following options to the executors' additional Java options in the submit request:
-XX:+UseParallelGC -XX:+UseParallelOldGC
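With spark-submit, that could look roughly like the following (reusing the parameters from the question; spark.executor.extraJavaOptions is the standard property for passing extra JVM flags to executors):
spark-submit --class com.mavencode.spark.MonthlyReports \
  --master spark://192.168.12.14:7077 \
  --deploy-mode cluster --supervise \
  --executor-memory 16G --executor-cores 4 \
  --num-executors 18 --driver-cores 8 \
  --driver-memory 20G \
  --conf "spark.executor.extraJavaOptions=-XX:+UseParallelGC -XX:+UseParallelOldGC" \
  montly-reports-assembly-1.0.jar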
