spark number of executors when dynamic allocation is enabled

I have an r5.8xlarge AWS cluster with 12 nodes, so there are 6144 cores (12 nodes * 32 vCPU * 16 cores). I have set --executor-cores=5 and enabled dynamic allocation using the spark-submit command below. Even after setting spark.dynamicAllocation.initialExecutors=150 and spark.dynamicAllocation.minExecutors=150, I'm only seeing 70 executors in the Spark UI. What am I doing wrong?
r5.8xlarge nodes have 256GB of memory each, so 3072GB in total (256GB * 12 nodes).
FYI - I'm not including the driver node in this calculation.
--driver-memory 200G --deploy-mode client --executor-memory 37G --executor-cores 7 --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.driver.maxResultSize=0 --conf spark.sql.shuffle.partitions=2000 --conf spark.dynamicAllocation.initialExecutors=150 --conf spark.dynamicAllocation.minExecutors=150

You have 256GB per node and 37G per executor. An executor can only live on one node (an executor cannot be shared between multiple nodes), so each node can host at most 6 executors (floor(256 / 37) = 6). Since you have 12 nodes, the maximum number of executors is 6 * 12 = 72, which explains why you only see 70 executors in your Spark UI (the difference of 2 executors is caused by the memory allocated to the driver, or perhaps by a memory allocation problem on some nodes).
If you want more executors you have to decrease the executor memory. Also, to fully utilize your cluster, make sure the remainder of the node memory divided by the executor memory is as close to zero as possible, e.g.:
256GB per node and 37G per executor: 256 / 37 = 6.9 => 6 executors per node (34G lost per node)
256GB per node and 36G per executor: 256 / 36 = 7.1 => 7 executors per node (only 4G lost per node, so you gain 30G of otherwise unused memory per node)
If you want at least 150 executors, the executor memory should be at most 19G: 150 executors / 12 nodes ≈ 13 executors per node, and floor(256 / 13) = 19G.
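For illustration, a minimal Python sketch of the arithmetic above, using only the numbers from this question (256GB nodes, 12 nodes, a 150-executor target); the function names are made up:

import math

def max_executors(node_memory_gb, nodes, executor_memory_gb):
    # Upper bound on executors: each executor must fit entirely on one node.
    return nodes * (node_memory_gb // executor_memory_gb)

def max_executor_memory_gb(node_memory_gb, nodes, target_executors):
    # Largest whole-GB executor memory that still allows the target executor count.
    executors_per_node = math.ceil(target_executors / nodes)  # 150 / 12 -> 13
    return node_memory_gb // executors_per_node               # 256 // 13 -> 19

print(max_executors(256, 12, 37))            # 72, matching the ~70 executors in the UI
print(max_executor_memory_gb(256, 12, 150))  # 19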

Related

spark-submit criteria to set parameter values

I am so confused about the right criteria to use when it comes to setting the following spark-submit parameters, for example:
spark-submit --deploy-mode cluster --name 'CoreLogic Transactions Curated ${var_date}' \
--driver-memory 4G --executor-memory 4G --num-executors 10 --executor-cores 4 \
/etl/scripts/corelogic/transactions/corelogic_transactions_curated.py \
--from_date ${var_date} \
--to_date ${var_to_date}
One person told me that I am using too many executors and cores, but he did not explain why.
Can someone explain the right criteria for setting these parameters (--driver-memory 4G --executor-memory 4G --num-executors 10 --executor-cores 4) according to my dataset?
The same applies to the following case:
spark = SparkSession.builder \
.appName('DemoEcon PEP hist stage') \
.config('spark.sql.shuffle.partitions', args.shuffle_partitions) \
.enableHiveSupport() \
.getOrCreate()
I am not quite sure what criteria should be used to set the parameter spark.sql.shuffle.partitions.
Can someone help me get this clear in my mind?
Thank you in advance.
This website has the answer I needed: an excellent explanation, with some examples.
http://site.clairvoyantsoft.com/understanding-resource-allocation-configurations-spark-application/
Here is one of those examples:
Case 1 Hardware: 6 nodes, each node with 16 cores and 64 GB RAM
First, on each node 1 core and 1 GB are needed for the operating system and Hadoop daemons, so we have 15 cores and 63 GB RAM per node.
We start with how to choose the number of cores:
Number of cores = concurrent tasks an executor can run
So we might think that more concurrent tasks per executor will give better performance. But research shows that any application with more than 5 concurrent tasks performs poorly, so the optimal value is 5.
This number comes from an executor's ability to run parallel tasks, not from how many cores the system has. So the number 5 stays the same even if we have double (32) the cores in the CPU.
Number of executors:
Coming to the next step: with 5 cores per executor and 15 available cores per node, we get 3 executors per node (15 / 5). We calculate the number of executors on each node and then get the total for the job.
So with 6 nodes and 3 executors per node we get a total of 18 executors. Out of those 18 we need 1 executor (Java process) for the Application Master in YARN, so the final number is 17 executors.
This 17 is the number we give to Spark using --num-executors when running from the spark-submit shell command.
Memory for each executor:
From the step above we have 3 executors per node, and the available RAM on each node is 63 GB.
So the memory for each executor on each node is 63 / 3 = 21 GB.
However, a small memory overhead is also needed to determine the full memory request to YARN for each executor.
The formula for that overhead is max(384 MB, 0.07 * spark.executor.memory).
Calculating that overhead: 0.07 * 21 GB (21 being the 63 / 3 from above) = 1.47 GB.
Since 1.47 GB > 384 MB, the overhead is 1.47 GB.
Subtracting that from the 21 GB above: 21 - 1.47 ≈ 19 GB.
So the executor memory is 19 GB.
Final numbers: 17 executors, 5 cores each, 19 GB of executor memory.
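As a sanity check, a small Python sketch that reproduces this walk-through end to end (the 1 core / 1 GB reservation, the 5-core cap, the Application Master executor, and the 7% overhead are all taken from the example above):

import math

def size_executors(nodes, cores_per_node, ram_gb_per_node, cores_per_executor=5):
    # Reserve 1 core and 1 GB per node for the OS and Hadoop daemons.
    usable_cores = cores_per_node - 1                          # 16 -> 15
    usable_ram = ram_gb_per_node - 1                           # 64 -> 63
    executors_per_node = usable_cores // cores_per_executor    # 15 // 5 = 3
    total_executors = nodes * executors_per_node - 1           # 18 - 1 = 17 (one for the YARN AM)
    mem_per_executor = usable_ram / executors_per_node         # 63 / 3 = 21 GB
    overhead = max(0.384, 0.07 * mem_per_executor)             # max(384 MB, 7%) = 1.47 GB
    executor_memory = math.floor(mem_per_executor - overhead)  # ~19 GB
    return total_executors, cores_per_executor, executor_memory

print(size_executors(6, 16, 64))  # (17, 5, 19)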

How to decide the number of cores and executors in AWS

I have a data size of 5 TB. If I decide to use r4.8xlarge EC2 machines, which have 244 GB of memory and 32 vCPUs each,
how can I decide the number of cores and executors to use?
I tried a few combinations of cores and executors, but the Spark job fails with a heap space issue. I have listed one below (other parameters excluded):
--master yarn
--conf spark.yarn.executor.memoryOverhead=4000
--driver-memory 25G
--executor-memory 240G
--executor-cores 26
--num-executors 13

Spark-submit executor memory issue

I have a 10-node cluster: 8 DNs (256 GB, 48 cores) and 2 NNs. I have a Spark SQL job being submitted to the YARN cluster. Below are the parameters I used for spark-submit.
--num-executors 8 \
--executor-cores 50 \
--driver-memory 20G \
--executor-memory 60G \
As can be seen above, executor-memory is 60GB, but when I check the Spark UI it shows 31GB.
1) Can anyone explain why it is showing 31GB instead of 60GB?
2) Also, please help in setting optimal values for the parameters mentioned above.
I think the memory allocated gets divided into two parts:
1. Storage (caching dataframes/tables)
2. Processing (the one you can see)
31GB is the memory available for processing.
Play around with the spark.memory.fraction property to increase/decrease the memory available for processing.
I would also suggest reducing the executor cores to about 8-10.
My configuration:
spark-shell --executor-memory 40g --executor-cores 8 --num-executors 100 --conf spark.memory.fraction=0.2
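For completeness, a hedged sketch of setting the same knobs through the SparkSession builder instead of the spark-shell flags; the app name is made up, and the 0.2 fraction is simply the value used above, not a recommendation:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("memory-fraction-demo")           # hypothetical app name
         .config("spark.executor.memory", "40g")
         .config("spark.executor.cores", "8")
         .config("spark.memory.fraction", "0.2")    # shrinks the unified (storage + execution) pool
         .getOrCreate())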

Spark increasing the number of executors in yarn mode

I am running Spark over YARN on a 4-node cluster. Each machine in the cluster has 128GB of memory and a 24-core CPU. I run Spark using this command:
spark-shell --master yarn --num-executors 19 --executor-memory 18g --executor-cores 4 --driver-memory 4g
But Spark only launches a maximum of 16 executors. I have the maximum vcore allocation in YARN set to 80 (out of the 94 cores I have), so I was under the impression that this would launch 19 executors, but it only goes up to 16. Also, I don't think these executors are even fully using the allocated vcores.
These are my questions:
Why isn't Spark creating 19 executors? Is there a computation behind the scenes that's limiting it?
What is the optimal configuration for running spark-shell given my cluster configuration, if I want the best possible Spark performance?
driver-cores is set to 1 by default. Will increasing it improve performance?
Here is my YARN config:
yarn.nodemanager.resource.memory-mb: 106496
yarn.scheduler.minimum-allocation-mb: 3584
yarn.scheduler.maximum-allocation-mb: 106496
yarn.scheduler.minimum-allocation-vcores: 1
yarn.scheduler.maximum-allocation-vcores: 20
yarn.nodemanager.resource.cpu-vcores: 20
OK, so going by your configuration we have:
(I am also a newbie at Spark, but below is what I speculate in this scenario.)
24 cores and 128GB RAM per node, and we have 4 nodes in the cluster.
We allocate 1 core and 1 GB of memory per node for overhead, and assume you're running your cluster in YARN client mode.
That leaves us with 127GB RAM and 23 cores on each of the 4 nodes.
As mentioned in the Cloudera blog, YARN runs at optimal performance when at most 5 cores are allocated per executor.
So, 23 x 4 = 92 cores.
If we allocate 5 cores per executor, then 18 executors get 5 cores each and 1 executor gets the remaining 2 cores, or something similar.
So let's assume we have 18 executors in our application, with 5 cores per executor.
Spark distributes these 18 executors across the 4 nodes. Suppose they are distributed as:
1st node : 4 executors
2nd node : 4 executors
3rd node : 5 executors
4th node : 5 executors
Now, since yarn.nodemanager.resource.memory-mb: 106496 means 104GB in your configuration, each node can have at most 104GB of memory allocated (I would suggest increasing this parameter).
For nodes with 4 executors: 104 / 4 = 26GB per executor.
For nodes with 5 executors: 104 / 5 ≈ 21GB per executor.
Now, leaving out 7% of memory for overhead, we get roughly 24GB and 20GB.
So I would suggest using the following configuration:
--num-executors : 18
--executor-memory : 20G
--executor-cores : 5
Also, this assumes you're running your cluster in client mode; if you run in YARN cluster mode, 1 node will be allocated for the driver program and the calculations will need to be done differently.
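A quick Python sketch of the per-node memory arithmetic used above (4 and 5 executors per node are the distribution assumed in this answer):

def executor_memory_gb(node_limit_gb, executors_on_node, overhead_fraction=0.07):
    # Split the NodeManager memory limit across the executors on the node,
    # then leave out roughly 7% for the YARN memory overhead.
    per_executor = round(node_limit_gb / executors_on_node)  # 104 / 5 -> ~21 GB
    return round(per_executor * (1 - overhead_fraction))

print(executor_memory_gb(104, 4))  # ~24 GB
print(executor_memory_gb(104, 5))  # ~20 GB -> the suggested --executor-memory 20G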
I still cannot comment, so I will post this as an answer.
See this question. Could you please decrease the executor memory and try running it again?

Using all resources in Apache Spark with Yarn

I am using Apache Spark with YARN client.
I have 4 worker PCs, each with 8 vCPUs and 30 GB of RAM, in my Spark cluster.
I've set my executor memory to 2G and the number of instances to 33.
My job takes 10 hours to run, and all machines are about 80% idle.
I don't understand the correlation between executor memory and executor instances. Should I have an instance per vCPU? Should I set the executor memory to be the memory of the machine divided by the number of executors per machine?
I believe that you have to use the following command:
spark-submit --num-executors 4 --executor-memory 7G --driver-memory 2G --executor-cores 8 --class "YourClassName" --master yarn-client
The number of executors should be 4, since you have 4 workers. The executor memory should be close to the maximum memory that each YARN node has allocated, roughly ~5-6GB (I assume you have 30GB of total RAM).
You should take a look at the spark-submit parameters and fully understand them.
We were using Cassandra as our data source for Spark. The problem was that there were not enough partitions; we needed to split up the data more. Our mapping of Cassandra partitions to Spark partitions was not fine-grained enough, so we would only generate 10 or 20 tasks instead of hundreds of tasks.
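For illustration only, a minimal PySpark sketch of forcing more partitions (and therefore more tasks) after reading from Cassandra; the keyspace/table names and the target partition count are made up, and the read assumes the spark-cassandra-connector package is on the classpath:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cassandra-partitioning-demo").getOrCreate()

# Hypothetical keyspace and table names.
df = (spark.read.format("org.apache.spark.sql.cassandra")
      .options(keyspace="my_keyspace", table="my_table")
      .load())

# If the source only yields 10-20 partitions, explicitly repartitioning spreads the
# work across many more tasks; 400 is an arbitrary example value.
df = df.repartition(400)
print(df.rdd.getNumPartitions())  # 400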
