I am using Apache Spark in YARN client mode.
I have 4 worker PCs with 8 vcpus each and 30 GB of RAM in my Spark cluster.
I set my executor memory to 2G and the number of executor instances to 33.
My job is taking 10 hours to run and all machines are about 80% idle.
I don't understand the relationship between executor memory and executor instances. Should I have one instance per vcpu? Should I set the executor memory to the machine's memory divided by the number of executors per machine?
I believe that you have to use the following command:
spark-submit --num-executors 4 --executor-memory 7G --driver-memory 2G --executor-cores 8 --class "YourClassName" --master yarn-client
The number of executors should be 4, since you have 4 workers. The executor memory should be close to the maximum memory that each YARN node has allocated, roughly ~5-6 GB (I assume you have 30 GB of total RAM).
You should take a look at the spark-submit parameters and fully understand them.
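As a rough sketch of where those numbers come from, assuming the 30 GB is spread across the 4 workers (about 7.5 GB per node) and leaving headroom for the OS and the YARN memory overhead; the class name and jar below are placeholders:

# one executor per worker, all 8 vcpus each, and executor memory a bit under the per-node allocation
spark-submit --master yarn-client --num-executors 4 --executor-cores 8 \
  --executor-memory 6G --driver-memory 2G \
  --class YourClassName your-app.jar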
We were using Cassandra as our data source for Spark. The problem was that there were not enough partitions; we needed to split the data up more. Our mapping of Cassandra partitions to Spark partitions produced splits that were too large, so we would only generate 10 or 20 tasks instead of hundreds of tasks.
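For the Cassandra case specifically, a sketch of one way to force more Spark partitions is to shrink the connector's input split size (the property below uses the spark-cassandra-connector 2.x spelling; 1.x spells it spark.cassandra.input.split.size_in_mb), or simply call repartition() on the data after reading it. The class name and jar are placeholders:

# smaller input splits => more Spark partitions => more, shorter tasks
spark-submit --master yarn-client \
  --conf spark.cassandra.input.split.sizeInMB=64 \
  --class YourClassName your-app.jar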
I was able to create YARN containers for my spark jobs.
I have come across various blogs and YouTube videos about how to use --executor-cores efficiently (values of 4-6 give good throughput) and --executor-memory after reserving 1 CPU core and 1 GB of RAM for the Hadoop daemons, and I determined the right values for each executor.
I also came across articles like these.
I am checking how many containers are created by YARN from the spark shell, and I am not able to understand how the containers are allocated.
For example, I have created an EMR cluster with 1 master node of type m5.xlarge (4 vCores, 16 GiB) and 1 core node of type c5.2xlarge (8 vCores, 16 GiB RAM).
When I create the spark shell with the following command:
spark-shell --num-executors=6 --executor-cores=5 --conf spark.executor.memoryOverhead=1G --executor-memory 1G --driver-memory 1G
I see that 6 executors, including a driver, are created with 5 cores each, for a total of 25 cores.
However, the metrics from the Hadoop history server do not reflect the right calculations.
I am very confused how, in the Spark UI, more cores than are available were allocated to each executor. The total vCores in the cluster is 8 (counting only the core node), but a total of 25 cores are allocated to the executors.
Can someone please explain what I am missing?
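For what it's worth, a quick way to cross-check what YARN actually granted, independent of the Spark UI, is the yarn CLI on the master node (the application ID below is a placeholder; substitute whatever ID YARN assigned to the spark-shell session):

yarn node -list                                 # NodeManagers and how many containers each is running
yarn application -list                          # running applications and their IDs
yarn application -status <application_id>       # full report for one application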
I have an EMR cluster for Spark with the following configuration of 2 instances:
r4.2xlarge
8 vCore
So my total vCore count is 16, and the same is reflected in the YARN vCores.
I have submitted a Spark streaming job with the parameters --num-executors 2 --executor-cores 5. So I was assuming it would use 2*5 = 10 vCores in total for the executors, but it is only using 2 cores in total from the cluster (+1 for the driver).
And in Spark, the job is still running with 10 parallel tasks (2*5). It seems like it's just running 5 threads within each executor.
I have read in different questions and in the documentation that --executor-cores uses actual vCores, but here it is only running tasks as threads.
Is my understanding correct here?
I have a problem with tuning Spark jobs executing on a YARN cluster. I have a feeling that I'm not getting the most out of my cluster, and additionally my jobs fail (executors get removed all the time).
I have the following setup:
4 machines
each machine has 10GB of RAM
each machine has 8 cores
8 GB of RAM are allocated for YARN jobs
14 (of 16) virtual cores are allocated for YARN jobs
I have run my spark job (actually connected to a jupyter notebook) using different setups, e.g.
pyspark --master yarn --num-executors 7 --executor-cores 4 --executor-memory 3G
pyspark --master yarn --num-executors 7 --executor-cores 7 --executor-memory 2G
pyspark --master yarn --num-executors 11 --executor-cores 4 --executor-memory 1G
I've tried different combinations and none of them seems to work, as my executors get destroyed. Additionally, I've read somewhere that it is a good idea to increase spark.yarn.executor.memoryOverhead to 600MB to avoid losing executors (and I did that), but that doesn't seem to help. How should I set up my job?
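A sketch of how that overhead bump looks on the command line (the bare number is interpreted as MB, and the 3G / 4-core shape is just the first combination above):

pyspark --master yarn --num-executors 7 --executor-cores 4 --executor-memory 3G --conf spark.yarn.executor.memoryOverhead=600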
Additionally, it confuses me that when I look at the ResourceManager UI, it says for my job: vCores used: 8, vCores total: 56. It seems that I'm using a single core per executor, but I don't understand why.
One more thing: when I set up my job, how many partitions should I specify when reading data from HDFS to get maximal performance?
Donald Knuth said premature optimisation is the root of all evil. I am sure a faster-running program that fails is of no use. Start by giving all the memory to one executor, say 7GB/8GB, and just 1 core. This is a complete waste of cores, but if it works, it proves your application can possibly run on this hardware. If even this doesn't work, you should try getting bigger machines. Assuming it works, keep increasing the number of cores for as long as it still works.
The gist of the argument is: your application requires a certain amount of memory per task, but the number of tasks running per executor depends on the number of cores. First find the worst-case memory per core for your application, and then you can set executor memory and cores to some multiple of this number.
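A sketch of that progression with the numbers from this cluster (7G leaves room under the 8 GB YARN allocation per node once the default memory overhead is added):

pyspark --master yarn --num-executors 1 --executor-cores 1 --executor-memory 7G   # prove the job can run at all
pyspark --master yarn --num-executors 1 --executor-cores 2 --executor-memory 7G   # same memory, more cores
pyspark --master yarn --num-executors 1 --executor-cores 4 --executor-memory 7G   # keep going while it still works

The largest core count that still succeeds with 7G gives the memory-per-core ratio to scale the real executor configuration from.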
I am running Spark over YARN on a 4-node cluster. Each machine in the cluster has 128 GB of memory and a 24-core CPU. I run Spark using this command:
spark-shell --master yarn --num-executors 19 --executor-memory 18g --executor-cores 4 --driver-memory 4g
But Spark only launches 16 executors at most. I have the maximum vcore allocation in YARN set to 80 (out of the 94 cores I have). So I was under the impression that this would launch 19 executors, but it only goes up to 16 executors. Also, I don't think even these executors are fully using the allocated vCores.
These are my questions:
Why isn't Spark creating 19 executors? Is there a computation behind the scenes that's limiting it?
What is the optimal configuration to run spark-shell given my cluster configuration, if I wanted to get the best possible Spark performance?
driver-cores is set to 1 by default. Will increasing it improve performance?
Here is my YARN config:
yarn.nodemanager.resource.memory-mb: 106496
yarn.scheduler.minimum-allocation-mb: 3584
yarn.scheduler.maximum-allocation-mb: 106496
yarn.scheduler.minimum-allocation-vcores: 1
yarn.scheduler.maximum-allocation-vcores: 20
yarn.nodemanager.resource.cpu-vcores: 20
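One plausible explanation for the 16-executor cap, sketched below, assuming the default capacity scheduler (which rounds every container request up to a multiple of yarn.scheduler.minimum-allocation-mb):

# all values in MB
EXECUTOR_MEM=18432                                                  # --executor-memory 18g
OVERHEAD=1843                                                       # default memoryOverhead: max(384, 10% of executor memory)
MIN_ALLOC=3584                                                      # yarn.scheduler.minimum-allocation-mb
NODE_MEM=106496                                                     # yarn.nodemanager.resource.memory-mb
REQUEST=$((EXECUTOR_MEM + OVERHEAD))                                # 20275 MB asked from YARN per executor
CONTAINER=$(( (REQUEST + MIN_ALLOC - 1) / MIN_ALLOC * MIN_ALLOC ))  # rounded up to 21504 MB
echo "$(( NODE_MEM / CONTAINER )) containers per node"              # 4 per node -> 16 executors on 4 nodes, plus the driver

If that is what is happening, trimming --executor-memory so that memory plus overhead stays at or under 5 x 3584 = 17920 MB would let 5 executor containers fit per node, leaving room for all 19 executors.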
Ok so going by your configurations we have:
(I am also a newbie at Spark but below is what I speculate in this scenario)
24 cores and 128 GB of RAM per node, and we have 4 nodes in the cluster.
We allocate 1 core and 1 GB of memory per node for overhead, and we assume you're running your cluster in YARN client mode.
That leaves us 127 GB of RAM and 23 cores per node, across 4 nodes.
As mentioned in the Cloudera blog, YARN runs at optimal performance when at most 5 cores are allocated per executor.
So, 23 x 4 = 92 cores.
If we allocate 5 cores per executor, then 18 executors get 5 cores and 1 executor gets 2 cores, or some similar split.
So let's assume we have 18 executors in our application and 5 cores per executor.
Spark distributes these 18 executors across the 4 nodes; suppose they are distributed as:
1st node : 4 executors
2nd node : 4 executors
3rd node : 5 executors
4th node : 5 executors
Now, as 'yarn.nodemanager.resource.memory-mb: 106496' is set to 104 GB in your configuration, each node can have at most 104 GB of memory allocated (I would suggest increasing this parameter).
For nodes with 4 executors: 104/4 = 26 GB per executor.
For nodes with 5 executors: 104/5 ≈ 21 GB per executor.
Now, leaving out 7% of memory for overhead, we get 24 GB and 20 GB.
So I would suggest using the following configuration:
--num-executors : 18
--executor-memory : 20G
--executor-cores : 5
Also, this assumes you're running your cluster in client mode; if you run in YARN cluster mode, 1 node will be allocated for the driver program and the calculations will need to be done differently.
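Put together as a spark-shell invocation mirroring the original command (driver memory kept at the original 4g):

spark-shell --master yarn --num-executors 18 --executor-cores 5 --executor-memory 20G --driver-memory 4g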
I still cannot comment, so I'll post this as an answer.
See this question. Could you please decrease the executor memory and try running this again?
I did some testing on an r3.8xlarge cluster; each instance has 32 cores and 244 GB of memory.
If I set spark.executor.cores=16 and spark.executor.memory=94G, there are 2 executors per instance, but when I set spark.executor.memory larger than 94G, there is only one executor per instance.
If I set spark.executor.cores=8 and spark.executor.memory=35G, there are 4 executors per instance, but when I set spark.executor.memory larger than 35G, there are no more than 3 executors per instance.
So, my question is: how does the number of executors follow from the memory setting? What's the formula? I thought Spark simply allocated 70% of the physical memory to the executors, but it seems I'm wrong...
In YARN mode you need to set the number of executors with --num-executors and the executor memory with --executor-memory. Here's an example:
spark-submit --master yarn-cluster --executor-memory 6G --num-executors 31 --executor-cores 32 example.jar Example
Now each executor requests a container from YARN with 6G plus the memory overhead and 1 core (note that with YARN's default, memory-only resource calculator, the ResourceManager reports each container with a single vcore regardless of --executor-cores).
More info in the Spark documentation.
Regarding the behavior you're seeing, it sounds like the amount of memory available to your YARN NodeManagers is actually less than the 244 GB that is available to the OS. To verify this, take a look at your YARN ResourceManager web UI; you can see how much memory is available in total across the cluster. This is determined by yarn.nodemanager.resource.memory-mb in yarn-site.xml.
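As a rough sketch of the resulting arithmetic (the NodeManager memory below is an assumed placeholder; substitute whatever the ResourceManager UI reports for your nodes, and note that rounding to yarn.scheduler.minimum-allocation-mb can shift the result slightly):

# all values in MB
NM_MEMORY=200000                        # placeholder: use the per-node value shown in the RM UI
EXECUTOR_MEMORY=35840                   # spark.executor.memory=35G
OVERHEAD=$((EXECUTOR_MEMORY / 10))      # default spark.yarn.executor.memoryOverhead: max(384MB, 10%)
echo $(( NM_MEMORY / (EXECUTOR_MEMORY + OVERHEAD) )) "executors per instance, roughly"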
To answer your question about how the number of executors is determined: in YARN, if you're using Spark with spark.dynamicAllocation.enabled set to true, the number of executors is bounded below by spark.dynamicAllocation.minExecutors and above by spark.dynamicAllocation.maxExecutors.
Other than that, you're subject to YARN's resource allocation, which, for most schedulers, will allocate resources to fill up the queue your job runs in.
In the situation where you have a totally unutilized cluster with one YARN queue and you submit a job to it, the Spark job will continue to add executors with the given number of cores and memory until the entire cluster is full (or there are not enough cores/memory for an additional executor to be allocated).
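A minimal sketch of what that dynamic-allocation setup looks like on the command line (the min/max bounds and the class name/jar are placeholders; the shuffle service line is typically required for dynamic allocation on YARN):

spark-submit --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.shuffle.service.enabled=true \
  --executor-cores 4 --executor-memory 6G \
  --class YourClassName your-app.jar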