Spark job tuning - apache-spark

I have a problem with tuning Spark jobs executing on a YARN cluster. I have a feeling that I'm not getting the most out of my cluster, and on top of that my jobs fail (executors get removed all the time).
I have the following setup:
4 machines
each machine has 10GB of RAM
each machine has 8 cores
8 GB of RAM is allocated for YARN jobs
14 (of 16) virtual cores are allocated for YARN jobs
I have run my Spark job (actually connected to a Jupyter notebook) using different setups, e.g.
pyspark --master yarn --num-executors 7 --executor-cores 4 --executor-memory 3G
pyspark --master yarn --num-executors 7 --executor-cores 7 --executor-memory 2G
pyspark --master yarn --num-executors 11 --executor-cores 4 --executor-memory 1G
I've tried different combinations and none of them seems to work: my executors get destroyed. Additionally, I've read somewhere that increasing spark.yarn.executor.memoryOverhead to 600MB is a good way to avoid losing executors (and I did that), but it doesn't seem to help. How should I set up my job?
It also confuses me that when I look at the ResourceManager UI, it says for my job: VCores Used 8, VCores Total 56. It seems that I'm using a single core per executor, but I don't understand why.
One more thing: when I set up my job, how many partitions should I specify when reading data from HDFS to get maximal performance?

Donald Knuth said premature optimisation is the root of all evil, and I'm sure a faster-running program that fails is of no use. Start by giving all the memory to one executor, say 7GB of the 8GB, and just 1 core. This is a complete waste of cores, but if it works, it proves your application can run on this hardware at all. If even this doesn't work, you should try getting bigger machines. Assuming it works, keep increasing the number of cores for as long as the job still succeeds.
The gist of the argument is: your application requires a certain amount of memory per task, but the number of tasks running in an executor depends on the number of cores. First find the worst-case memory per core for your application, and then you can set executor memory and cores to some multiple of this number.
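As a rough sketch of that arithmetic (all numbers below are made-up placeholders for illustration, not measurements from this cluster):
// Worst-case memory one task needs, estimated from the 1-core probe run (assumed value).
val memoryPerTaskGb = 1.5
// JVM / Spark bookkeeping per executor (assumed value).
val baselineOverheadGb = 1.0
// Candidate core count to try next.
val coresPerExecutor = 4

val executorMemoryGb =
  math.ceil(baselineOverheadGb + coresPerExecutor * memoryPerTaskGb).toInt

// With these numbers this prints: --executor-cores 4 --executor-memory 7G
println(s"--executor-cores $coresPerExecutor --executor-memory ${executorMemoryGb}G")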

Related

Why do I only have 2 Spark executors? [duplicate]


Spark job fails when cluster size is large, succeeds when small

I have a Spark job which takes three inputs and does two outer joins. The data is in key-value format (String, Array[String]). The most important part of the code is:
val partitioner = new HashPartitioner(8000)
val joined = inputRdd1.fullOuterJoin(inputRdd2.fullOuterJoin(inputRdd3, partitioner), partitioner).cache
saveAsSequenceFile(joined, filter="X")
saveAsSequenceFile(joined, filter="Y")
I'm running the job on EMR with an r3.4xlarge driver node and 500 m3.xlarge worker nodes. The spark-submit parameters are:
spark-submit --deploy-mode client --master yarn-client --executor-memory 3g --driver-memory 100g --executor-cores 3 --num-executors 4000 --conf spark.default.parallelism=8000 --conf spark.storage.memoryFraction=0.1 --conf spark.shuffle.memoryFraction=0.2 --conf spark.yarn.executor.memoryOverhead=4000 --conf spark.network.timeout=600s
UPDATE: with this setting, the number of executors seen in the Spark jobs UI was 500 (one per node).
The exception I see in the driver log is the following:
17/10/13 21:37:57 WARN HeartbeatReceiver: Removing executor 470 with no recent heartbeats: 616136 ms exceeds timeout 600000 ms
17/10/13 21:39:04 ERROR ContextCleaner: Error cleaning broadcast 5
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [600 seconds]. This timeout is controlled by spark.network.timeout at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)
...
Some of the things I tried that failed:
I thought the problem was that too many executors were being spawned and the driver had the overhead of tracking all of them. I tried reducing the number of executors by increasing executor-memory to 4g. This did not help.
I tried changing the instance type of the driver to r3.8xlarge; this did not help either.
Surprisingly, when I reduce the number of worker nodes to 300, the job runs fine. Does anyone have any other hypothesis on why this would happen?
Well, this is a bit of a misunderstanding of how Spark's resource allocation works.
According to your information, you have 500 nodes with 4 cores each, so 2000 cores in total. What you are doing with your request is creating 4000 executors with 3 cores each, i.e. asking for 12000 cores, and there is nothing like that in your cluster.
This RPC timeout error is usually associated with how many JVMs you start on the same machine; the machine can't respond in time because too much is happening at once.
You need to know that --num-executors is best tied to the number of nodes you have, and the number of cores per executor should match the cores you have on each node.
For example, an m3.xlarge has 4 cores and 15 GB of RAM. What is the best configuration to run a job there? That depends on what you are planning to do. If you are going to run just one job, I suggest you set it up like this:
spark-submit --deploy-mode client --master yarn-client --executor-memory 10g --executor-cores 4 --num-executors 500 --conf spark.default.parallelism=2000 --conf spark.yarn.executor.memoryOverhead=4000
This will allow your job to run fine. If you don't have a problem fitting your data on your workers, it's better to change default.parallelism to 2000, or you are going to lose a lot of time on shuffles.
But the best approach, I think, is to keep the dynamic allocation that EMR enables by default: just set the number of cores, the parallelism, and the memory, and your job will run like a charm.
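As a minimal sketch of what that could look like in code, using standard Spark property names (the values below are placeholders to adapt to your data, not a tuned recommendation):
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: lean on dynamic allocation instead of a fixed --num-executors.
val conf = new SparkConf()
  .setAppName("outer-joins")
  .set("spark.dynamicAllocation.enabled", "true")   // EMR enables this by default
  .set("spark.shuffle.service.enabled", "true")     // required for dynamic allocation on YARN
  .set("spark.executor.cores", "4")                 // matches an m3.xlarge's 4 cores
  .set("spark.executor.memory", "10g")
  .set("spark.default.parallelism", "2000")
val sc = new SparkContext(conf)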
I experimented with lots of configurations, modifying one parameter at a time, on 500 nodes. I finally got the job to work by lowering the number of partitions in the HashPartitioner from 8000 to 3000.
val partitioner = new HashPartitioner(3000)
So the driver is probably overwhelmed by the large number of shuffles that have to be tracked when there are more partitions, and hence the lower partition count helps.
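If you want to double-check what the join actually produced before the two writes (assuming Spark 1.6+ and the names from the snippet above), a quick sanity check is:
// Should print 3000 with the new HashPartitioner.
println(joined.getNumPartitions)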

Spark Standalone Number Executors/Cores Control

So I have a Spark standalone server with 16 cores and 64GB of RAM. I have both the master and a worker running on the server. I don't have dynamic allocation enabled. I am on Spark 2.0.
What I don't understand is that when I submit my job and specify:
--num-executors 2
--executor-cores 2
Only 4 cores should be taken up. Yet when the job is submitted, it takes all 16 cores and spins up 8 executors regardless, bypassing the num-executors parameter. But if I change the executor-cores parameter to 4, it will adjust accordingly and 4 executors will spin up.
Disclaimer: I really don't know whether --num-executors should work in standalone mode; I haven't seen it used outside YARN.
Note: As pointed out by Marco, --num-executors is no longer in use on YARN.
You can effectively control the number of executors in standalone mode with static allocation (this works on Mesos as well) by combining spark.cores.max and spark.executor.cores, where the number of executors is determined as:
floor(spark.cores.max / spark.executor.cores)
For example:
--conf "spark.cores.max=4" --conf "spark.executor.cores=2"

Why does vcore always equal the number of nodes in Spark on YARN?

I have a Hadoop cluster with 5 nodes, each of which has 12 cores and 32GB of memory. I use YARN as the MapReduce framework, so I have the following YARN settings:
yarn.nodemanager.resource.cpu-vcores=10
yarn.nodemanager.resource.memory-mb=26100
The cluster metrics shown on my YARN cluster page (http://myhost:8088/cluster/apps) then displayed that VCores Total is 40. So far so good!
Then I installed Spark on top of it and used spark-shell in yarn-client mode.
I ran one Spark job with the following configuration:
--driver-memory 20480m
--executor-memory 20000m
--num-executors 4
--executor-cores 10
--conf spark.yarn.am.cores=2
--conf spark.yarn.executor.memoryOverhead=5600
I set --executor-cores to 10 and --num-executors to 4, so logically there should be 40 VCores Used in total. However, when I check the same YARN cluster page after the Spark job has started running, there are only 4 VCores Used and 4 VCores Total.
I also found a parameter in capacity-scheduler.xml called yarn.scheduler.capacity.resource-calculator:
"The ResourceCalculator implementation to be used to compare Resources in the scheduler. The default i.e. DefaultResourceCalculator only uses Memory while DominantResourceCalculator uses dominant-resource to compare multi-dimensional resources such as Memory, CPU etc."
I then changed that value to DominantResourceCalculator.
But when I restarted YARN and ran the same Spark application, I still got the same result: the cluster metrics still showed that VCores Used is 4. I also checked the CPU and memory usage on each node with htop and found that none of the nodes had all 10 CPU cores fully occupied. What could be the reason?
I also tried to run the same Spark job in a fine-grained way, with --num-executors 40 --executor-cores 1; when I checked the CPU status on each worker node again, all CPU cores were fully occupied.
I was wondering the same, but changing the resource-calculator worked for me. This is how I set the property:
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
Check in the YARN UI how many containers and vcores are assigned to the application; with the change, the number of containers should be executors + 1, and the vcores should be (executor-cores * num-executors) + 1.
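Plugging the numbers from the question into that formula (just a worked example; note the question also requested spark.yarn.am.cores=2, which is why the next answer arrives at 42 rather than 41):
// --num-executors 4, --executor-cores 10
val numExecutors = 4
val executorCores = 10
val containers = numExecutors + 1                // executors + 1 application master = 5
val vcores = executorCores * numExecutors + 1    // (10 * 4) + 1 = 41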
Without setting the YARN scheduler to FairScheduler, I saw the same thing. The Spark UI showed the right number of tasks, though, suggesting nothing was wrong. My cluster showed close to 100% CPU usage, which confirmed this.
After setting FairScheduler, the YARN Resources looked correct.
Your executors take 10 cores each, plus 2 cores for the Application Master: that's 42 cores requested when you only have 40 vCores in total.
Reduce executor cores to 8 and make sure to restart each NodeManager.
Also modify yarn-site.xml and set these properties:
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.minimum-allocation-vcores
yarn.scheduler.maximum-allocation-vcores

Using all resources in Apache Spark with Yarn

I am using Apache Spark with Yarn client.
I have 4 worker PCs with 8 vcpus each and 30 GB of ram in my spark cluster.
I set my executor memory to 2G and the number of instances to 33.
My job takes 10 hours to run and all machines are about 80% idle.
I don't understand the correlation between executor memory and executor instances. Should I have an instance per vCPU? Should I set the executor memory to the memory of the machine divided by the number of executors per machine?
I believe that you have to use the following command:
spark-submit --num-executors 4 --executor-memory 7G --driver-memory 2G --executor-cores 8 --class "YourClassName" --master yarn-client
The number of executors should be 4, since you have 4 workers. The executor memory should be close to the maximum memory that each YARN node has allocated, roughly ~5-6GB (I assume you have 30GB total RAM).
You should take a look at the spark-submit parameters and make sure you fully understand them.
We were using Cassandra as our data source for Spark. The problem was that there were not enough partitions; we needed to split up the data more. Our mapping of Cassandra partitions to Spark partitions was not fine-grained enough, and we would only generate 10 or 20 tasks instead of hundreds of tasks.
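A minimal sketch of one way to force more tasks in that situation, assuming the DataStax spark-cassandra-connector; the keyspace/table names and the 1000-partition target are made up:
import com.datastax.spark.connector._   // DataStax spark-cassandra-connector

// sc is the SparkContext (e.g. the one spark-shell provides).
// repartition forces a shuffle so downstream stages get hundreds of tasks
// instead of the 10-20 that the Cassandra-to-Spark partition mapping produced.
val rows = sc.cassandraTable("my_keyspace", "my_table").repartition(1000)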
