I do not understand this option. It seems like it's the maximum number of executors.
If there is not enough memory on the nodes in the cluster, this number does nothing and I get fewer executors than I ask for.
Can someone clarify this please?
The --num-executors option is used after we calculate the number of executors our infrastructure supports from the available memory on the worker nodes.
The calculation can be performed as stated here. This will tell you the number of executors supported by your Hadoop infrastructure, or by the queue that has been assigned to your team.
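As a rough worked example (the figures below are assumptions for illustration, not taken from the question): suppose each worker node has 15 usable cores and 64 GB of usable memory, and each executor is sized at 5 cores and 8 GB. The supported executor count is driven by whichever resource runs out first:

    // Hypothetical cluster figures, purely for illustration (not from the question).
    val workerNodes        = 10
    val usableMemPerNodeGb = 64   // after leaving headroom for the OS and Hadoop daemons
    val usableCoresPerNode = 15   // after leaving a core for the OS and daemons
    val executorMemGb      = 8    // executor memory plus overhead
    val executorCores      = 5

    val byMemory = usableMemPerNodeGb / executorMemGb      // 8 executors fit by memory
    val byCores  = usableCoresPerNode / executorCores      // 3 executors fit by cores
    val executorsPerNode = math.min(byMemory, byCores)     // 3 -> cores are the bottleneck here
    val totalExecutors   = executorsPerNode * workerNodes  // 30 is roughly the useful ceiling

    println(s"--num-executors beyond roughly $totalExecutors will not get containers")

Asking for more than that with --num-executors simply means the extra executors never get containers from the resource manager.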
Related
What happens when the number of Spark tasks is greater than the number of executor cores? How does Spark handle this scenario?
Is this related to this question?
Anyway, you can check this Cloudera How-to. In the "Tuning Resource Allocation" section, it is explained that a Spark application can request executors by turning on the dynamic allocation property. It is also important to set cluster properties such as num-executors, executor-cores, and executor-memory so that your Spark requests fit into what your resource manager has available.
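For reference, a minimal sketch of turning dynamic allocation on from code; it assumes a YARN cluster with the external shuffle service available, and the bounds and sizes below are only placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch, assuming YARN with the external shuffle service available.
    val conf = new SparkConf()
      .setAppName("dynamic-allocation-sketch")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")       // needed for dynamic allocation on YARN
      .set("spark.dynamicAllocation.minExecutors", "2")    // placeholder bounds
      .set("spark.dynamicAllocation.maxExecutors", "20")
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "8g")

    val sc = new SparkContext(conf)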
Yes, this scenario can happen, and in that case some of the cores will be idle. Scenarios where this can happen:
You call coalesce or repartition with a number of partitions < number of cores
You use the default spark.sql.shuffle.partitions (= 200) and have more than 200 cores available. This will be an issue for joins, sorting and aggregation; in that case you may want to increase spark.sql.shuffle.partitions (a short sketch follows below).
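Here is a minimal sketch of raising that setting (it assumes the SparkSession API of Spark 2.x+; the value 800 is just a placeholder for "a few times your total core count"):

    import org.apache.spark.sql.SparkSession

    // Minimal sketch, assuming Spark 2.x+ and the SparkSession API.
    val spark = SparkSession.builder()
      .appName("shuffle-partitions-sketch")
      .getOrCreate()

    // The default is 200; raise it when you have many more cores than that.
    spark.conf.set("spark.sql.shuffle.partitions", "800")

    // Joins, aggregations and sorts from this point on produce 800 shuffle partitions,
    // i.e. 800 tasks per shuffle stage, so every core gets work.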
Note that even if you have enough tasks, some (or most) of them could be empty. This can happen if you have heavy data skew or you do something like groupBy() or a Window without a partitionBy. In that case the empty partitions finish immediately, leaving most of your cores idle.
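To illustrate the Window case, here is a small sketch with a made-up dataset; the column names and values are purely hypothetical:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.rank

    val spark = SparkSession.builder().appName("window-sketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // Tiny made-up dataset, only to illustrate the partitioning behaviour.
    val df = Seq(("a", 3), ("a", 1), ("b", 2), ("b", 5)).toDF("group", "value")

    // Without partitionBy, Spark moves every row into a single partition,
    // so one task does all the work while the remaining cores sit idle:
    val wholeDataset = Window.orderBy("value")

    // With partitionBy, the window is evaluated per group, spread over many partitions:
    val perGroup = Window.partitionBy("group").orderBy("value")

    val ranked = df.withColumn("rank", rank().over(perGroup))
    ranked.show()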
I think the question is a little off beam; what you are asking about is unlikely in practice. Why?
With a lot of data you will have many partitions and you may repartition.
Say you have 10,000 partitions which equates to 10,000 tasks.
An executor core serves a partition, effectively a task (a 1:1 mapping), and when finished moves on to the next task, until all tasks in the stage are finished; then the next stage starts (if it is in the plan / DAG).
At most places you will not have a cluster with 10,000 executor cores available for your app, though there are sites that do.
If you have more cores allocated than needed, they remain idle and unusable for others. With dynamic resource allocation, however, executors can be relinquished. I have worked with YARN and Spark Standalone; how this behaves with Kubernetes I am not sure.
Transformations alter what you need in terms of resources. E.g. an order by may result in fewer partitions and thus contribute to idleness.
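If you want to see this effect, a quick way is to compare partition (and therefore task) counts before and after a transformation. A small sketch with made-up data:

    import org.apache.spark.sql.SparkSession

    // Sketch: comparing partition (and hence task) counts around a transformation.
    val spark = SparkSession.builder().appName("partition-count-sketch").master("local[*]").getOrCreate()

    val df = spark.range(0, 1000000).toDF("value")   // made-up data, one million rows
    println(s"before: ${df.rdd.getNumPartitions} partitions")

    val sorted = df.orderBy("value")                 // shuffle: the output partition count is
                                                     // governed by spark.sql.shuffle.partitions
    println(s"after orderBy: ${sorted.rdd.getNumPartitions} partitions, some of which may be empty")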
Please bear with me because I am still quite new to Spark.
I have a GCP DataProc cluster which I am using to run a large number of Spark jobs, 5 at a time.
The cluster is 1 + 16 nodes, with 8 cores / 40 GB memory / 1 TB storage per node.
Now I might be misunderstanding something or not doing something correctly, but I currently have 5 jobs running at once, and the Spark UI shows that only 34/128 vcores are in use, and they do not appear to be evenly distributed (the jobs were submitted simultaneously, but the distribution is 2/7/7/11/7). There is only one core allocated per running container.
I have used the flags --executor-cores 4 and --num-executors 6, which doesn't seem to have made any difference.
Can anyone offer some insight/resources as to how I can fine tune these jobs to use all available resources?
I have managed to solve the issue: I had no cap on memory usage, so it looked as though all the memory was allocated to just 2 cores per node.
I added the property spark.executor.memory=4G and re-ran the job; it instantly allocated 92 cores.
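For anyone wanting to reproduce this, here is a sketch of the combination of settings; the figures are only a guess at what fits an 8-core / 40 GB worker, so treat them as a starting point rather than a recipe:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch of the combination that made the difference; figures are placeholders.
    val conf = new SparkConf()
      .setAppName("dataproc-utilisation-sketch")
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "4g")     // the missing cap: without it a few executors took all the memory
      .set("spark.executor.instances", "6")

    val sc = new SparkContext(conf)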
Hope this helps someone else!
The Dataproc default configurations should take care of the number of executors. Dataproc also enables dynamic allocation, so executors will only be allocated if needed (according to Spark).
Spark cannot parallelize beyond the number of partitions in a Dataset/RDD. You may need to set the following properties to get good cluster utilization:
spark.default.parallelism: the default number of output partitions from transformations on RDDs (when not explicitly set)
spark.sql.shuffle.partitions: the number of output partitions from aggregations using the SQL API
Depending on your use case, it may make sense to explicitly set partition counts for each operation.
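A minimal sketch of setting these two properties, plus an explicit repartition for a single heavy step (the figure 256 is only a placeholder, roughly 2x the 128 vcores mentioned above):

    import org.apache.spark.sql.SparkSession

    // Sketch: with roughly 128 vcores available, target a few hundred partitions.
    val spark = SparkSession.builder()
      .appName("parallelism-sketch")
      .config("spark.default.parallelism", "256")     // RDD operations without an explicit partition count
      .config("spark.sql.shuffle.partitions", "256")  // DataFrame/SQL shuffles (joins, aggregations)
      .getOrCreate()

    // For a particularly heavy step you can also set the count explicitly, e.g.
    // (inputDf and someKey are hypothetical):
    // val repartitioned = inputDf.repartition(256, inputDf("someKey"))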
TL;DR
The Spark UI shows a different number of cores and memory than what I ask for when using spark-submit.
more details:
I'm running Spark 1.6 in standalone mode.
When I run spark-submit I pass it 1 executor instance with 1 core for the executor and also 1 core for the driver.
What I would expect to happen is that my application would be run with 2 cores in total.
When I check the Environment tab in the UI, I see that it received the correct parameters I gave it; however, it still seems to be using a different number of cores. You can see it here:
This is my spark-defaults.conf that I'm using:
spark.executor.memory 5g
spark.executor.cores 1
spark.executor.instances 1
spark.driver.cores 1
Checking the Environment tab on the Spark UI shows that these are indeed the received parameters, but the UI still shows something else.
Does anyone have any idea what might cause Spark to use a different number of cores than what I pass it? I obviously tried googling it but didn't find anything useful on the topic.
Thanks in advance
TL;DR
Use spark.cores.max instead to define the total number of cores available, and thus limit the number of executors.
In standalone mode, a greedy strategy is used: as many executors are created as the cores and memory available on your worker allow.
In your case, you specified 1 core and 5GB of memory per executor.
The following will be calculated by Spark:
As there are 8 cores available, it will try to create 8 executors.
However, as there is only 30GB of memory available, it can only create 6 executors: each executor will have 5GB of memory, which adds up to 30GB.
Therefore, 6 executors will be created, and a total of 6 cores will be used with 30GB of memory.
Spark basically fulfilled what you asked for. In order to achieve what you want, you can make use of the spark.cores.max option documented here and specify the exact number of cores you need.
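For example, to end up with the single 1-core executor asked for in the question, a sketch could look like this (the same properties can equally go into spark-defaults.conf):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch for standalone mode, mirroring the values in the question: with
    // spark.executor.cores = 1 and spark.cores.max = 1, exactly one 1-core executor
    // is created, no matter how much memory the worker still has free.
    val conf = new SparkConf()
      .setAppName("cores-max-sketch")
      .set("spark.executor.memory", "5g")
      .set("spark.executor.cores", "1")
      .set("spark.cores.max", "1")
      .set("spark.driver.cores", "1")

    val sc = new SparkContext(conf)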
A few side notes:
spark.executor.instances is a YARN-only configuration
spark.driver.cores already defaults to 1, so setting it explicitly is redundant
I am also working on easing the notion of the number of executors in standalone mode; this might get integrated into a future release of Spark and will hopefully help you figure out exactly how many executors you are going to have, without having to calculate it on the go.
I have a Hadoop cluster of 5 nodes where Spark runs in yarn-client mode.
I use --num-executors for the number of executors. The maximum number of executors I am able to get is 20. Even if I specify more, I get only 20 executors.
Is there any upper limit on the number of executors that can be allocated? Is it a configuration, or is the decision made on the basis of the resources available?
Apparently your 20 running executors consume all the available memory. You can try decreasing the executor memory with the spark.executor.memory parameter, which should leave a bit more room for additional executors to spawn.
Also, are you sure that you set the executor count correctly? You can verify your environment settings in the Spark UI by looking at the spark.executor.instances value in the Environment tab.
EDIT: As Mateusz Dymczyk pointed out in the comments, the limited number of executors may be caused not only by exhausted RAM but also by CPU cores. In both cases the limit comes from the resource manager.
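As a sketch of what that adjustment could look like (the figures are placeholders; derive them from your node sizes):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: smaller executors so the resource manager can fit more of them.
    val conf = new SparkConf()
      .setAppName("more-executors-sketch")
      .set("spark.executor.memory", "4g")      // smaller than before, leaving room for more containers
      .set("spark.executor.cores", "2")
      .set("spark.executor.instances", "40")   // what you ask for; the resource manager still caps it

    val sc = new SparkContext(conf)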