Max possible number of executors in cluster - apache-spark

Let's say I have 5 worker nodes in a cluster and each node has 48 cores and 256 GB RAM.
Then what are the maximum number of executors possible in the clusters?
Will the cluster have 5*48 = 240 executors, or only 5 executors?
Or are there other factors that decide the number of executors in a cluster? If so, what are they?
Thanks.

The number of executors is related to the amount of parallelism your application needs. You could create 5*48 executors with 1 core each, but there are other processes to account for, such as memory overhead, cluster-management daemons, and the scheduler, so you may need to reserve 2-5 cores per node for those management processes.
I don't know what architecture your cluster uses, but this article is a good start if you are using Hadoop: https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html
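To make the arithmetic concrete, here is a back-of-the-envelope sizing sketch in Scala for the 5 x 48-core x 256 GB cluster in the question. The "reserve ~1 core per node, ~5 cores per executor" figures are common rules of thumb and assumptions on my part, not a prescription:

```scala
// Rough executor sizing for 5 nodes x 48 cores x 256 GB (figures from the question).
object ExecutorSizing extends App {
  val nodes          = 5
  val coresPerNode   = 48
  val memPerNodeGb   = 256

  val reservedCores    = 1   // left for the OS / cluster-manager daemons (assumption)
  val coresPerExecutor = 5   // common rule of thumb for good task/HDFS throughput

  val executorsPerNode = (coresPerNode - reservedCores) / coresPerExecutor // 9
  val totalExecutors   = executorsPerNode * nodes                          // 45
  val memPerExecutorGb = (memPerNodeGb - 1) / executorsPerNode             // ~28 GB, before overhead

  println(s"$totalExecutors executors, $coresPerExecutor cores and ~$memPerExecutorGb GB each")
}
```

So with these assumptions you land somewhere between the two extremes in the question: neither 240 one-core executors nor 5 node-sized ones.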

Related

Spark: How to tune memory / cores given my cluster?

There are several threads with significant votes that I am having difficulty interpreting, perhaps because the jargon of 2016 differs from today's (or maybe I am just not getting it):
Apache Spark: The number of cores vs. the number of executors
How to tune spark executor number, cores and executor memory?
Azure/Databricks offers some best practices on cluster sizing: https://learn.microsoft.com/en-us/azure/databricks/clusters/cluster-config-best-practices
So for my workload, let's say I am interested in (using current Databricks jargon):
1 Driver: 64 GB of memory and 8 cores
1 Worker: 256 GB of memory and 64 cores
Drawing on the Microsoft link above, fewer workers should in turn lead to less shuffling, which is among the most costly Spark operations.
So, I have 1 driver and 1 worker. How, then, do I translate these terms into what is discussed here on SO in terms of "nodes" and "executors"?
Ultimately, I would like to set my Spark config "correctly" so that cores and memory are used as efficiently as possible.
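For what it's worth: as far as I understand, a Databricks "worker" corresponds to a worker node that by default runs a single executor. On a generic Spark cluster you could instead split that 64-core worker yourself. A minimal sketch of one such split, where every number is an assumption rather than a recommendation:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical split of a single 64-core / 256 GB worker into executors,
// assuming ~5 cores per executor and headroom for the OS and Spark overhead.
val spark = SparkSession.builder()
  .appName("sizing-sketch")
  .config("spark.executor.cores", "5")          // 12 executors x 5 cores = 60 of 64 cores
  .config("spark.executor.instances", "12")
  .config("spark.executor.memory", "18g")       // 12 x (18g + overhead) stays under 256 GB
  .config("spark.executor.memoryOverhead", "2g")
  .getOrCreate()
```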

In Spark, is it better to have many small workers or few bigger workers

A Spark cluster consists of a driver that distributes tasks to multiple worker nodes. Each worker can take on a number of tasks equal to the number of cores available. So I'd think that the speed at which a task finishes depends on the total available cores.
Consider the following cluster configurations, using AWS EC2 as an example:
2 m5.4xlarge (16 vCPU/cores, 64GB RAM) workers for a total of 32 cores / 128GB RAM
OR
8 m5.xlarge (4 vCPU/cores, 16GB RAM) workers for a total of 32 cores / 128GB RAM
I'm using those instances as an example; it's not about those instances specifically but about the general idea that you can have the same total cores + RAM with different configurations. Would there be any difference in performance between those two cluster configurations? Both would have the same total amount of cores and RAM, and the same RAM/core ratio. For what kind of job would you choose one, and for what kind the other? Some thoughts I have on this myself:
The configuration with 8 smaller instances might have higher total network bandwidth, since each worker has its own connection.
The configuration with 2 bigger instances might be more efficient when shuffling, since more cores can share the memory on a worker instead of having to shuffle across the network, giving lower network overhead.
The configuration with 8 smaller instances has better resiliency, since if one worker fails it's only one out of eight failing rather than one out of two.
Do you agree with the statements above? What other considerations would you make when choosing between different configurations with equal amounts of total RAM and cores?

In Spark, can I define more executors than available cores?

Say I have a total of 4 cores.
What happens if I define the number of executors as 8?
Can a single core be shared between 2 executors?
Can the number of cores for an executor be a fraction?
What is the impact on performance with this kind of config?
This is what I observed in Spark standalone mode:
My system has 4 cores in total.
If I run the spark-shell command with spark.executor.cores=2,
then 2 executors are created with 2 cores each.
But if I configure the number of executors to be more than the available cores,
then only one executor is created, using the maximum cores of the system.
The number of cores will never be a fractional value;
if you assign a fractional value in the configuration, you will end up with an exception.
Feel free to edit/correct the post if anything is wrong.
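To make the observation above concrete, here is a small sketch of the arithmetic standalone mode appears to follow on a 4-core machine. It is an illustration of the reported behaviour, not Spark's actual scheduler code:

```scala
// Illustration of the standalone behaviour reported above on a 4-core machine.
val workerCores = 4

// With spark.executor.cores=n set, the worker is carved into floor(4 / n) executors.
def executorsWithCoresEach(n: Int): Int = workerCores / n

println(executorsWithCoresEach(2)) // 2 executors with 2 cores each
println(executorsWithCoresEach(4)) // 1 executor using all 4 cores
// Leaving spark.executor.cores unset gives one executor with all the cores,
// and a fractional value such as 1.5 is rejected with an exception.
```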

Where does a Spark job run in a cluster of 2 nodes when the spark-submit configuration can easily be accommodated on a single node? (cluster mode)

A Spark cluster has 2 worker nodes.
Node 1: 64 GB, 8 cores.
Node 2: 64 GB, 8 cores.
Now if I submit a Spark job using spark-submit in cluster mode with
2 executors, each with 32 GB of memory and 4 cores per executor:
My question is, since the above configuration can be accommodated on a single node, will Spark run it using both worker nodes or just one?
Also, if the available cores are not an exact multiple of the cores per executor, how many cores are allocated to each executor?
Example: after excluding one core per node for the YARN daemon, 7 cores are available per node. With 2 nodes, that is 2*7 = 14 total cores, and HDFS gives good throughput when each executor has 5 cores.
So 14/5 gives the number of executors. Should I take 14/5 as 2 or 3 executors? And how are these cores distributed evenly?
It is more of a resource-manager question than a Spark question, but in your case the 2 executors can't run on a single machine because the OS has overhead that uses at least 1 core and 1 GB of RAM, even if you set the memory to 30 GB and 3 cores per executor. They will run on different nodes because Spark tries to get the best data locality it can, so it won't place both executors on the same node.
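On the second part of the question (14 usable cores, ~5 cores per executor), the usual practice is to round down so that every executor gets its full core count; a quick sketch of that arithmetic:

```scala
// 2 nodes x 8 cores, 1 core per node reserved for the OS / YARN daemon.
val totalUsableCores = 2 * (8 - 1)                         // 14
val coresPerExecutor = 5
val numExecutors     = totalUsableCores / coresPerExecutor // 2 (rounded down)
val leftoverCores    = totalUsableCores % coresPerExecutor // 4 stay idle, or shrink coresPerExecutor

println(s"$numExecutors executors x $coresPerExecutor cores, $leftoverCores cores unused")
```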

What happens if I allocate all the available cores on the server to the Spark cluster?

As is well known, it is possible to increase the number of cores when submitting our application. Actually, I'm trying to allocate all the available cores on the server to the Spark application. I'm wondering what will happen to the performance: will it get worse or better than usual?
The first thing that might come to mind about allocating cores (--executor-cores) is that more cores per executor means more parallelism, more tasks executed concurrently, and better performance. But that is not always true in the Spark ecosystem. After leaving 1 core for the OS and other applications running on the worker, studies have shown that it is optimal to allocate about 5 cores per executor.
For example, if you have a worker node with 16 cores, the optimal allocation would be --num-executors 3 and --executor-cores 5 (as 3*5 = 15, leaving 1 core spare).
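For reference, the same 16-core allocation expressed as configuration properties rather than spark-submit flags (a sketch of equivalent settings):

```scala
import org.apache.spark.SparkConf

// 16-core worker: 3 executors x 5 cores, leaving 1 core for the OS.
val conf = new SparkConf()
  .setAppName("core-allocation-example")
  .set("spark.executor.instances", "3") // equivalent to --num-executors 3
  .set("spark.executor.cores", "5")     // equivalent to --executor-cores 5
```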
Optimal resource allocation is not the only thing that determines performance; it also depends on how transformations and actions are performed on the dataframes. More shuffling of data between executors hurts performance.
Your operating system always needs resources for its own basic work.
It is good to keep 1 core and 1 GB of memory for the operating system and other applications.
If you allocate all resources to Spark, performance will not improve; your other applications will starve for resources.
I don't think it is a good idea to allocate all resources to Spark alone.
Follow the post below if you want to tune your Spark cluster:
How to tune spark executor number, cores and executor memory?
