Difference between executor and container in Spark - apache-spark

I am trying to clearly understand how memory allocation happens in a YARN-managed cluster. I understand that there are a bunch of executors (each executor having its own JVM) and that one executor can have one or more vcores during execution.
I am trying to tie this understanding to the YARN configuration, where things are segregated into containers. Each container is actually a combination of some vcores and a fraction of memory.
Can someone confirm whether one executor gets one container, or whether one executor can have more than one container? I read some Cloudera documentation on YARN memory management, and it appears to say that a container has an executor allocated to it.
Cloudera Memory Management

A Spark executor runs within a YARN container, not across containers.
A YARN container is provided by the YARN ResourceManager on demand: at the start of the Spark application, or via YARN dynamic resource allocation.
A YARN container can have only one Spark executor, but one or indeed more cores can be assigned to that executor.
Each Spark executor, and the driver, runs as part of its own YARN container.
Executors run on a given worker.
Moreover, everything happens in the context of an application, and as such an application has executors on many workers.
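If you want to see this one-executor-per-container mapping on a live cluster, the YARN CLI can list the containers of a running application. A minimal sketch; the application attempt ID below is a placeholder you would take from your own cluster:

# List running YARN applications to find your Spark application's ID.
yarn application -list

# List the containers of a given application attempt; for a Spark app you
# should see one container per executor, plus one for the application master.
yarn container -list appattempt_1234567890123_0001_000001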

When running Spark on YARN, each Spark executor runs as a YARN container.

Related

Do executors in Spark and the application master in YARN do the same job?

Do executors in Spark and the application master in YARN do the same job?
In Spark, there is a driver and there are executors. I'm not going to go into detail about what the driver and executors are, but as a one-liner: the driver manages the job flow and schedules tasks, and executors are worker-node processes in charge of running individual tasks.
YARN is basically a resource manager that allocates resources (memory and vcores) to compute engines; the compute engine can be Spark, Tez, or MapReduce. What you need to understand here is that when YARN successfully allocates resources, the allocated units are called containers.
Now, when a Spark job is deployed on YARN (assuming YARN has sufficient resources for the job to run), YARN first allocates a container for the Spark application master, which hosts the driver program in cluster mode. This application master then requests resources for the Spark executors, which YARN allocates as further containers. So a Spark job will have multiple containers: one for the driver program and n containers for n executors. In the computing sense, then, the fundamental difference between Spark running on a standalone Spark cluster and Spark running on YARN is the use of containers.
So executors and the application master in YARN run inside containers, and they do the same work as they would on a standalone Spark cluster.
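As a concrete sketch of the container arithmetic in cluster mode (the class name and jar are placeholders, not from the original question): a submission requesting N executors results in N + 1 containers, the extra one being the application master, which also hosts the driver.

# Cluster mode: the driver runs inside the application master's container.
# With --num-executors 4, YARN allocates 5 containers in total:
#   1 for the AM (which hosts the driver) + 4 for executors.
# com.example.MyApp and myapp.jar are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --class com.example.MyApp \
  myapp.jar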

Does YARN allocate one container for the application master out of the number of executors that we pass in our spark-submit command?

Let's assume that I am submitting a Spark application in yarn-client mode. In spark-submit I am passing --num-executors as 10. When the client submits this Spark application to the ResourceManager:
Does the ResourceManager allocate one container for the application master process out of the --num-executors (10), so that the remaining 9 are given to actual executors?
Or
Does it allocate one new container for the application master, giving all 10 containers to executors alone?
--num-executors requests that number of executors from a cluster manager (which may be Hadoop YARN). That's purely Spark's requirement.
An application master (of a YARN application) is purely a YARN concept.
A Spark application can also be a YARN application, and in that case the Spark application gets 10 containers for executors plus one extra container for the AM.
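A sketch of the scenario in the question (client mode; class and jar are placeholders): all 10 requested containers go to executors, the AM gets its own, eleventh container, and the driver runs in the client JVM rather than inside any container.

# Client mode: the driver runs in the spark-submit JVM on the client machine.
# YARN still starts an application master in its own container, so this
# submission uses 11 containers: 10 for executors + 1 for the AM.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 10 \
  --class com.example.MyApp \
  myapp.jar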

How does Spark choose the nodes on which to run executors? (Spark on YARN)

How does Spark choose the nodes on which to run executors? (Spark on YARN)
We use Spark in YARN mode, with a cluster of 120 nodes.
Yesterday one Spark job created 200 executors, with 11 executors on node1,
10 executors on node2, and the other executors distributed evenly across the remaining nodes.
Since there were so many executors on node1 and node2, the job ran slowly.
How does Spark select the nodes on which to run executors?
According to the YARN ResourceManager?
Since you mention Spark on YARN:
YARN chooses executor nodes for a Spark job based on the availability of cluster resources. Look into YARN's queue system and dynamic allocation; a good reference is https://blog.cloudera.com/blog/2016/01/untangling-apache-hadoop-yarn-part-3/
The cluster manager allocates resources across all the running applications.
I think the issue is a badly optimized configuration. You should configure Spark for dynamic allocation; in that case Spark will analyze cluster resources and adjust the executor count to optimize the work.
You can find all the information about Spark resource allocation and how to configure it here: http://site.clairvoyantsoft.com/understanding-resource-allocation-configurations-spark-application/
Do all 120 nodes have identical capacity?
Moreover, containers are placed on a suitable NodeManager based on its health and resource availability.
To optimize a Spark job, you can use dynamic resource allocation, where you do not need to define the number of executors required to run the job. By default the application starts with the configured minimum CPU and memory; it then acquires more resources from the cluster as needed to execute tasks. Once the job has completed, or once an executor has been idle for the configured idle-timeout value, the resources are released back to the cluster manager, and they are reclaimed from the cluster when work starts again. A sketch of such a submission follows.
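A hedged sketch of a dynamic-allocation submission (the property values are illustrative, not recommendations; class and jar are placeholders). On YARN, dynamic allocation traditionally also requires the external shuffle service so that executors can be removed without losing their shuffle data.

# Dynamic allocation: Spark grows and shrinks the executor pool between the
# configured min and max, releasing executors idle longer than the timeout.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  --conf spark.shuffle.service.enabled=true \
  --class com.example.MyApp \
  myapp.jar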

What is the relationship between a Spark executor and a YARN container when using Spark on YARN?

What is the relationship between a Spark executor and a YARN container when using Spark on YARN?
For example, when I set executor-memory = 20G and the YARN container memory to 10G, does 1 executor contain 2 containers?
A Spark executor runs within a YARN container, and a YARN container is provided by the ResourceManager on demand. A YARN container holds exactly one Spark executor, though that executor may be assigned several cores.
Spark executors are the processes that run the tasks.
A Spark executor is started on a worker node (typically a DataNode).
In your case, when you set executor-memory = 20G, you are asking for a container of (at least) 20 GB in which one executor will run; each executor gets its own such container. An executor is never split across two containers, so if YARN's per-container ceiling is 10G, a 20G executor simply cannot be allocated.
So, for example, if you run one such executor on each of 8 nodes, that is 8 * 20 GB = 160 GB of total memory for your job.
Below are the 3 config options available in yarn-site.xml that you can play around with to see the differences (a sample snippet follows the list).
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
yarn.nodemanager.resource.memory-mb
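For illustration, here is a yarn-site.xml fragment setting those three properties. The values are examples only, sized for a hypothetical 64 GB worker with room left for the OS and daemons: container requests are typically rounded up to a multiple of the minimum allocation, capped by the maximum allocation, and all containers on a node must fit within the NodeManager's advertised memory.

<!-- Example values only; tune for your own hardware. -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>   <!-- smallest container YARN will hand out -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>  <!-- largest single container allowed -->
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>57344</value>  <!-- total memory one NodeManager offers to containers -->
</property>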
When running Spark on YARN, each Spark executor runs as a YARN container. This means the number of containers will always be the same as the number of executors created by a Spark application, e.g. via the --num-executors parameter in spark-submit.
https://stackoverflow.com/a/38348175/9605741
In YARN mode, each executor runs in one container. The number of executors is the same as the number of containers allocated from YARN (except in cluster mode, where one more container is allocated to run the driver).
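One nuance worth adding to the 20G question above: the container YARN allocates for an executor must hold the executor heap plus off-heap overhead, and the whole request must fit under yarn.scheduler.maximum-allocation-mb, or Spark refuses the submission up front. A sketch with illustrative numbers (class and jar are placeholders):

# Container size requested per executor (approximate, default settings):
#   spark.executor.memory + spark.executor.memoryOverhead
# where the overhead defaults to max(384 MiB, 0.10 * executor memory).
#
# So --executor-memory 20g asks YARN for roughly 20g + 2g = 22g per container.
# If yarn.scheduler.maximum-allocation-mb is 10240 (10 GB), this submission
# fails at startup; YARN never splits one executor across two containers.
spark-submit \
  --master yarn \
  --executor-memory 20g \
  --class com.example.MyApp \
  myapp.jar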

How many YARN containers can each client-submitted application launch on each NodeManager?

A container is an abstract notion in YARN. When running Spark on YARN, each Spark executor runs as a YARN container. How many YARN containers can be launched on each NodeManager by each client-submitted application?
You can run as many executors on a single NodeManager as you want, so long as you have the resources. If you have a server with 20 GB of RAM and 10 cores, you can run ten 2 GB, 1-core executors on that NodeManager. It wouldn't be advisable to run many executors on the same NodeManager, though, as there is an overhead cost in shuffling data between executors even when the processes are running on the same machine.
Each executor runs in a YARN container.
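A sketch of the 20 GB / 10-core NodeManager example above (class and jar are placeholders). Note that each executor's container also includes memory overhead, so ten full 2 GB heaps would slightly overshoot 20 GB in practice; the numbers just illustrate the packing.

# Requests 10 executors of 1 core and 2 GB heap each. Whether they all land
# on one NodeManager depends on where YARN finds free resources.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-cores 1 \
  --executor-memory 2g \
  --class com.example.MyApp \
  myapp.jar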
Depending on how big your YARN cluster is, how your data is spread among the worker nodes (for better data locality), how many executors you requested for your application, how much resource you requested per executor (cores and memory), and whether you have enabled dynamic resource allocation, Spark decides how many executors are needed in total and how many executors to launch per worker node.
If you request resources that the YARN cluster cannot accommodate, your request will be rejected.
The following are the properties to look out for when making a spark-submit request (a combined example follows the list).
--num-executors - total number of executors you need
--executor-cores - number of cores per executor; a maximum of 5 is commonly recommended
--executor-memory - amount of memory per executor
--conf spark.dynamicAllocation.enabled=true
--conf spark.dynamicAllocation.maxExecutors=<n>
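Putting the listed properties together in one hedged example (values illustrative; class and jar are placeholders). When dynamic allocation is enabled, a --num-executors value larger than the configured minimum is used as the initial executor count, and spark.dynamicAllocation.maxExecutors caps how far Spark can scale out.

# Starts around 10 executors of 4 cores / 8 GB each, scaling up to 40
# as the task backlog demands.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.maxExecutors=40 \
  --class com.example.MyApp \
  myapp.jar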
