spark executors running multiple copies of the same application - apache-spark

In the spark documentation it says,
Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads. This has the benefit of isolating applications from each other, on both the scheduling side (each driver schedules its own tasks) and executor side (tasks from different applications run in different JVMs). However, it also means that data cannot be shared across different Spark applications (instances of SparkContext) without writing it to an external storage system.
From the phrase
... executor side (tasks from different applications run in different
JVMs)
Does this mean that
If you're running multiple copies (multiple spark-submits) of the same application on a cluster that has many executors, is it possible for a single executor to run tasks that belong to different spark-submits in parallel?
If the above is possible, could singleton objects that are shared between the tasks of an executor cause data collisions between different copies (different spark-submits) of the same application?

Each executor is a separate JVM process and is only ever used by one application, so there is no need to worry about data collisions.
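To make the isolation concrete, here is a minimal sketch (the singleton and its field are made up for illustration): a Scala object referenced inside a task is initialised once per executor JVM, and because each spark-submit gets its own executors, two submissions of the same application never see each other's copy.
    import org.apache.spark.sql.SparkSession

    // Hypothetical singleton used inside tasks; it lives in each executor's JVM.
    object ExpensiveClient {
      // Initialised lazily, once per executor JVM that touches it.
      lazy val id: String = java.util.UUID.randomUUID().toString
    }

    object SingletonDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("singleton-demo").getOrCreate()
        val sc = spark.sparkContext

        // Each task sees the singleton of the executor it runs on. Executors are
        // never shared across spark-submits, so another copy of this application
        // has completely separate ExpensiveClient instances.
        val ids = sc.parallelize(1 to 100, 8).map(_ => ExpensiveClient.id).distinct().collect()
        println(s"Singleton instances seen: ${ids.length}") // roughly one per executor
        spark.stop()
      }
    }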

Related

How does spark.dynamicAllocation.enabled influence the order of jobs?

I need an understanding of when to use spark.dynamicAllocation.enabled - what are the advantages and disadvantages of using it? I have a queue where jobs get submitted.
9:30 AM --> Job A gets submitted with dynamicAllocation enabled.
10:30 AM --> Job B gets submitted with dynamicAllocation enabled.
Note: My data is huge (processing will be done on 10 GB of data with transformations).
Which job gets preference in the allocation of executors, Job A or Job B, and how does Spark coordinate between the two applications?
Dynamic Allocation of Executors is about resizing your pool of executors.
Quoting Dynamic Allocation:
spark.dynamicAllocation.enabled Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload.
And later on in Dynamic Resource Allocation:
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. This means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand. This feature is particularly useful if multiple applications share resources in your Spark cluster.
In other words, job A will usually finish before job B will be executed. Spark jobs are usually executed sequentially, i.e. a job has to finish before another can start.
Usually...
SparkContext is thread-safe and can handle jobs submitted from multiple threads of a Spark application. That means that you may submit jobs at the same time or one after another and, in some configurations, expect these two jobs to run in parallel.
Quoting Scheduling Within an Application:
Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if they were submitted from separate threads. By “job”, in this section, we mean a Spark action (e.g. save, collect) and any tasks that need to run to evaluate that action. Spark’s scheduler is fully thread-safe and supports this use case to enable applications that serve multiple requests (e.g. queries for multiple users).
By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into “stages” (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc.
it is also possible to configure fair sharing between jobs. Under fair sharing, Spark assigns tasks between jobs in a “round robin” fashion, so that all jobs get a roughly equal share of cluster resources. This means that short jobs submitted while a long job is running can start receiving resources right away and still get good response times, without waiting for the long job to finish. This mode is best for multi-user settings.
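As a rough sketch of "submitted from separate threads" (job bodies are made up, error handling omitted): two actions fired from different threads of the same SparkContext become two concurrent Spark jobs, scheduled FIFO by default or round-robin if spark.scheduler.mode is set to FAIR.
    import org.apache.spark.sql.SparkSession
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    val spark = SparkSession.builder().appName("parallel-jobs-demo").getOrCreate()
    val sc = spark.sparkContext

    // Two independent actions submitted from separate threads -> two concurrent jobs.
    val jobA = Future { sc.parallelize(1 to 1000000, 100).map(_ * 2).count() }
    val jobB = Future { sc.parallelize(1 to 1000, 10).sum() }

    println(Await.result(jobA, 10.minutes))
    println(Await.result(jobB, 10.minutes))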
Wrapping up...
Which job gets preference in the allocation of executors, Job A or Job B, and how does Spark coordinate between the two applications?
Job A.
Unless you have enabled Fair Scheduler Pools:
The fair scheduler also supports grouping jobs into pools, and setting different scheduling options (e.g. weight) for each pool. This can be useful to create a “high-priority” pool for more important jobs, for example, or to group the jobs of each user together and give users equal shares regardless of how many concurrent jobs they have instead of giving jobs equal shares.
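Coming back to dynamic allocation itself: it is enabled purely through configuration. A minimal sketch with illustrative values (on YARN the external shuffle service must also be enabled so shuffle files survive executor removal):
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Illustrative settings only; tune min/max for your cluster.
    val conf = new SparkConf()
      .setAppName("dynamic-allocation-demo")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.dynamicAllocation.minExecutors", "1")
      .set("spark.dynamicAllocation.maxExecutors", "20")
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
      .set("spark.shuffle.service.enabled", "true")

    val spark = SparkSession.builder().config(conf).getOrCreate()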

Can the task slots in one executor be shared by tasks from different Spark applications?

I have read some literature about Spark task scheduling and found that some papers mention that an executor is monopolized by only one application at any given moment.
So I am wondering whether the task slots in one executor can be shared by different Spark applications at the same time.

Does the master node execute actual tasks in Spark?

My question may sound silly, but it has bothered me for a long time.
The picture above shows the components of a distributed Spark application. I think this picture indicates that the master node never executes actual tasks but only serves as a cluster manager. Is that true?
By the way, "tasks" here refers to user-submitted tasks.
Yes, the master node executes the driver process and does not run tasks. Tasks run in executor processes on the worker nodes. The master node is rarely stressed from a CPU standpoint but, depending on how broadcast variables, accumulators and collect are used, it may be quite stressed in terms of RAM usage.
To explain a bit more on the different roles:
The driver prepares the context and declares the operations on the data using RDD transformations and actions.
The driver submits the serialized RDD graph to the master. The master creates tasks out of it and submits them to the workers for execution. It coordinates the different job stages.
The workers are where the tasks are actually executed. They should have the resources and network connectivity required to execute the operations requested on the RDDs.
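A small sketch of that split (the input path is made up): the closure passed to map is shipped to and executed by the executors on the workers, while collect() pulls every element back into the driver's memory, which is why a careless collect can stress the driver's RAM.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("roles-demo").getOrCreate()
    val sc = spark.sparkContext                   // runs in the driver process

    val rdd = sc.textFile("hdfs:///data/input")   // hypothetical path
    val lengths = rdd.map(_.length)               // this closure runs in executor JVMs on the workers

    val total = lengths.reduce(_ + _)             // tasks run on executors; only the final sum reaches the driver
    val everything = lengths.collect()            // by contrast, this copies every element into driver memory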

Does Spark's Fair Scheduler pool provide inter- or intra-application scheduling?

I am quite confused, because these pools get created for each Spark application, and even if I set a pool's minShare greater than the total cores of the cluster, the pool still gets created.
So if these pools are intra-application, do I need to assign different pools to different Spark jobs manually? Because if I use sparkContext.setLocalProperty to set the pool, then all the stages of that application go to that pool.
The point is: can jobs from two different applications go into the same pool? If I have application a1 and use sparkContext.setLocalProperty(pool, p1), and another application a2 that also uses sparkContext.setLocalProperty(pool, p1), will jobs from both applications go to the same pool p1, or is p1 for a1 different from p1 for a2?
As described in Spark's official documentation in Scheduling Within an Application:
Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if they were submitted from separate threads.
and later in the same document:
Starting in Spark 0.8, it is also possible to configure fair sharing between jobs. Under fair sharing, Spark assigns tasks between jobs in a “round robin” fashion, so that all jobs get a roughly equal share of cluster resources. This means that short jobs submitted while a long job is running can start receiving resources right away and still get good response times, without waiting for the long job to finish. This mode is best for multi-user settings.
With that, the scheduling happens within the resources given to a Spark application; how much it gets depends on the CPUs/vcores and memory available from the cluster manager.
The Fair Scheduler mode is essentially intra-application, i.e. for Spark applications that run parallel jobs, so pools defined in one SparkContext are invisible to another: p1 in application a1 is not the same pool as p1 in application a2.
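A minimal sketch of how pools are used inside one application (pool names and job bodies are made up): FAIR mode is set on the SparkConf, each job-submitting thread picks its pool with setLocalProperty, and because pools live inside that SparkContext, a pool named p1 in application a1 has nothing to do with a pool named p1 in application a2.
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    val conf = new SparkConf()
      .setAppName("fair-pools-demo")
      .set("spark.scheduler.mode", "FAIR")
      // Optionally point at a fairscheduler.xml defining pools, weights and minShare:
      // .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")

    val spark = SparkSession.builder().config(conf).getOrCreate()
    val sc = spark.sparkContext

    // The pool is a thread-local property, so set it in each job-submitting thread.
    val shortJob = Future {
      sc.setLocalProperty("spark.scheduler.pool", "p1")
      sc.parallelize(1 to 1000, 10).count()
    }
    val longJob = Future {
      sc.setLocalProperty("spark.scheduler.pool", "p2")
      sc.parallelize(1 to 10000000, 100).map(_ % 7).countByValue()
    }

    println(Await.result(shortJob, 10.minutes))
    println(Await.result(longJob, 30.minutes))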

What is the relationship between workers, worker instances, and executors?

In Spark Standalone mode, there are master and worker nodes.
Here are few questions:
Do 2 worker instances mean one worker node with 2 worker processes?
Does every worker instance hold an executor for a specific application (which manages storage and tasks), or does one worker node hold one executor?
Is there a flow chart explaining how Spark works at runtime, for example for word count?
Extending the other great answers, I would like to describe it with a few images.
In Spark Standalone mode, there are a master node and worker nodes.
The first image represents the master and the workers (each worker can have multiple executors if CPU and memory are available) in one place for standalone mode.
If you are curious about how Spark works with YARN, check this post: Spark on YARN.
1. Do two worker instances mean one worker node with two worker processes?
In general, a worker instance is also called a slave, as it is a process that executes Spark tasks/jobs. The suggested mapping between a node (a physical or virtual machine) and a worker is:
1 Node = 1 Worker process
2. Does every worker instance hold an executor for a specific application (which manages storage and tasks), or does one worker node hold one executor?
Yes, a worker node can hold multiple executors (processes) if it has sufficient CPU, memory and storage.
Check the Worker node in the given image.
BTW, the number of executors on a worker node at a given point in time depends entirely on the workload on the cluster and the node's capacity to run executors.
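As an illustration (all numbers are arbitrary): in standalone mode the count falls out of per-executor sizing versus the worker's resources, e.g. a 16-core / 64 GB worker given the settings below could host up to four executors of this one application (4 cores and 14 GB each).
    import org.apache.spark.SparkConf

    // Arbitrary example sizing for a standalone cluster.
    val conf = new SparkConf()
      .setAppName("executor-sizing-demo")
      .set("spark.executor.cores", "4")     // cores per executor
      .set("spark.executor.memory", "14g")  // heap per executor (leave headroom for overhead)
      .set("spark.cores.max", "32")         // cap on the total cores this application may take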
3. Is there a flow chart explaining how Spark works at runtime?
If we look at the execution from Spark's perspective, over any resource manager, for a program that joins two RDDs, does a reduce operation and then a filter:
HIH
I suggest reading the Spark cluster docs first, but even more so this Cloudera blog post explaining these modes.
Your first question depends on what you mean by 'instances'. A node is a machine, and there's not a good reason to run more than one worker per machine. So two worker nodes typically mean two machines, each a Spark worker.
Workers hold many executors, for many applications. One application has executors on many workers.
Your third question is not clear.
I know this is an old question and Sean's answer was excellent. My writeup is about the SPARK_WORKER_INSTANCES in MrQuestion's comment. If you use Mesos or YARN as your cluster manager, you are able to run multiple executors on the same machine with one worker, so there is really no need to run multiple workers per machine. However, if you use the standalone cluster manager, it currently still only allows one executor per worker process on each physical machine. Thus, if you have a super large machine and would like to run multiple executors on it, you have to start more than one worker process. That's what SPARK_WORKER_INSTANCES in spark-env.sh is for. The default value is 1. If you do use this setting, make sure you set SPARK_WORKER_CORES explicitly to limit the cores per worker, or else each worker will try to use all the cores.
This standalone cluster manager limitation should go away soon. According to this SPARK-1706, this issue will be fixed and released in Spark 1.4.
As Lan was saying, the use of multiple worker instances is only relevant in standalone mode. There are two reasons why you might want multiple instances: (1) garbage collector pauses can hurt throughput for large JVMs, and (2) heap sizes greater than 32 GB can't use CompressedOops.
Read more about how to set up multiple worker instances.
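For completeness, a sketch of the spark-env.sh settings discussed above (values are arbitrary): on a large standalone machine you raise SPARK_WORKER_INSTANCES and cap SPARK_WORKER_CORES and SPARK_WORKER_MEMORY so the worker processes don't all try to grab the whole node.
    # conf/spark-env.sh on each standalone worker machine (illustrative values)
    SPARK_WORKER_INSTANCES=3     # run three worker processes on this machine
    SPARK_WORKER_CORES=8         # cores each worker process may hand out to executors
    SPARK_WORKER_MEMORY=24g      # memory each worker process may hand out to executors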
