Spark Dynamic Resource Allocation in a standalone cluster - apache-spark

I have a question/problem regarding dynamic resource allocation.
I am using Spark 1.6.2 with the standalone cluster manager.
I have one worker with 2 cores.
I set the following properties in the spark-defaults.conf file on all my nodes:
spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
spark.deploy.defaultCores 1
I run a sample application with many tasks.
I open port 4040 on the driver and I can verify that the above configuration exists.
My problem is that no matter what I do, my application only gets 1 core even though the other core is available.
Is this normal or do I have a problem in my configuration?
The behaviour I want is this:
I have many users working with the same Spark cluster.
I want each application to get a fixed number of cores unless the rest of the cluster is idle.
In that case I want the running applications to get the total number of cores until a new application arrives...
Do I have to switch to Mesos for this?
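For reference, here is the same setup expressed programmatically as a minimal Scala sketch; the dynamic-allocation property names are the standard ones from the Spark docs, and the min/max executor values are only an illustration of the behaviour I am after, not something I have verified:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("dynamic-allocation-sketch")
  // same settings as in spark-defaults.conf above
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")
  // baseline this application should always keep
  .set("spark.dynamicAllocation.minExecutors", "1")
  // allow growing into idle capacity, up to a cap
  .set("spark.dynamicAllocation.maxExecutors", "2")

val sc = new SparkContext(conf)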

Related

How to start multiple Spark workers on one machine in Spark 2.4?

I am trying to set up a small Spark cluster on my local Mac machine: one master and two or more workers. In the Spark 2.0.0 docs there is a property SPARK_WORKER_INSTANCES, which states:
Number of worker instances to run on each machine (default: 1). You
can make this more than 1 if you have very large machines and
would like multiple Spark worker processes. If you do set this, make
sure to also set SPARK_WORKER_CORES explicitly to limit the cores per
worker, or else each worker will try to use all the cores.
However, this same property is missing from the Spark 2.4 documentation.
The Spark 2.0.0 doc link you provided points to 2.0.0-preview, not to 2.0.0, where the property is also missing.
It was removed from the documentation as per the Jira issue SPARK-15781 and the corresponding GitHub PR:
Like SPARK_JAVA_OPTS and SPARK_CLASSPATH, we will remove the document
for SPARK_WORKER_INSTANCES to discourage user not to use them. If they
are actually used, SparkConf will show a warning message as before.
You can also read this in the migration guide, Upgrading from Core 2.4 to 3.0:
SPARK_WORKER_INSTANCES is deprecated in Standalone mode. It’s
recommended to launch multiple executors in one worker and launch one
worker per node instead of launching multiple workers per node and
launching one executor per worker.
And it will be removed in future versions: Remove multiple workers on the same host support from Standalone backend.
I think its main purpose was testing Spark on laptops, but I wasn't able to find a doc that confirms this; in practice, it makes little sense to run multiple workers per node.
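To make that recommendation concrete, here is a hedged Scala sketch (the master URL and all values are illustrative, not taken from the question): in standalone mode, setting spark.executor.cores explicitly lets a single worker launch several executors for the same application when it has enough cores and memory, which gives the "multiple executors in one worker" layout the migration guide suggests.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("multi-executor-sketch")
  .master("spark://localhost:7077")       // assumed standalone master URL
  .config("spark.executor.cores", "2")    // cores per executor
  .config("spark.executor.memory", "2g")  // memory per executor
  .config("spark.cores.max", "8")         // up to 8 / 2 = 4 executors on one worker
  .getOrCreate()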

How SPARK_WORKER_CORES setting impacts concurrency in Spark Standalone

I am using a Spark 2.2.0 cluster configured in standalone mode. The cluster has 2 octa-core machines. This cluster is exclusively for Spark jobs and no other process uses them. I have around 8 Spark Streaming apps which run on this cluster.
I explicitly set SPARK_WORKER_CORES (in spark-env.sh) to 8 and allocate one core to each app using the total-executor-cores setting. This config reduces the capability to work in parallel on multiple tasks. If a stage works on a partitioned RDD with 200 partitions, only one task executes at a time.
What I wanted Spark to do was to start a separate thread for each job and process them in parallel. But I couldn't find a separate Spark setting to control the number of threads.
So, I decided to play around and bloated the number of cores (i.e. SPARK_WORKER_CORES in spark-env.sh) to 1000 on each machine. Then I gave 100 cores to each Spark application. I found that Spark started processing 100 partitions in parallel this time, indicating that 100 threads were being used.
I am not sure if this is the correct method of impacting the number of threads used by a Spark job.
You mixed up two things:
Cluster manager properties - SPARK_WORKER_CORES - the total number of cores that a worker can offer. Use it to control the fraction of resources that should be used by Spark in total.
Application properties - --total-executor-cores / spark.cores.max - the number of cores that the application requests from the cluster manager. Use it to control in-app parallelism.
Only the second one is directly responsible for app parallelism, as long as the first one is not a limiting factor.
Also, a core in Spark is a synonym for a thread. If you:
allocate one core to each app using total-executor-cores setting.
then you specifically assign a single data processing thread.
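As a rough sketch of the two levels (the spark-env.sh value and the requested core count below are assumptions for illustration): SPARK_WORKER_CORES is what the worker offers, spark.cores.max (or --total-executor-cores) is what one application asks for, and each granted core becomes one concurrently running task thread.

import org.apache.spark.{SparkConf, SparkContext}

// Cluster side (spark-env.sh, not Scala):  SPARK_WORKER_CORES=8
// Application side: request 8 of those cores, i.e. 8 task threads.
val conf = new SparkConf()
  .setAppName("parallelism-sketch")
  .set("spark.cores.max", "8")   // same effect as --total-executor-cores 8

val sc = new SparkContext(conf)

// With 8 cores granted, up to 8 of this stage's 200 tasks run at the same time.
sc.parallelize(1 to 200, 200).map(_ * 2).count()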

How does spark dynamic resource allocation work on YARN (with regards to NodeManagers)?

Let's assume that I have 4 NMs and I have configured Spark in yarn-client mode. Then, I set dynamic allocation to true to automatically add or remove an executor based on the workload. If I understand correctly, each Spark executor runs as a YARN container.
So, if I add more NMs, will the number of executors increase?
If I remove an NM while a Spark application is running, will something happen to that application?
Can I add/remove executors based on other metrics? If the answer is yes, is there a function, preferably in Python, that does that?
If I understand correctly, each Spark executor runs as a YARN container.
Yes. That's how it happens for any application deployed to YARN, Spark included. Spark is not in any way special to YARN.
So, if I add more NMs, will the number of executors increase?
No. There's no relationship between the number of YARN NodeManagers and Spark's executors.
From Dynamic Resource Allocation:
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. This means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand.
As you may have guessed by now, it is irrelevant how many NMs you have in your cluster; it is the workload that determines when Spark decides whether to request new executors or remove some.
If I remove an NM while a Spark application is running, will something happen to that application?
Yes, but only if Spark uses that NM for executors. After all, a NodeManager gives resources (CPU and memory) to the YARN cluster manager, which in turn gives them to applications like Spark applications. If you take them back, say by shutting the node down, the resources won't be available anymore and the Spark executor's process simply dies (like any other process with no resources to run).
Can I add/remove executors based on other metrics?
Yes, but usually it's Spark's job (no pun intended) to do the calculation and request new executors.
You can also manage executors yourself through SparkContext, using the killExecutor, requestExecutors and requestTotalExecutors methods (a usage sketch follows the list):
killExecutor(executorId: String): Boolean Request that the cluster manager kill the specified executor.
requestExecutors(numAdditionalExecutors: Int): Boolean Request an additional number of executors from the cluster manager.
requestTotalExecutors(numExecutors: Int, localityAwareTasks: Int, hostToLocalTaskCount: Map[String, Int]): Boolean Update the cluster manager on our scheduling needs.
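A short usage sketch, assuming an already created SparkContext named sc on a cluster manager that supports these requests; they are developer-API calls, and the Boolean return value only says the request was acknowledged, not that it succeeded.

// Ask the cluster manager for 2 additional executors.
val addedOk: Boolean = sc.requestExecutors(2)

// Resize to an absolute target instead of a delta.
val resizedOk: Boolean = sc.requestTotalExecutors(
  numExecutors = 4,
  localityAwareTasks = 0,
  hostToLocalTaskCount = Map.empty[String, Int])

// Drop a specific executor by its id (ids are visible in the web UI on port 4040).
val killedOk: Boolean = sc.killExecutor("3")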

Spark resource scheduling - Standalone cluster manager

I have a pretty low-configuration testing machine for my data pipelines developed in Spark. I will use only one AWS t2.large instance, which has only 2 CPUs and 8 GB of RAM.
I need to run 2 Spark Streaming jobs, as well as leave some memory and CPU power for occasionally testing batch jobs.
So I have a master and one worker, which are on the same machine.
I have some general questions:
1) How many executors can run per worker? I know that the default is one, but does it make sense to change this?
2) Can one executor execute multiple applications, or is one executor dedicated to only one application?
3) Is there a way to make this work by setting the memory an application can use in a configuration file, or when I create the Spark context?
Thank you
How many executors can run per worker? I know that the default is one, but does it make sense to change this?
It makes sense only if you have enough resources. Say, on a machine with 24 GB and 12 cores it's possible to run 3 executors if you're sure that 8 GB is enough for one executor.
Can one executor execute multiple applications, or is one executor dedicated to only one application?
Nope, every application starts its own executors.
Is there a way to make this work by setting the memory an application can use in a configuration file, or when I create the Spark context?
I'm not sure I understand the question, but there are 3 ways to provide configuration for applications (a short sketch follows the list):
the spark-defaults.conf file, but don't forget to enable loading the default properties when you create a new SparkConf instance.
providing system properties through -D when you run the application, or --conf if that's spark-submit or spark-shell. Note that for memory options there are specific parameters to use, like spark.executor.memory or spark.driver.memory, among others.
providing the same options through a new SparkConf instance using its set methods.
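A minimal sketch of these options, with the first two shown only as comments; the property values are placeholders.

// 1) spark-defaults.conf on the node:   spark.executor.memory  1g
// 2) at launch time:                     spark-submit --conf spark.executor.memory=1g ...
// 3) programmatically, via SparkConf:
import org.apache.spark.{SparkConf, SparkContext}

// loadDefaults = true (the default) also picks up spark.* system properties (-D...).
val conf = new SparkConf(loadDefaults = true)
  .setAppName("config-sketch")
  .set("spark.executor.memory", "1g")
  .set("spark.driver.memory", "1g")

val sc = new SparkContext(conf)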

Resource allocation with Apache Spark and Mesos

So I've deployed a cluster with Apache Mesos and Apache Spark and I have several jobs that I need to execute on the cluster. I would like to be sure that a job has enough resources to be executed successfully, and if that is not the case, it must return an error.
Spark provides several settings, like spark.cores.max and spark.executor.memory, to limit the resources used by a job, but there is no setting for a lower bound (e.g. setting the minimum number of cores to 8 for the job).
I'm looking for a way to be sure that a job has enough resources before it is executed (during the resource allocation, for instance). Do you know if it is possible to get this information with Apache Spark on Mesos?
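For reference, a small Scala sketch of the upper-bound settings mentioned above (the values are placeholders); as noted, there is no analogous property that would guarantee a minimum.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("mesos-caps-sketch")
  .set("spark.cores.max", "8")          // upper bound on cores across the cluster
  .set("spark.executor.memory", "4g")   // upper bound on memory per executor

val sc = new SparkContext(conf)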
