Limits or Controls on Spark Concurrent Applications - apache-spark

I have a Spark cluster managed by YARN with 200 GB RAM and 72 vCPUs.
I have a number of small PySpark applications that perform Spark Streaming tasks. These applications are long-running, with each micro-batch taking between 1 and 30 minutes.
However, I can only run 7 applications stably. When I try to run an 8th application, all jobs start restarting frequently.
When 8 jobs are running, total resource consumption is only about 120 GB and roughly 30 CPUs.
Can someone explain why the jobs become unstable even though a large amount of memory (about 80 GB) is still free?
BTW, about 16 GB of RAM is reserved for the OS.

Related

How SPARK_WORKER_CORES setting impacts concurrency in Spark Standalone

I am using a Spark 2.2.0 cluster configured in Standalone mode. The cluster has 2 octa-core machines and is used exclusively for Spark jobs; no other process uses them. I have around 8 Spark Streaming apps which run on this cluster.
I explicitly set SPARK_WORKER_CORES (in spark-env.sh) to 8 and allocate one core to each app using the total-executor-cores setting. This config reduces the capability to work on multiple tasks in parallel: if a stage works on a partitioned RDD with 200 partitions, only one task executes at a time. What I wanted Spark to do was to start a separate thread for each job and process them in parallel, but I couldn't find a separate Spark setting to control the number of threads.
So I decided to play around and bloated the number of cores (i.e. SPARK_WORKER_CORES in spark-env.sh) to 1000 on each machine, then gave 100 cores to each Spark application. I found that Spark started processing 100 partitions in parallel this time, indicating that 100 threads were being used. I am not sure whether this is the correct method of influencing the number of threads used by a Spark job.
You mixed up two things:
Cluster manager properties: SPARK_WORKER_CORES - the total number of cores a worker can offer. Use it to control the fraction of each machine's resources that Spark should use in total.
Application properties: --total-executor-cores / spark.cores.max - the number of cores the application requests from the cluster manager. Use it to control in-app parallelism.
Only the second one is directly responsible for app parallelism, as long as the first one is not the limiting factor.
Also, a core in Spark is a synonym for a thread. If you:
allocate one core to each app using total-executor-cores setting.
then you specifically assign a single data processing thread.
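For illustration, here is a minimal sketch of where each knob lives (the master URL, app name, and numbers are placeholders, not taken from the posts above):

```python
from pyspark.sql import SparkSession

# Application-side request: how many cores (threads) THIS app asks the cluster
# manager for. Equivalent to --total-executor-cores on spark-submit.
spark = (
    SparkSession.builder
    .master("spark://master-host:7077")   # placeholder standalone master URL
    .appName("streaming-app")             # illustrative app name
    .config("spark.cores.max", "8")       # in-app parallelism cap
    .getOrCreate()
)

# SPARK_WORKER_CORES, by contrast, is set in spark-env.sh on each worker and only
# limits what that worker can offer in total; raising it does not by itself give
# any single application more task threads.
```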

Why is the number of cores for driver and executors on YARN different from the number requested?

I deploy a Spark job in cluster mode with the following:
Driver core - 1
Executor cores - 2
Number of executors - 2.
My understanding is that this application should occupy 5 cores in the cluster (4 executor cores and 1 driver core), but I don't observe this in the RM and Spark UIs.
On the Resource Manager UI, I see only 4 cores used for this application.
Even in the Spark UI (opened via the ApplicationMaster URL from the RM), under the Executors tab, the driver's core count is shown as zero.
Am I missing something?
The cluster manager is YARN.
My understanding is that this application should occupy 5 cores in the cluster (4 executor cores and 1 driver core)
That's the perfect situation, in which YARN could give you all 5 cores from the CPUs it manages.
but i dont observe this in the RM and Spark UIs.
Since that perfect situation does not occur often, it is sometimes better to take as many cores as YARN can currently offer so that the Spark application can start at all.
Spark could just wait indefinitely for the requested cores, but that would not always be to your liking, would it?
That's why Spark on YARN has an extra check (aka minRegisteredRatio): by default, at least 80% of the requested resources must have registered before the application starts executing tasks. You can use the spark.scheduler.minRegisteredResourcesRatio Spark property to control the ratio. That would explain why you see fewer cores in use than requested.
Quoting the official Spark documentation (highlighting mine):
spark.scheduler.minRegisteredResourcesRatio
0.8 for YARN mode
The minimum ratio of registered resources (registered resources / total expected resources) (resources are executors in yarn mode, CPU cores in standalone mode and Mesos coarse-grained mode ['spark.cores.max' value is total expected resources for Mesos coarse-grained mode]) to wait for before scheduling begins. Specified as a double between 0.0 and 1.0. Regardless of whether the minimum ratio of resources has been reached, the maximum amount of time it will wait before scheduling begins is controlled by config spark.scheduler.maxRegisteredResourcesWaitingTime.
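A minimal sketch of how that ratio (and the companion timeout mentioned in the quote) could be tuned from the application side; the values and app name are illustrative, and on YARN these are commonly passed as --conf options to spark-submit instead:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("yarn-app")  # illustrative name
    # Begin scheduling tasks once 50% of the requested resources have registered...
    .config("spark.scheduler.minRegisteredResourcesRatio", "0.5")
    # ...or after 30 seconds at the latest, whichever comes first.
    .config("spark.scheduler.maxRegisteredResourcesWaitingTime", "30s")
    .getOrCreate()
)
```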

DC/OS SMACK cluster resource management

I am trying to set up a DC/OS Spark-Kafka-Cassandra cluster using 1 master and 3 private AWS m3.xlarge instances (each with 4 processors and 15 GB RAM).
I have some questions about strange behaviour I ran into in a spike I did several days ago.
On each of the private nodes I have the following fixed resources reserved (I am talking about CPU usage; memory is not the issue):
0.5 CPUs for Cassandra on each node
0.3 - 0.5 CPUs for Kafka on each node
0.5 CPUs of Mesos overhead (in the DC/OS UI I simply see 0.5 CPUs more occupied than the sum of all the services running on a node, so this is probably some sort of Mesos overhead)
the rest of the resources, around 2.5 CPUs, available for running Spark jobs
Now, I want to run 2 streaming jobs so that they run on every node of the cluster. This requires me to set, in the dcos spark run command, the number of executors to 3 (since I have 3 nodes in the cluster) as well as the number of CPU cores to 3 (it is impossible to set 1 or 2, because as far as I can see the minimum number of CPUs per executor is 1). Of course, for each of the streaming jobs, 1 additional CPU in the cluster is occupied by the driver program.
The first strange thing I see is that instead of running 3 executors with 1 core each, Mesos launches 2 executors on 2 nodes, one with 2 CPUs and the other with 1 CPU. Nothing is launched on the 3rd node even though there were enough resources. How can I force Mesos to run 3 executors on the cluster?
Also, when I run 1 pipeline with 3 CPUs, I see that those CPUs are blocked and cannot be reused by the other streaming pipeline, even though they are not doing any work. Why can Mesos not shift available resources between applications? Isn't that the main benefit of using Mesos? Or are there simply not enough resources left to shift?
EDITED
Also, can I assign less than one CPU per executor?
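For reference, a sketch of how the intended 3 x 1-core layout could be expressed as Spark properties (the same settings can usually be passed as --conf flags through dcos spark run); the Mesos master URL is a placeholder, and whether Mesos actually spreads the executors across all three agents still depends on the offers it makes:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("mesos://zk://master.mesos:2181/mesos")  # placeholder Mesos master URL
    .appName("streaming-pipeline")                   # illustrative name
    .config("spark.cores.max", "3")        # total cores requested by this app
    .config("spark.executor.cores", "1")   # cap each executor at a single core
    .getOrCreate()
)
```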

Spark streaming on Mesos - coarse-grained

I have 2 cores on my Vagrant development machine and want to run 2 streaming applications.
If:
both of them take both available cores (I didn't specify "spark.cores.max")
they have streaming interval of 15 seconds
5 seconds is enough to perform computation
Is it expected behaviour for Mesos to shift these 2 available cores between the 2 applications? I would expect that behaviour, because "Mesos locks the resources until the job is executed", and in Spark Streaming a job is what is executed within one batch interval.
Otherwise, if resources are locked for the life of the application (and in Spark Streaming that means forever), what is the benefit of using Mesos instead of the Standalone cluster manager?
Spark Streaming locks each stream receiver to a core, plus you'll need at least one other core for the rest of the processing. So you can't run two jobs simultaneously on a 2-core machine.
Mesos gives you much better resource utilization in a cluster. Standalone is more static. It might be fine, though, for a fixed number of long-running streams, as long as you have enough resources and you follow the recommendations for capping the resources each job can grab (the default is to grab everything).
If you're really just running on a single machine, use local[*] to avoid the overhead of master and slave daemons, etc.
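A small sketch of both recommendations, assuming the classic DStream API in PySpark; the socket source, host, and port are placeholders:

```python
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

conf = (
    SparkConf()
    .setAppName("streaming-on-laptop")
    # Single dev machine: run in-process, no master/slave daemons needed.
    .setMaster("local[*]")
    # On a shared standalone/Mesos cluster you would instead cap what the app
    # grabs, e.g. .set("spark.cores.max", "2"), rather than taking everything.
)

sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, batchDuration=15)  # 15-second batches, as in the question

# A receiver-based source pins one core to the receiver; the remaining local
# threads handle the batch processing.
lines = ssc.socketTextStream("localhost", 9999)  # placeholder source
lines.count().pprint()

ssc.start()
ssc.awaitTermination()
```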

Why does my Spark only use two computers in the cluster?

I'm using Spark 1.3.1 in Standalone mode on my cluster, which has 7 machines. 2 of the machines are powerful, with 64 cores and 1024 GB of memory each, while the others have 40 cores and 256 GB of memory. One of the powerful machines is set to be the master, and the others are the slaves. Each slave machine runs 4 workers.
When I run my driver program on one of the powerful machines, I see that it takes cores only from the two powerful machines. Below is part of the web UI of my Spark master.
My configuration of this Spark driver program is as follows:
spark.scheduling.mode=FAIR
spark.default.parallelism=32
spark.cores.max=512
spark.executor.memory=256g
spark.logConf=true
Why does Spark do this? Is it a good thing or a bad thing? Thanks!
Consider lowering your executor memory from the 256 GB you have defined. With 4 workers per machine, a worker on a 256 GB node can offer at most roughly 64 GB, so it can never host an executor that requests 256 GB; only the 1024 GB machines can, which is why only those two are used.
For the future, take into consideration assigning around 75% of the memory that is actually available to a single worker.
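As an illustrative sketch only (the numbers are assumptions, not a tuned recommendation): with 4 workers per slave, an executor has to fit inside roughly a quarter of the smallest machine's RAM to be placeable anywhere in the cluster:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("spark://master-host:7077")   # placeholder standalone master URL
    .appName("use-the-whole-cluster")     # illustrative name
    # ~75% of what one worker on a 256 GB node can offer (256 GB / 4 workers ~ 64 GB),
    # so executors can be placed on the weaker machines too.
    .config("spark.executor.memory", "48g")
    .config("spark.cores.max", "512")
    .getOrCreate()
)
```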
