Multiple executors for a Spark application - apache-spark

Can one worker have multiple executors for the same Spark application in standalone and YARN mode? If not, what is the reason for that (for both standalone and YARN mode)?

Yes, you can specify resources which Spark will use. For example, you can use these properties for configuration:
--num-executors 3
--driver-memory 4g
--executor-memory 2g
--executor-cores 2
If a node has enough resources, the cluster assigns more than one executor to the same node.
You can read more information about Spark resources configuration here.
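As a sketch of the standalone case (the master URL and sizes are illustrative, not from the question): if a worker offers 8 cores and you size each executor at 2 cores, the standalone scheduler can place several executors of the same application on that one worker.

```shell
# Hypothetical standalone submit: each executor asks for 2 cores / 2g, and the
# application may use up to 8 cores in total (spark.cores.max). On a worker
# with 8 free cores, up to 4 executors of this application can land on it.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=2g \
  --conf spark.cores.max=8 \
  my_app.py
```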

Related

Spark Cluster resources ensure system memory/cpu

New to the Spark world! I am trying to have a cluster that can be called upon to process applications of various sizes, using spark-submit to submit and YARN to schedule and handle resource management.
The following is how I am submitting the applications and I believe this is saying for this application I am requesting 3 executors with 4g memory and 5 cores each. Is it correct that this is per application?
spark-submit --master yarn --driver-memory 4G --executor-memory 4G --driver-cores 5 --num-executors 3 --executor-cores 5 some.py
How do I ensure that YARN leaves enough memory and cores for GC, for YARN itself (NodeManager, ...), and for the system? Using yarn top I have seen 0 cores available and 0 GB of memory available. There must be settings for this. Isn't there?
To summarize:
Are core and memory requests on a spark-submit for an individual application run?
Is there any config to ensure that YARN and the system have resources? I feel like I need to reserve memory and cores for this.
TIA
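For reference, the per-node ceilings YARN will hand out to containers are set in yarn-site.xml; anything above these ceilings is left for the OS, the NodeManager itself, and other daemons. A hedged sketch (the values are illustrative, not recommendations):

```xml
<!-- yarn-site.xml: cap what the NodeManager may allocate to containers.
     Set these below the machine's physical totals so the system and
     Hadoop daemons keep headroom. Example: a 16 GB / 8-core machine
     reserving roughly 4 GB and 2 cores. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12288</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>6</value>
</property>
```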

How we can limit the usages of VCores during Spark-submit

I am writing a Spark Structured Streaming application in which data processed with Spark needs to be written to an S3 bucket.
This is my development environment.
Hadoop 2.6.0-cdh5.16.1
Spark version 2.3.0.cloudera4
I want to limit the usage of VCores.
As of now I have used spark2-submit with the option --conf spark.cores.max=4. However, after submitting the job I observed that it occupied the maximum available VCores from the cluster (my cluster has 12 VCores).
The next job did not start because of the unavailability of VCores.
Which is the best way to limit the usage of VCores per job?
As of now I am using a workaround: I created a Resource Pool in the cluster and assigned some resources as
Min Resources: 4 Virtual Cores and 8 GB memory
and used this pool to assign a Spark job, to limit the usage of VCores.
e.g. spark2-submit --class org.apache.spark.SparkProgram.rt_app --master yarn --deploy-mode cluster --queue rt_pool_r1 /usr/local/abc/rt_app_2.11-1.0.jar
I want to limit the usage of VCores without any workaround.
I also tried with
spark2-shell --num-executors 1 --executor-cores 1 --jars /tmp/elasticsearch-hadoop-7.1.1.jar
and below is observation.
You can use the --executor-cores option; it assigns that number of cores to each of your executors.
You can refer to 1 and 2.
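Worth noting: spark.cores.max applies to standalone and Mesos coarse-grained modes, not to YARN. On YARN, a job's VCore footprint is bounded by num-executors × executor-cores, plus the driver/ApplicationMaster. A sketch reusing the question's jar and class (the sizes are illustrative, not recommendations):

```shell
# Cap the job at roughly 4 VCores: 2 executors x 2 cores (+1 for the AM/driver).
# Disabling dynamic allocation keeps Spark from requesting more executors later.
spark2-submit \
  --master yarn --deploy-mode cluster \
  --num-executors 2 \
  --executor-cores 2 \
  --conf spark.dynamicAllocation.enabled=false \
  --class org.apache.spark.SparkProgram.rt_app \
  /usr/local/abc/rt_app_2.11-1.0.jar
```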

Spark 2 on YARN is utilizing more cluster resource automatically

I am on CDH 5.7.0 and I could see a strange issue with spark 2 running on YARN cluster. Hereunder is my job submit command
spark2-submit --master yarn --deploy-mode cluster --conf "spark.executor.instances=8" --conf "spark.executor.cores=4" --conf "spark.executor.memory=8g" --conf "spark.driver.cores=4" --conf "spark.driver.memory=8g" --class com.learning.Trigger learning-1.0.jar
Even though I have limited the amount of cluster resources my job can use, I can see that the resource utilization is more than the allocated amount.
The job starts with basic memory consumption, like 8G of memory, and then eats up the whole cluster.
I do not have dynamic allocation set to true.
I am just triggering an INSERT OVERWRITE query on top of SparkSession.
Any pointers would be very helpful.
I created a Resource Pool in the cluster and assigned some resources as
Min Resources: 4 Virtual Cores and 8 GB memory
and used this pool to assign a Spark job, to limit the usage of resources (VCores and memory).
e.g. spark2-submit --class org.apache.spark.SparkProgram.rt_app --master yarn --deploy-mode cluster --queue rt_pool_r1 /usr/local/abc/rt_app_2.11-1.0.jar
If anyone has better options to achieve the same, please let us know.
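One thing worth ruling out: CDH enables dynamic allocation by default, so a job can keep requesting executors until the cluster is full even if you never set it to true yourself. A hedged sketch of pinning the job to a fixed footprint (the flags mirror the submit command in the question):

```shell
# Explicitly disable dynamic allocation and pin a fixed number of executors;
# the job should then be capped at 8 x 4 = 32 VCores and 8 x 8g of executor memory.
spark2-submit --master yarn --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.executor.instances=8 \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=8g \
  --conf spark.driver.cores=4 \
  --conf spark.driver.memory=8g \
  --class com.learning.Trigger learning-1.0.jar
```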

spark-submit executor-memory issue on Amazon EMR 5.0

I launch a Python Spark program like this:
/usr/lib/spark/bin/spark-submit \
--master yarn \
--executor-memory 2g \
--driver-memory 2g \
--num-executors 2 --executor-cores 4 \
my_spark_program.py
I get the error:
Required executor memory (2048+4096 MB) is above the max threshold
(5760 MB) of this cluster! Please check the values of
'yarn.scheduler.maximum-allocation-mb' and/or
'yarn.nodemanager.resource.memory-mb'.
This is a brand new EMR 5 cluster with one m3.2xlarge master and two m3.xlarge core nodes. Everything should be set to defaults. I am currently the only user, running only one job on this cluster.
If I lower executor-memory from 2g to 1500m, it works. This seems awfully low. An EC2 m3.xlarge server has 15GB of RAM. These are Spark worker/executor machines, they have no other purpose, so I would like to use as much of that as possible for Spark.
Can someone explain how I go from having an EC2 worker instance with 15GB to being able to assign a Spark worker only 1.5GB?
On http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/TaskConfiguration_H2.html I see that the EC2 m3.xlarge default for yarn.nodemanager.resource.memory-mb is 11520 MB, or 5760 MB with HBase installed. I'm not using HBase, but I believe it is installed on my cluster. Would removing HBase free up lots of memory? Is that yarn.nodemanager.resource.memory-mb setting the most relevant setting for available memory?
When I give spark-submit --executor-memory, is that per core or for the whole worker?
When I get the error Required executor memory (2048+4096 MB), the first value (2048) is what I pass to --executor-memory and I can change it and see the error message change accordingly. What is the second 4096MB value? How can I change that? Should I change that?
I tried to post this issue to AWS developer forum (https://forums.aws.amazon.com/forum.jspa?forumID=52) and I get the error "Your message quota has been reached. Please try again later." when I haven't even posted anything? Why would I not have permissions to post a question there?
Yes, if HBase is installed, it will use quite a bit of memory by default. You should not put it on your cluster unless you need it.
Your error would make sense if there were only 1 core node: 6G (4G for the 2 executors, 2G for the driver) would be more memory than your ResourceManager has to allocate. With 2 core nodes, you should actually be able to allocate three 2G executors: 1 on the node with the driver, 2 on the other.
In general, this sheet could help make sure you get the most out of your cluster.
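The check behind that error message can be reproduced with simple arithmetic (a sketch using the question's numbers; 4096 MB here stands in for whatever spark.yarn.executor.memoryOverhead resolves to on the cluster):

```shell
# YARN rejects the request when executor memory + overhead exceeds
# yarn.scheduler.maximum-allocation-mb (5760 MB is the HBase-installed default).
executor_mb=2048
overhead_mb=4096
max_alloc_mb=5760
required_mb=$((executor_mb + overhead_mb))
echo "required=${required_mb} MB, max=${max_alloc_mb} MB"
if [ "$required_mb" -gt "$max_alloc_mb" ]; then
  echo "rejected: required executor memory is above the max threshold"
fi
```

Lowering --executor-memory shrinks only the first term, which is why 1500m happened to squeeze under the threshold.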

Apache Spark: setting executor instances does not change the executors

I have an Apache Spark application running on a YARN cluster (spark has 3 nodes on this cluster) on cluster mode.
When the application is running the Spark-UI shows that 2 executors (each running on a different node) and the driver are running on the third node.
I want the application to use more executors so I tried adding the argument --num-executors to Spark-submit and set it to 6.
spark-submit --driver-memory 3G --num-executors 6 --class main.Application --executor-memory 11G --master yarn-cluster myJar.jar <arg1> <arg2> <arg3> ...
However, the number of executors remains 2.
On spark UI I can see that the parameter spark.executor.instances is 6, just as I intended, and somehow there are still only 2 executors.
I even tried setting this parameter from the code
sparkConf.set("spark.executor.instances", "6")
Again, I can see that the parameter was set to 6, but still there are only 2 executors.
Does anyone know why I couldn't increase the number of my executors?
yarn.nodemanager.resource.memory-mb is 12g in yarn-site.xml
Increase yarn.nodemanager.resource.memory-mb in yarn-site.xml
With 12g per node you can only launch driver(3g) and 2 executors(11g).
Node1 - driver 3g (+7% overhead)
Node2 - executor1 11g (+7% overhead)
Node3 - executor2 11g (+7% overhead)
Now you are requesting executor3 with 11g, and no node has 11g of memory available.
For the 7% overhead, refer to spark.yarn.executor.memoryOverhead and spark.yarn.driver.memoryOverhead in https://spark.apache.org/docs/1.2.0/running-on-yarn.html
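The fit can be checked with simple arithmetic (a sketch using the question's numbers and the ~7% overhead cited above; YARN's rounding to the scheduler's minimum allocation is ignored here):

```shell
# One 11g executor plus ~7% overhead nearly fills a 12g NodeManager,
# so each node can host only one such container.
node_mb=12288                              # yarn.nodemanager.resource.memory-mb = 12g
executor_mb=11264                          # --executor-memory 11G
overhead_mb=$(( executor_mb * 7 / 100 ))   # ~7% overhead, with a 384 MB floor
[ "$overhead_mb" -lt 384 ] && overhead_mb=384
container_mb=$(( executor_mb + overhead_mb ))
fit_per_node=$(( node_mb / container_mb ))
echo "container=${container_mb} MB, executors per node=${fit_per_node}"
```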
Note that yarn.nodemanager.resource.memory-mb is total memory that a single NodeManager can allocate across all containers on one node.
In your case, since yarn.nodemanager.resource.memory-mb = 12G, if you add up the memory allocated to all YARN containers on any single node, it cannot exceed 12G.
You have requested 11G (--executor-memory 11G) for each Spark executor container. Though 11G is less than 12G, this still won't work. Why?
Because you have to account for spark.yarn.executor.memoryOverhead, which is max(executorMemory * 0.10, 384) (by default, unless you override it).
So, following math must hold true:
spark.executor.memory + spark.yarn.executor.memoryOverhead <= yarn.nodemanager.resource.memory-mb
See: https://spark.apache.org/docs/latest/running-on-yarn.html for latest documentation on spark.yarn.executor.memoryOverhead
Moreover, spark.executor.instances is merely a request. The Spark ApplicationMaster for your application will ask the YARN ResourceManager for number of containers = spark.executor.instances. The request is granted by the ResourceManager on a NodeManager node based on:
Resource availability on the node. YARN scheduling has its own nuances - this is a good primer on how YARN FairScheduler works.
Whether yarn.nodemanager.resource.memory-mb threshold has not been exceeded on the node:
(number of Spark containers running on the node * (spark.executor.memory + spark.yarn.executor.memoryOverhead)) <= yarn.nodemanager.resource.memory-mb
If the request is not granted, it will be queued and granted when the above conditions are met.
To utilize the spark cluster to its full capacity you need to set values for --num-executors, --executor-cores and --executor-memory as per your cluster:
--num-executors command-line flag or the spark.executor.instances configuration property controls the number of executors requested;
--executor-cores command-line flag or the spark.executor.cores configuration property controls the number of concurrent tasks an executor can run;
--executor-memory command-line flag or the spark.executor.memory configuration property controls the heap size.
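Putting the three flags together for a cluster like the one in the question (3 nodes, 12g per NodeManager), smaller executors let YARN pack more than one per node; the sizes below are illustrative, not a recommendation:

```shell
# A 4G executor (+ ~10% overhead, about 4.5G) fits two per 12g node, and one
# more fits next to the 3G driver: up to 5 executors instead of 2 x 11G ones.
spark-submit --master yarn --deploy-mode cluster \
  --driver-memory 3G \
  --num-executors 5 \
  --executor-memory 4G \
  --executor-cores 2 \
  --class main.Application myJar.jar
```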
You only have 3 nodes in the cluster, and one will be used as the driver, so you have only 2 nodes left; how can you create 6 executors?
I think you confused --num-executors with --executor-cores.
To increase concurrency you need more cores; you want to utilize all the CPUs in your cluster.
