Run spark driver on separate machine - apache-spark

Currently I am using Spark 2.0.0 in cluster mode (Standalone cluster) with the following cluster config:
Workers: 4
Cores in use: 32 Total, 32 Used
Memory in use: 54.7 GB Total, 42.0 GB Used
I have 4 slaves (workers) and 1 master machine. There are 3 main parts to a Spark cluster: Master, Driver, and Workers (ref).
Now my problem is that the driver starts up on one of the worker nodes, which prevents me from using the worker nodes to their full capacity (RAM-wise). For example, if I run my Spark job with 2g of memory for the driver, then I am left with only ~13gb of memory in each machine for executor memory (assuming total RAM in each machine is 15gb). Now I think there can be 2 ways to fix this:
1) Run the driver on the master machine; this way I can specify the full 15gb of RAM as executor memory.
2) Specify the driver machine explicitly (one of the worker nodes), and assign memory to both the driver and the executor on this machine accordingly. For the rest of the worker nodes I can specify the maximum executor memory.
How do I achieve point 1 or 2? Or is it even possible?
Any pointers to it are appreciated.

To run the driver on the master, run spark-submit from the master and specify --deploy-mode client. See Launching Applications with spark-submit.
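For example, a client-mode submission run from the master machine might look like the following sketch, reusing the question's numbers (2g driver, 15g executors); the master host, application class, and jar path are placeholders:

# Run this on the master machine; in client mode the driver stays on the submitting machine.
${SPARK_HOME}/bin/spark-submit \
  --master spark://<master-host>:7077 \
  --deploy-mode client \
  --driver-memory 2g \
  --executor-memory 15g \
  --class com.example.MyApp \
  /path/to/my-app.jar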
It is not possible to specify which worker the driver will run on when using --deploy-mode cluster. However, you can run the driver on a worker and achieve maximum cluster utilisation if you use a cluster manager such as YARN or Mesos.

Related

emr spark master node runs out of memory in yarn cluster mode

I am new to EMR and I am running an EMR cluster with 1 master (32gb) and 5 core nodes (16gb). I launch 11 apps. The apps have to be separated in case one of them fails (all of them are streaming apps). I must mention that I also have ElasticSearch running on the cluster.
After some time the master node runs out of memory and stops responding, and some apps start to fail. In the process overview I found many smaller Hadoop processes that occupy 1-1.3GB of RAM each. I guess these are the driver processes from each app. I tried to reduce the driver memory under "spark.driver.memory" to 512MB, but it's still at 1.3GB after relaunching the apps. Is this because of YARN?
ES just allocates ca. 6.5 GB of RAM on the master node.
I had to specify the driver memory in the spark-submit command like this:
spark-submit --driver-memory 500M
because specifying it inside the Python file is too late when you run the driver in client mode: the driver JVM has already allocated its memory by then.
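As a sketch, a full submission for one of the streaming apps might look like the following; the app file name is a placeholder, and the key point is that the driver memory is passed on the command line rather than set inside the script:

# The driver's heap size must be fixed before the driver JVM starts,
# so pass it to spark-submit instead of setting it in the Python code.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --driver-memory 512M \
  my_streaming_app.py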

Spark Driver does not have any worker allotted

I am learning Spark and trying to execute a simple wordcount application. I am using
spark version 2.4.7-bin-hadoop2.7
scala 2.12
java 8
The Spark cluster, having 1 master and 2 worker nodes, is running as a standalone cluster.
The spark config is
spark.master spark://localhost:7077
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.driver.memory 500M
master start script is ${SPARK_HOME}/sbin/start-master.sh
slave start script is ${SPARK_HOME}/sbin/start-slave.sh spark://localhost:7077 -c 1 -m 50M
I want to start the driver in cluster mode
${SPARK_HOME}/bin/spark-submit --master spark://localhost:7077 --deploy-mode cluster --driver-memory 500M --driver-cores 8 --executor-memory 50M --executor-cores 4 <absolut path to the jar file having code>
Note: The completed driver/apps are the ones I had to kill
I have used the above params after reading the Spark docs and checking some blogs.
But after I submit the job, the driver does not run. It always shows the worker as none. I have read multiple blogs and checked the documentation to find out how to submit the job in cluster mode. I tweaked different params for spark-submit but it does not execute. The interesting thing to note is that when I submit in client mode, it works.
Can you help me in fixing this issue?
Take a look at the CPU and memory configuration of your workers and the driver.
Your application requires 500 MB of RAM and one CPU core to run the driver, and 50 MB and one core to run computational jobs. So you need 550 MB of RAM and two cores. These resources are provided by a worker when you run your driver in cluster mode. But each worker is allowed to use only one CPU core and 50 MB of RAM, so the resources the worker has are not enough to execute your driver.
You have to allocate to your Spark cluster as many resources as you need for your work:
Worker Cores >= Driver Cores + Executor Cores
Worker Memory >= Driver Memory + Executor Memory
Perhaps you have to increase the amount of memory for both the driver and the executor. Try running the worker with 1 GB of memory and passing 512 MB to both --driver-memory and --executor-memory.
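A minimal sketch of what that could look like, assuming the same standalone setup as in the question (the jar path is a placeholder and the exact numbers should be tuned for your machine):

# Give the worker enough cores and memory to host both the driver and an executor.
${SPARK_HOME}/sbin/start-slave.sh spark://localhost:7077 -c 2 -m 1G

# Request resources that fit inside what the worker offers.
${SPARK_HOME}/bin/spark-submit \
  --master spark://localhost:7077 \
  --deploy-mode cluster \
  --driver-memory 512M \
  --driver-cores 1 \
  --executor-memory 512M \
  --executor-cores 1 \
  /path/to/wordcount.jar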

what is the relationship between spark executor and yarn container when using spark on yarn

what is the relationship between spark executor and yarn container when using spark on yarn?
For example, when I set executor-memory = 20G and the YARN container memory = 10G, does 1 executor contain 2 containers?
A Spark executor runs within a YARN container. A YARN container is provided by the Resource Manager on demand. A YARN container can have 1 or more Spark executors.
Spark executors are the processes that run the tasks.
A Spark executor is started on a worker node (DataNode).
In your case, when you set executor-memory = 20G, you are asking for a container of size 20GB in which your executors will be running. You might have 1 or more executors using this 20GB of memory, and this is per worker node.
So, for example, if you have a cluster of 8 nodes, that is 8 * 20 GB of total memory for your job.
Below are the 3 config options available in yarn-site.xml with which you can play around and see the differences.
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
yarn.nodemanager.resource.memory-mb
When running Spark on YARN, each Spark executor runs as a YARN container. This means the number of containers will always be the same as the number of executors created by a Spark application, e.g. via the --num-executors parameter in spark-submit.
https://stackoverflow.com/a/38348175/9605741
In YARN mode, each executor runs in one container. The number of executors is the same as the number of containers allocated from YARN (except in cluster mode, which will allocate another container to run the driver).
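As an illustration of that one-executor-per-container mapping, here is a hedged sketch of a YARN submission (the class name, jar path, and numbers are placeholders):

# Each requested executor gets its own YARN container; in cluster mode YARN also
# allocates one container for the ApplicationMaster, which hosts the driver.
# Each executor container is sized at --executor-memory plus spark.executor.memoryOverhead.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 10g \
  --executor-cores 2 \
  --class com.example.WordCount \
  /path/to/app.jar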

Forcing driver to run on specific slave in spark standalone cluster running with "--deploy-mode cluster"

I am running a small spark cluster, with two EC2 instances (m4.xlarge).
So far I have been running the spark master on one node, and a single spark slave (4 cores, 16g memory) on the other, then deploying my spark (streaming) app in client deploy-mode on the master. Summary of settings is:
--executor-memory 16g
--executor-cores 4
--driver-memory 8g
--driver-cores 2
--deploy-mode client
This results in a single executor on my single slave running with 4 cores and 16Gb memory. The driver runs "outside" of the cluster on the master-node (i.e. it is not allocated its resources by the master).
Ideally I'd like to use cluster deploy-mode so that I can take advantage of the supervise option. I have started a second slave on the master node giving it 2 cores and 8g memory (smaller allocated resources so as to leave space for the master daemon).
When I run my Spark job in cluster deploy-mode (using the same settings as above but with --deploy-mode cluster), around 50% of the time I get the desired deployment: the driver runs on the slave running on the master node (which has the right resources of 2 cores & 8Gb), which leaves the original slave node free to allocate an executor of 4 cores & 16Gb. However, the other 50% of the time the master runs the driver on the non-master slave node, which means I get a driver on that node with 2 cores & 8Gb memory, which then leaves no node with sufficient resources to start an executor (which requires 4 cores & 16Gb).
Is there any way to force the Spark master to use a specific worker / slave for my driver? Given that Spark knows there are two slave nodes, one with 2 cores and the other with 4 cores, and that my driver needs 2 cores and my executor needs 4 cores, it would ideally work out the right optimal placement, but this doesn't seem to be the case.
Any ideas / suggestions gratefully received!
Thanks!
I can see that this is an old question, but let me answer it anyway; someone might find it useful.
Add the --driver-java-options="-Dspark.driver.host=<HOST>" option to the spark-submit command when submitting the application, and Spark should deploy the driver to the specified host.
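A hedged sketch of what that submission could look like with the question's resource settings (the master URL, driver host, and jar path are placeholders, and the answer's suggestion is worth verifying against your Spark version):

${SPARK_HOME}/bin/spark-submit \
  --master spark://<master-host>:7077 \
  --deploy-mode cluster \
  --supervise \
  --driver-memory 8g \
  --driver-cores 2 \
  --executor-memory 16g \
  --executor-cores 4 \
  --driver-java-options="-Dspark.driver.host=<desired-driver-host>" \
  /path/to/streaming-app.jar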

Each application submitted by client can launch how many YARN container in each Node Manager?

A container is an abstract notion in YARN. When running Spark on YARN, each Spark executor runs as a YARN container. How many YARN containers can be launched in each Node Manager, by each client-submitted application?
You can run as many executors on a single NodeManager as you want, as long as you have the resources. If you have a server with 20gb RAM and 10 cores, you can run ten 2gb, 1-core executors on that NodeManager. It wouldn't be advisable to run multiple executors on the same NodeManager, as there is an overhead cost in shuffling data between executors, even if the processes are running on the same machine.
Each executor runs in a YARN container.
Depending on how big your YARN cluster is, how your data is spread out among the worker nodes (for better data locality), how many executors you requested for your application, how much resource (cores per executor, memory per executor) you requested per executor, and whether you have enabled dynamic resource allocation, Spark decides how many executors are needed in total and how many executors to launch per worker node.
If you request resources that the YARN cluster cannot accommodate, your request will be rejected.
Following are the properties to look out for when making a spark-submit request.
--num-executors - number of total executors you need
--executor-cores - number of cores per executor. Max 5 is recommended.
--executor-memory - amount of memory per executor.
--conf spark.dynamicAllocation.enabled
--conf spark.dynamicAllocation.maxExecutors
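Put together, a hedged example of such a request (the class, jar path, and numbers are purely illustrative; note that dynamic allocation on YARN typically also needs the external shuffle service or, on Spark 3.x, shuffle tracking enabled):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --class com.example.MyJob \
  /path/to/my-job.jar
# Each executor runs in its own YARN container on some NodeManager; with dynamic
# allocation enabled, YARN may be asked for up to 20 executor containers.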
