How to configure a YARN cluster with Spark? - apache-spark

I have 2 machines, each with 32 GB of RAM and 8 cores. How can I configure YARN with Spark, and which properties should I tune for our dataset? The dataset is 8 GB. Can anyone suggest a YARN/Spark configuration for running jobs in parallel?
I'm using Hadoop 2.7.3, Spark 2.2.0, and Ubuntu 16.
Here is the YARN configuration:
yarn.scheduler.minimum-allocation-mb 2048
yarn.scheduler.maximum-allocation-mb 5120
yarn.nodemanager.resource.memory-mb 30720
yarn.scheduler.minimum-allocation-vcores 1
yarn.scheduler.maximum-allocation-vcores 6
yarn.nodemanager.resource.cpu-vcores 6
Here is the Spark configuration:
spark.master master:7077
spark.yarn.am.memory 4g
spark.yarn.am.cores 4
spark.yarn.am.memoryOverhead 412m
spark.executor.instances 3
spark.executor.cores 4
spark.executor.memory 4g
spark.yarn.executor.memoryOverhead 412m
But my question is: with 32 GB of RAM and 8 cores per machine, how many applications can I run, and is this configuration correct? Because only two applications run in parallel.
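For reference, here is a rough sizing sketch under the settings above (the rounding is what YARN's schedulers normally apply when normalizing requests; exact behavior depends on scheduler settings):
executor container request = spark.executor.memory + overhead = 4096m + 412m ≈ 4508m
  rounded up to a multiple of yarn.scheduler.minimum-allocation-mb (2048m) and
  capped at yarn.scheduler.maximum-allocation-mb -> 5120m per container
containers per node = 30720m / 5120m = 6
containers per application = spark.executor.instances + 1 (AM) = 3 + 1 = 4
so memory alone allows roughly (2 * 6) / 4 = 3 concurrent applications; the scheduler's
AM-resource limit and the vcore settings (6 per node vs. 4 per executor) can bring
the observed number down.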

Related

Spark config, org.apache.spark.shuffle.FetchFailedException Failed to connect

I installed Hadoop 3.1.0 and Spark 2.4.7 on 4 virtual machines. In total I have 32 cores and 128 GB of memory. I have been running a spark-shell test:
[hadoop@hadoop1 bin]$ hadoop fs -mkdir -p /user/hadoop/testdata
[hadoop@hadoop1 bin]$ hadoop fs -put /app/hadoop/hadoop-2.2.0/etc/hadoop/core-site.xml /user/hadoop/testdata
[hadoop@hadoop1 bin]$ spark-shell --master spark://hadoop1:7077
scala>val rdd=sc.textFile("hdfs://hadoop1:9000/user/hadoop/testdata/core-site.xml")
scala>rdd.cache()
scala>val wordcount=rdd.flatMap(_.split(" ")).map(x=>(x,1)).reduceByKey(_+_)
scala>wordcount.take(10)
scala>val wordsort=wordcount.map(x=>(x._2,x._1)).sortByKey(false).map(x=>(x._2,x._1))
scala>wordsort.take(10)
I have been playing with the following parameters
spark.core.connection.ack.wait.timeout 600s
spark.default.parallelism 4
spark.driver.memory 6g
spark.executor.memory 6g
spark.cores.max 21
spark.executor.cores 3
and bumped into org.apache.spark.shuffle.FetchFailedException: Failed to connect to 192.168.0.XXX
or WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Is there a general guide to fine-tune these and any other parameters?
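One thing worth trying for the "Initial job has not accepted any resources" warning is to pass the sizing explicitly when launching the shell, so the request visibly matches what the workers offer. This is only a sketch that reuses the values already listed above:
spark-shell --master spark://hadoop1:7077 \
  --driver-memory 6g \
  --executor-memory 6g \
  --executor-cores 3 \
  --total-executor-cores 21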

How to understand the spark-submit script when the master is YARN?

We have 6 machines in total, with the HDFS and YARN services on every node: 1 master and 6 slaves.
Spark is installed on 3 machines: 1 master and 3 workers (one node runs both master and worker).
We know that with --master spark://[host]:[port], the job runs only on those 3 nodes, in standalone mode.
When we submit a jar with spark-submit --master yarn, will it use the CPU and memory of all 6 servers, or only the 3 Spark worker machines?
And if it can run on all 6 nodes, how do the remaining 3 servers know it is a Spark job?
Spark: 2.3.1
Hadoop: 2.7.3
In YARN mode, spark-submit sends the resource request to YARN, and the containers are launched on different NodeManagers based on resource availability, so any node running a NodeManager can host Spark containers, whether or not a standalone Spark worker is installed there.
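For illustration, a minimal YARN-mode submission might look like the sketch below; the class name, jar, and executor sizes are placeholders, not values from the question:
spark-submit --master yarn --deploy-mode cluster \
  --num-executors 4 --executor-memory 4g --executor-cores 2 \
  --class com.example.MyApp my-app.jar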

Spark Thrift Server uses only 2 cores

Google Dataproc single-node cluster, VCores Total = 8. I have tried, as the spark user:
/usr/lib/spark/sbin/start-thriftserver.sh --num-executors 2 --executor-cores 4
I tried to change /usr/lib/spark/conf/spark-defaults.conf, and I tried to execute
export SPARK_WORKER_INSTANCES=6
export SPARK_WORKER_CORES=8
before start-thriftserver.sh.
No success. In the YARN UI I can see that the thrift app uses only 2 cores, with 6 cores still available.
UPDATE1:
The Environment tab in the Spark UI shows:
spark.submit.deployMode client
spark.master yarn
spark.dynamicAllocation.minExecutors 6
spark.dynamicAllocation.maxExecutors 10000
spark.executor.cores 4
spark.executor.instances 1
It depends on which YARN mode the app runs in.
With yarn-client, 1 core goes to the Application Master (the application runs on the machine where you ran start-thriftserver.sh).
With yarn-cluster, the driver runs inside the AM container, so you can tweak its cores with spark.driver.cores. The other cores are used by executors (1 core per executor by default).
Beware that --num-executors 2 --executor-cores 4 won't work, since you have 8 cores at most and 1 more would be needed for the AM container (9 in total).
You can check cores usage from Spark UI - http://sparkhistoryserverip:18080/history/application_1534847473069_0001/executors/
Options below are only for Spark standalone mode:
export SPARK_WORKER_INSTANCES=6
export SPARK_WORKER_CORES=8
Please review all configs here - Spark Configuration (latest)
In your case you can edit spark-defaults.conf and add:
spark.executor.cores 3
spark.executor.instances 2
Or use local[8] mode as you have only one node anyway.
If you want YARN to show the proper number of cores allocated to executors, change the value in capacity-scheduler.xml for:
yarn.scheduler.capacity.resource-calculator
from:
org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator
to:
org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
Otherwise, no matter how many cores you ask for per executor, YARN will show only one core per container.
Note that this config actually changes the resource-allocation behavior, not just the reporting. More details: https://hortonworks.com/blog/managing-cpu-resources-in-your-hadoop-yarn-clusters/
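For reference, the change in capacity-scheduler.xml would look roughly like this (the property and class names are the ones given above; the file location can vary by distribution):
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>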

Spark and YARN - how do they work together?

I have a conceptual doubt.
It is about YARN and Spark: I have 2 YARN (AM) nodes of 28 GB with 4 CPUs each, and a worker node of 56 GB with 8 CPUs.
I always submit my applications through YARN, using the yarn-cluster option in spark-submit.
How do I use all of the worker node's memory and CPUs if the YARN server has fewer resources?
Can the worker-node settings overlap with the YARN settings?
If my spark.executor.memory parameter is larger than the YARN memory, will it be used or not?
Will the full potential of my worker node be used?
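For context, what YARN can hand to Spark is bounded by what each NodeManager advertises in yarn-site.xml, and an executor request larger than yarn.scheduler.maximum-allocation-mb is refused rather than granted. A sketch with illustrative numbers for a 56 GB / 8-CPU worker node (the values are assumptions, not a recommendation):
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>51200</value> <!-- leave some headroom for the OS and daemons -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>7</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>51200</value>
</property>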

Running a simple Spark script on Mesos with Zookeeper

I want to run a simple Spark program, but I am blocked by some errors.
My Environment is:
CentOS:6.6
Java: 1.7.0_51
Scala: 2.10.4
Spark: spark-1.4.0-bin-hadoop2.6
Mesos: 0.22.1
Everything is installed and the nodes are up. Right now I have one Mesos master and one Mesos slave node. My Spark properties are below:
spark.app.id 20150624-185838-2885789888-5050-1291-0005
spark.app.name Spark shell
spark.driver.host 192.168.1.172
spark.driver.memory 512m
spark.driver.port 46428
spark.executor.id driver
spark.executor.memory 512m
spark.executor.uri http://192.168.1.172:8080/spark-1.4.0-bin-hadoop2.6.tgz
spark.externalBlockStore.folderName spark-91aafe3b-01a8-4c86-ac3b-999e278807c5
spark.fileserver.uri http://192.168.1.172:51240
spark.jars
spark.master mesos://zk://192.168.1.172:2181/mesos
spark.mesos.coarse true
spark.repl.class.uri http://192.168.1.172:51600
spark.scheduler.mode FIFO
Now when I start Spark, it reaches the Scala prompt (scala>).
After that I get the following error: mesos task 1 is now TASK_FAILED, Blacklisting Mesos slave value due to too many failures; is Spark installed on it?
How do I resolve this?
With only 900MB and spark.driver.memory = 512m, you will be able to launch the scheduler/REPL, but you won't have enough memory for spark.executor.memory = 512m, so any tasks will fail. Either increasing your VM memory size or reducing the driver/executor memory requirements will help you get around these memory limits.
Could you check the Mesos slave logs / task information for more output on why the task failed? You could have a look at :5050 (the Mesos master UI).
Probably unrelated question: Do you actually have zookeeper:
spark.master mesos://zk://192.168.1.172:2181/mesos
running (as you mentioned you only have one master)?
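As a sketch of the second suggestion above (reducing the memory requirements), something like the following could be tried; the 256m values are only an illustration, not a recommendation:
spark-shell --master mesos://zk://192.168.1.172:2181/mesos \
  --driver-memory 256m \
  --executor-memory 256m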
