Spark Streaming: number of executors vs Custom Receiver - apache-spark

Why can't Spark with one worker node and four executors, each with one core, process data from a Custom Receiver?
What is the reason incoming data received via a Custom Receiver is not processed when each executor has a single core in Spark Streaming?
I am running Spark in standalone mode and receiving data through custom receivers in a Spark Streaming app. My laptop has 4 cores.
master="spark://lappi:7077"
$spark_path/bin/spark-submit --executor-cores 1 --total-executor-cores 4 \
--class "my.class.path.App" \
--master $master

You indicate that your (single) executor should have 1 core reserved for Spark, which means you use 1 of your 4 cores. The total-executor-cores parameter never becomes limiting here, since it caps the total number of cores on your cluster reserved for Spark, which, per your previous setting, is already 1.
The receiver consumes one thread for ingesting data out of the single one you have available, which means no core is left to process the data. All of this is explained in the doc:
https://spark.apache.org/docs/latest/streaming-programming-guide.html#input-dstreams-and-receivers
You want to bump that executor-cores parameter to 4.
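For illustration, here is a minimal PySpark sketch of such a setup; the socket source, app name and port are stand-ins for the custom receiver (which is not shown in the question), while the master URL and core counts come from the command above.

from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

# With a receiver in play, the executor needs more than 1 core: the receiver
# permanently occupies one thread for ingestion, so at least one more core
# must be free to process the received batches.
conf = (SparkConf()
        .setAppName("receiver-demo")              # illustrative app name
        .setMaster("spark://lappi:7077")          # master URL from the question
        .set("spark.executor.cores", "4")         # was 1: the receiver starved processing
        .set("spark.cores.max", "4"))             # same effect as --total-executor-cores 4
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 5)                     # 5-second batches (illustrative)

lines = ssc.socketTextStream("localhost", 9999)   # placeholder for the custom receiver
lines.count().pprint()

ssc.start()
ssc.awaitTermination()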

Related

Is there any relation between vCpu on Fargate resource and spark core/threads?

I'm running a Spark batch job on AWS Fargate in standalone mode. The compute environment has 8 vCPUs, and the job definition has 1 vCPU and 2048 MB of memory. In the Spark application I can specify how many cores I want to use, which I do with the code below:
from pyspark.sql import SparkSession  # import needed for the snippet to run

sparkSess = SparkSession.builder.master("local[8]")\
.appName("test app")\
.config("spark.debug.maxToStringFields", "1000")\
.config("spark.sql.sources.partitionOverwriteMode", "dynamic")\
.getOrCreate()
local[8] specifies 8 cores/threads (that's my assumption).
Initially I ran the Spark app without specifying cores; I think the job ran in a single thread and took around 10 minutes to complete, but increasing this number reduces the processing time. I started with 2 and it came down to almost 5 minutes; then I changed it to 4 and then 8, and now it takes almost 4 minutes. But I don't understand the relation between vCPUs and Spark threads. Whatever number I specify for cores, sparkContext.defaultParallelism shows me that value.
Is this the correct way? Is there any relation between this number and the vCPUs that I specify in the job definition or compute environment?
You are running in Spark Local Mode. Learning Spark has this to say about Local mode:
Spark driver: runs on a single JVM, like a laptop or single node
Spark executor: runs on the same JVM as the driver
Cluster manager: runs on the same host
Damji, Jules S., Wenig, Brooke, Das, Tathagata, Lee, Denny. Learning Spark (p. 30). O'Reilly Media. Kindle Edition.
local[N] launches with N threads. Given the above definition of Local Mode, those N threads must be shared by the Local Mode Driver, Executor and Cluster Manager.
As such, from the available vCPUs, allotting one vCPU for the Driver thread, one for the Cluster Manager, one for the OS and the remaining for Executor seems reasonable.
The optimal number of threads/vCPUs for the Executor will depend on the number of partitions your data has.
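To make the relationship concrete, here is a small sketch reusing the builder from the question (the thread count of 4 is just an example):

from pyspark.sql import SparkSession

# In local mode, the N in local[N] is the number of threads the single
# driver/executor JVM gets, and it is also what defaultParallelism reports.
spark = (SparkSession.builder
         .master("local[4]")                      # try 2, 4, 8 and compare runtimes
         .appName("test app")
         .getOrCreate())

print(spark.sparkContext.defaultParallelism)      # prints 4 for local[4]
spark.stop()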

How does the Spark driver decide which executors are to be used?

How does a Spark driver program decide which executors to use for a particular job?
Is it driven by data locality?
Are the executors chosen based on the availability of data on that data node?
If yes, what happens if all the data is present on a single data node, and that data node has just enough resources to run 2 executors, but in the spark-submit command we have used --num-executors 4, which should launch 4 executors?
Will the Spark driver copy some of the data from that data node to another data node and spawn the 2 remaining executors (out of the 4 requested)?

Need help understanding PySpark execution on YARN as master

I already have some picture of the YARN architecture as well as the Spark architecture, but when I try to understand them together (that is, what happens when a Spark job runs on YARN as master) on a Hadoop cluster, I run into some confusion. So first I will lay out my understanding with the example below, and then I will come to my questions.
Say I have a file "orderitems" stored on HDFS with some replication factor.
I am processing the data by reading this file into a Spark RDD (say, to calculate order revenue).
I have written the code and configured spark-submit as given below:
spark-submit \
--master yarn \
--conf spark.ui.port=21888 \
--num-executors 2 \
--executor-memory 512M \
src/main/python/order_revenue.py
Let's assume that I have created the RDD with 5 partitions and executed in yarn-client mode.
As per my understanding, once I submit the Spark job on YARN:
The request goes to the Application Manager, which is a component of the Resource Manager.
The Application Manager finds a Node Manager and asks it to launch a container.
This is the first container of the application, and we call it the Application Master.
The Application Master takes over responsibility for executing and monitoring the job.
Since I have submitted in client mode, the driver program runs on my edge/gateway node.
I have provided num-executors as 2 and executor memory as 512 MB.
I have also set the number of RDD partitions to 5, which (as I understand it) means it will create 5 partitions of the data read and distribute them over 5 nodes.
Now here are my confusions about this:
I have read in the user guide that partitions of an RDD are distributed to different nodes. Are these nodes the same as the data nodes of the HDFS cluster? Here there are 5 partitions; does this mean they sit on 5 data nodes?
I have set num-executors to 2, so these 5 partitions of data will use 2 executors (CPUs). My next question is: where will these 2 executors be picked from? The 5 partitions are on 5 nodes, so are these 2 executors also on some of those nodes?
The scheduler is responsible for allocating resources to the various running applications, subject to constraints of capacities, queues, etc. A container is also a Linux control group, a kernel feature that lets users allocate CPU, memory, disk I/O and bandwidth to a user process. So my final question is: are containers actually provided by the "scheduler"?
I am confused here. I have gone through architecture docs, release documents and some videos and got mixed up.
Hoping for some help here.
To answer your questions first:
1) Very simply, an executor is Spark's worker process and the driver is the manager process; they have nothing to do with Hadoop data nodes. Think of executors as processing units (say 2 here): repartition(5) divides the data into 5 chunks to be processed by these 2 executors, and on some basis these chunks are distributed amongst the 2 executors. Repartitioning the data does not create nodes.
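As a quick illustration (the HDFS path below is hypothetical), the partition count belongs to the RDD and does not change with the number of executors granted:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("order_revenue").getOrCreate()

# 5 partitions exist regardless of --num-executors 2; the 2 executors simply
# work through the 5 partition tasks, a couple at a time.
order_items = spark.sparkContext.textFile("hdfs:///data/orderitems")  # hypothetical path
order_items = order_items.repartition(5)

print(order_items.getNumPartitions())   # 5, no matter how many executors were granted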
(Diagrams in the original answer: Spark cluster architecture; Spark on YARN, client mode; Spark on YARN, cluster mode.)
For other details you can read the blog posts https://sujithjay.com/2018/07/24/Understanding-Apache-Spark-on-YARN/ and https://0x0fff.com/spark-architecture/

Why is Spark utilizing only one core per executor? How does it decide how many cores to utilize, beyond the number of partitions?

I am running Spark in an HPC environment on Slurm, using Spark standalone mode with Spark version 1.6.1. The problem is that my Slurm node is not fully used in standalone mode. I am using spark-submit in my Slurm script. There are 16 cores available on a node, and I get all 16 cores per executor as I see in the Spark UI, but only one core per executor is actually utilized. The top + 1 command on the worker node where the executor process is running shows that only one CPU out of 16 is being used. I have 255 partitions, so partitions do not seem to be the problem here.
$SPARK_HOME/bin/spark-submit \
--class se.uu.farmbio.vs.examples.DockerWithML \
--master spark://$MASTER:7077 \
--executor-memory 120G \
--driver-memory 10G \
When I change the script to
$SPARK_HOME/bin/spark-submit \
--class se.uu.farmbio.vs.examples.DockerWithML \
--master local[*] \
--executor-memory 120G \
--driver-memory 10G \
I see 0 cores allocated to the executor in the Spark UI, which is understandable because we are no longer using Spark standalone cluster mode. But now all the cores are utilized when I check the top + 1 command on the worker node, which hints that the problem is not with the application code but with how Spark standalone mode utilizes the resources.
So how does Spark decide to use one core per executor when it has 16 cores and enough partitions? What can I change so it utilizes all cores?
I am using spark-on-slurm to launch the jobs.
The Spark configurations in both cases are as follows:
--master spark://MASTER:7077
(spark.app.name,DockerWithML)
(spark.jars,file:/proj/b2015245/bin/spark-vs/vs.examples/target/vs.examples-0.0.1-jar-with-dependencies.jar)
(spark.app.id,app-20170427153813-0000)
(spark.executor.memory,120G)
(spark.executor.id,driver)
(spark.driver.memory,10G)
(spark.history.fs.logDirectory,/proj/b2015245/nobackup/eventLogging/)
(spark.externalBlockStore.folderName,spark-75831ca4-1a8b-4364-839e-b035dcf1428d)
(spark.driver.maxResultSize,2g)
(spark.executorEnv.OE_LICENSE,/scratch/10230979/SureChEMBL/oe_license.txt)
(spark.driver.port,34379)
(spark.submit.deployMode,client)
(spark.driver.host,x.x.x.124)
(spark.master,spark://m124.uppmax.uu.se:7077)
--master local[*]
(spark.app.name,DockerWithML)
(spark.app.id,local-1493296508581)
(spark.externalBlockStore.folderName,spark-4098cf14-abad-4453-89cd-3ce3603872f8)
(spark.jars,file:/proj/b2015245/bin/spark-vs/vs.examples/target/vs.examples-0.0.1-jar-with-dependencies.jar)
(spark.driver.maxResultSize,2g)
(spark.master,local[*])
(spark.executor.id,driver)
(spark.submit.deployMode,client)
(spark.driver.memory,10G)
(spark.driver.host,x.x.x.124)
(spark.history.fs.logDirectory,/proj/b2015245/nobackup/eventLogging/)
(spark.executorEnv.OE_LICENSE,/scratch/10230648/SureChEMBL/oe_license.txt)
(spark.driver.port,36008)
Thanks,
The problem is that you only have one worker node. In Spark standalone mode, one executor is launched per worker instance. To run multiple logical worker instances, and therefore multiple executors, on a single physical worker, you need to configure this property:
SPARK_WORKER_INSTANCES
By default it is set to 1. You can increase it according to the computation you are doing in your code, to make use of the resources you have.
You want your job to be distributed among executors to utilize the resources properly, but what's happening is that only one executor gets launched, which cannot utilize the number of cores and the amount of memory you have. So you are not getting the benefit of Spark's distributed computation.
You can set SPARK_WORKER_INSTANCES = 5
and allocate 2 cores per executor, so that 10 cores are utilized properly.
This is how you tune the configuration to get optimum performance.
Try setting spark.executor.cores explicitly (its default is 1 only in YARN mode; in standalone mode an executor otherwise takes all available cores on the worker).
According to the Spark documentation:
the number of cores to use on each executor. For YARN and standalone mode only. In standalone mode, setting this parameter allows an application to run multiple executors on the same worker, provided that there are enough cores on that worker. Otherwise, only one executor per application will run on each worker.
See https://spark.apache.org/docs/latest/configuration.html
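For reference, here is a sketch of the same setting applied programmatically; the 4-core split of the 16-core worker and the 30g memory figure are illustrative, not taken from the question.

from pyspark import SparkConf, SparkContext

# Cap each executor at 4 cores so the 16-core worker can host 4 executors,
# instead of one executor that reports 16 cores but keeps only one busy.
conf = (SparkConf()
        .setAppName("DockerWithML")
        .setMaster("spark://m124.uppmax.uu.se:7077")   # master from the question's config dump
        .set("spark.executor.cores", "4")              # cores per executor
        .set("spark.cores.max", "16")                  # total cores this app may claim
        .set("spark.executor.memory", "30g"))          # split the 120G across 4 executors
sc = SparkContext(conf=conf)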
In cluster mode you should use --num-executors set to cores_per_node * num_of_nodes. For example, if you have 3 nodes with 8 cores per node, you would write --num-executors 24.

How to release seemingly inactive executors from a long-running PySpark framework?

Here's my problem. Let's say I have a long-running PySpark framework. It has thousands of tasks that can all be executed in parallel. I get allocated 1,000 cores at the beginning on many different hosts. Each task needs one core. Then, when those finish, the host holds onto one core and has no active tasks. Since there are a large number of hosts, what can happen is that a larger and larger percentage of my cores are allocated to executors that don't have any active tasks. So I can have 1000 cores allocated, but only 100 active tasks. The other 900 cores are allocated to executors that have no active tasks. How can I improve this? Is there a way to shut down executors that aren't doing anything? I am currently using PySpark 1.2, so it'd be great for the functionality to be in that version, but would be happy to hear about solutions (or better solutions) in newer versions. Thanks!
If you do not specify a fixed number of executors and dynamic allocation is enabled, Spark keeps allocating executors as long as it has at least 1 task pending in its queue. You can set an upper limit on the number of executors Spark may dynamically allocate with this parameter: spark.dynamicAllocation.maxExecutors.
In other words, when launching Spark, use:
pyspark --master yarn-client --conf spark.dynamicAllocation.maxExecutors=1000
instead of
pyspark --master yarn-client --num-executors=1000
By default, Spark will release executors after 60s of non-activity.
Note: if you .persist() your DataFrames, make sure to .unpersist() them, otherwise Spark will not release the executors.
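As a rough sketch of the relevant knobs (values are illustrative; the external shuffle service is required for dynamic allocation on YARN):

from pyspark import SparkConf, SparkContext

# Let Spark grow to at most 1000 executors and give idle ones back after 60s.
conf = (SparkConf()
        .setMaster("yarn-client")                                    # as in the commands above
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.shuffle.service.enabled", "true")                # needed for dynamic allocation
        .set("spark.dynamicAllocation.maxExecutors", "1000")
        .set("spark.dynamicAllocation.executorIdleTimeout", "60s"))  # the 60s default mentioned above
sc = SparkContext(conf=conf)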
