Allocated Memory and Reserved Memory of Application Master - apache-spark

I am trying to understand the 'Allocated Memory' and 'Reserved Memory' columns shown in the screenshot (screenshot from the Application Master page in the YARN UI).
The cluster settings that I have done in YARN are:
yarn_nodemanager_resource_memory-mb: 16GB
yarn_scheduler_minimum-allocation-mb: 256MB
yarn_scheduler_increment-allocation-mb: 500MB
yarn_scheduler_maximum-allocation-mb: 16GB
It is a single node cluster having 32GB of memory in total and 6 vCores.
Now, you can see from the screenshot that the 'Allocated Memory' is 8500MB. I would like to know how this is getting calculated.
One more thing - the driver memory specified is spark.driver.memory=10g

Allocated memory is determined by one of:
The memory available in your cluster
The memory available in your queue
vCores * (executor memory + executor overhead)
In your case it looks like your allocated memory is limited by the third option. I'm guessing you didn't set spark.executor.memory or spark.executor.memoryOverhead, because the memory you are getting is right in line with the default values. The Spark docs show that the default values are:
spark.executor.memory = 1g
spark.executor.memoryOverhead = 0.1 * executor memory, with a minimum of 384 MB
This gives about 1400 MB per core, which, multiplied by your 6 cores, lines up with the Allocated Memory you are seeing.
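As a rough back-of-the-envelope check of that arithmetic (a sketch assuming the defaults above and one core per executor; YARN may additionally round each container up to a multiple of yarn_scheduler_increment-allocation-mb, so the UI figure can differ slightly):

// Sketch only, not an exact reproduction of YARN's accounting
val executorHeapMb = 1024                                         // spark.executor.memory default (1g)
val overheadMb     = math.max(384, (0.1 * executorHeapMb).toInt)  // the 384 MB floor wins here
val perExecutorMb  = executorHeapMb + overheadMb                  // 1408 MB per container
val totalMb        = 6 * perExecutorMb                            // 8448 MB, roughly the 8500 MB shown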

Related

How to calculate the executor memory, number of executors, number of executor cores and driver memory to read a file of 40 GB using Spark?

Yarn Cluster Configuration:
8 Nodes
8 cores per Node
8 GB RAM per Node
1TB HardDisk per Node
Executor memory & No of Executors
Executor memory and the number of executors per node are interlinked, so you would first pick either the executor memory or the number of executors, and then, based on your choice, set the properties below to get the desired result.
In YARN these properties affect the number of containers (i.e. executors in Spark) that can be instantiated on a NodeManager, based on the spark.executor.cores and spark.executor.memory property values (along with the executor memory overhead).
For example, take a cluster with 10 nodes (RAM: 16 GB, cores: 6) configured with the following YARN properties:
yarn.scheduler.maximum-allocation-mb=10GB
yarn.nodemanager.resource.memory-mb=10GB
yarn.scheduler.maximum-allocation-vcores=4
yarn.nodemanager.resource.cpu-vcores=4
Then with the Spark properties spark.executor.cores=2, spark.executor.memory=4GB you can expect 2 executors per node, so in total you'll get 19 executors + 1 container for the driver.
If the Spark properties are spark.executor.cores=3, spark.executor.memory=8GB then you will get 9 executors (only 1 executor per node) + 1 container for the driver.
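A back-of-the-envelope check of the first case (a sketch assuming the default overhead of max(384 MB, 0.1 * executor memory); the exact container size also depends on YARN's allocation rounding):

// Sketch for spark.executor.cores=2, spark.executor.memory=4GB on the nodes above
val executorMb       = 4 * 1024
val overheadMb       = math.max(384, (0.1 * executorMb).toInt)    // ~410 MB
val containerMb      = executorMb + overheadMb                    // ~4506 MB per executor container
val perNodeByMem     = (10 * 1024) / containerMb                  // 2 containers fit in 10 GB
val perNodeByCores   = 4 / 2                                      // 2 containers fit in 4 vcores
val executorsPerNode = math.min(perNodeByMem, perNodeByCores)     // 2
val totalContainers  = executorsPerNode * 10                      // 20; one goes to the driver, leaving 19 executors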
Driver memory
spark.driver.memory: Maximum size of each Spark driver's Java heap memory.
spark.yarn.driver.memoryOverhead: Amount of extra off-heap memory that can be requested from YARN, per driver. This, together with spark.driver.memory, is the total memory that YARN can use to create a JVM for a driver process.
Spark driver memory does not impact performance directly, but it ensures that the Spark jobs run without memory constraints at the driver. Adjust the total amount of memory allocated to a Spark driver by using the following formula, assuming the value of yarn.nodemanager.resource.memory-mb is X:
12 GB when X is greater than 50 GB
4 GB when X is between 12 GB and 50 GB
1 GB when X is between 1GB and 12 GB
256 MB when X is less than 1 GB
These numbers are for the sum of spark.driver.memory and spark.yarn.driver.memoryOverhead. The overhead should be 10-15% of the total.
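As a worked sketch of that rule (assuming, for illustration, a node with yarn.nodemanager.resource.memory-mb = 16 GB, which falls into the 12 GB to 50 GB band, and taking the overhead at roughly 12.5% of the total):

// Hypothetical sizing, not taken from the question above
val totalDriverMb = 4 * 1024                        // 4 GB total when X is between 12 GB and 50 GB
val overheadMb    = (totalDriverMb * 0.125).toInt   // ~512 MB, i.e. within the 10-15% band
val driverHeapMb  = totalDriverMb - overheadMb      // ~3584 MB -> spark.driver.memory=3584m, spark.yarn.driver.memoryOverhead=512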
You can also follow this Cloudera link for tuning Spark jobs

Required executor memory is above the max threshold of this cluster

I am running Spark on an 8-node cluster with YARN as the resource manager. I have 64 GB of memory per node, and I set the executor memory to 25 GB, but I get the error:
Required executor memory (25600MB) is above the max threshold (16500 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
I then set yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb to 25600, but nothing changed.
Executor memory is only the heap portion of the memory. You still have to run a JVM plus allocate the non-heap portion of memory inside a container and have that fit in YARN. Refer to the image from How-to: Tune Your Apache Spark Jobs (Part 2) by Sandy Ryza.
If you want to use an executor memory of 25 GB, I suggest you bump up yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb to something higher, like 42 GB.
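The arithmetic behind the threshold check, as a sketch (assuming an overhead of max(384 MB, 0.1 * executor memory); some Spark versions use a 0.07 factor instead):

val executorMb  = 25 * 1024                                   // 25600 MB of heap
val overheadMb  = math.max(384, (0.1 * executorMb).toInt)     // 2560 MB
val containerMb = executorMb + overheadMb                     // 28160 MB requested from YARN
// Both yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb
// need to be at least ~28160 MB before the request can be granted; 42 GB leaves headroom.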

Understanding spark.yarn.executor.memoryOverhead

I am running a Spark application on YARN with the driver and executor memory settings --driver-memory 4G --executor-memory 2G.
When I run the application, an exception is thrown complaining that: Container killed by YARN for exceeding memory limits. 2.5 GB of 2.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
What does this 2.5 GB mean here (overhead memory, executor memory, or overhead + executor memory)? I ask because when I change the memory settings to:
--driver-memory 4G --executor-memory 4G --conf spark.yarn.executor.memoryOverhead=2048, the exception disappears.
Although I have boosted the overhead memory to 2 GB, which is still under 2.5 GB, why does it work now?
Let us understand how memory is divided among the various regions in Spark.
Executor MemoryOverhead:
spark.yarn.executor.memoryOverhead = max(384 MB, 0.07 * spark.executor.memory).
In your first case, memoryOverhead = max(384 MB, 0.07 * 2 GB) = max(384 MB, 143.36 MB). Hence, memoryOverhead = 384 MB is reserved for each executor, assuming you have assigned a single core per executor.
Execution and Storage Memory:
By default spark.memory.fraction = 0.6, which implies that execution and storage as a unified region occupy 60% of the remaining heap (the JVM heap minus the reserved memory), i.e. about 998 MB here. There is no strict boundary allocated to each region unless you enable spark.memory.useLegacyMode; otherwise they share a moving boundary.
User Memory:
The memory pool that remains after the allocation of Execution and Storage Memory; it is completely up to you to use it however you like. You can store your own data structures there to be used in RDD transformations. For example, you can rewrite a Spark aggregation using a mapPartitions transformation that maintains a hash table. This comprises the remaining 40% of the heap after the reserved memory, which in your case is ~660 MB.
If your job needs more than any of the above allocations provide, it is highly likely to end up with OOM problems.
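Putting the pieces together for the first case (--executor-memory 2G), as a sketch; the rounding up to a 2.5 GB container is my assumption about YARN's allocation increment, and the exact pool sizes depend on the heap the JVM actually reports, which is why the answer quotes ~998 MB rather than a round 1048 MB:

// Sketch only; treats the full 2048 MB -Xmx as the usable heap, which slightly overstates the pools
val heapMb      = 2 * 1024
val overheadMb  = math.max(384, (0.07 * heapMb).toInt)    // 384 MB off-heap overhead
val containerMb = heapMb + overheadMb                      // 2432 MB requested from YARN
// YARN rounds the request up to its allocation increment, which is presumably
// how the error ends up reporting a 2.5 GB (2560 MB) container limit.
val usableMb    = heapMb - 300                             // heap minus reserved memory
val unifiedMb   = (usableMb * 0.6).toInt                   // execution + storage (~1048 MB on paper)
val userMb      = usableMb - unifiedMb                     // user memory (~700 MB on paper)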

Spark Yarn Memory configuration

I have a Spark application that keeps failing with the error:
"Diagnostics: Container [pid=29328,containerID=container_e42_1512395822750_0026_02_000001] is running beyond physical memory limits. Current usage: 1.5 GB of 1.5 GB physical memory used; 2.3 GB of 3.1 GB virtual memory used. Killing container."
I saw lots of different parameters that were suggested for increasing the physical memory. Can I please have some explanation of the following parameters?
mapreduce.map.memory.mb (currently set to 0, so it should take the default of 1 GB; why do we then see 1.5 GB? Changing it also didn't affect the number)
mapreduce.reduce.memory.mb (currently set to 0, so it should take the default of 1 GB; why do we then see 1.5 GB? Changing it also didn't affect the number)
mapreduce.map.java.opts/mapreduce.reduce.java.opts (set to 80% of the previous number)
yarn.scheduler.minimum-allocation-mb=1GB (when changing this I do see an effect on the max physical memory, but with the value 1 GB it is still 1.5 GB)
yarn.app.mapreduce.am.resource.mb/spark.yarn.executor.memoryOverhead (can't find these at all in the configuration)
We are running YARN (with the yarn-cluster deployment mode) on Cloudera CDH 5.12.1.
spark.driver.memory
spark.executor.memory
These control the base amount of memory Spark will try to allocate for its driver and for each of the executors. These are probably the ones you want to increase if you are running out of memory.
// options before Spark 2.3.0
spark.yarn.driver.memoryOverhead
spark.yarn.executor.memoryOverhead
// options after Spark 2.3.0
spark.driver.memoryOverhead
spark.executor.memoryOverhead
These values specify an additional amount of memory to request when you are running Spark on YARN. They are intended to account for the extra RAM needed by the YARN container that hosts your Spark executors (or driver).
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb
When Spark asks YARN to reserve a block of RAM for an executor, it will ask for the base memory plus the overhead memory. However, YARN may not give back a container of exactly that size. These parameters control the smallest and largest container sizes that YARN will grant. If you are only using the cluster for one job, I find it easiest to set these to very small and very large values and then use the Spark memory settings mentioned above to set the true container size.
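To illustrate that rounding (a sketch with assumed values, not taken from the question; depending on the scheduler, YARN rounds a request up to a multiple of the minimum or the increment allocation):

val requestMb   = 4096 + 410      // executor memory plus overhead
val incrementMb = 512             // assumed allocation increment
val grantedMb   = math.ceil(requestMb.toDouble / incrementMb).toInt * incrementMb   // 4608 MB container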
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
mapreduce.map.java.opts/mapreduce.reduce.java.opts
I don't think these have any bearing on your Spark/Yarn job.
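A hedged example of how these settings might be wired together for a yarn-cluster submit on CDH 5.x / Spark before 2.3.0 (hence the spark.yarn.* overhead keys); the sizes and the application jar name are illustrative placeholders, not recommendations:

# Illustrative only; tune the sizes to your NodeManager capacity
spark-submit \
  --master yarn --deploy-mode cluster \
  --driver-memory 2g \
  --executor-memory 4g \
  --conf spark.yarn.driver.memoryOverhead=512 \
  --conf spark.yarn.executor.memoryOverhead=512 \
  your-application.jar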

The actual executor memory does not match the executor-memory I set

I have a Spark 2.0.1 cluster with 1 master (slaver1) and 2 workers (slaver2, slaver3); every machine has 2 GB of RAM. When I run the command
./bin/spark-shell --master spark://slaver1:7077 --executor-memory 500m
and then check the executor memory in the web UI (slaver1:4040/executors/), I find it is 110 MB.
The memory you are talking about is the Storage memory. Spark actually divides its memory (called Spark Memory) into 2 regions: the first is Storage Memory and the second is Execution Memory.
The total Spark Memory can be calculated with this formula:
(“Java Heap” - “Reserved Memory”) * spark.memory.fraction
To give you an overview: Storage Memory is the pool used both for storing Apache Spark cached data and as temporary space for “unrolling” serialized data. All “broadcast” variables are also stored there as cached blocks.
If you want to check the total memory provided, you can go to the Spark master UI at Spark-Master-Ip:8080 (the default port); near the top you will find a section called MEMORY, which is the total memory used by Spark.
Thanks
From Spark 1.6 onwards, the memory is divided as follows:
There is no hard boundary between execution and storage memory; if storage memory needs more, it borrows from execution memory, and vice versa.
Execution and storage memory is given by (ExecutorMemory - 300 MB) * spark.memory.fraction.
In your case, (500 - 300) * 0.75 = 150 MB; there will be a 3 to 5% error in the executor memory that is actually allocated.
300 MB is the reserved memory.
User memory = (ExecutorMemory - 300 MB) * (1 - spark.memory.fraction).
In your case, (500 - 300) * 0.25 = 50 MB.
Java Memory : Runtime.getRuntime().maxMemory()
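As a sketch you could paste into spark-shell to see where the ~110 MB figure plausibly comes from: with the Spark 2.x default spark.memory.fraction = 0.6 (0.75 above is the Spark 1.6 default) and a JVM that reports a max heap somewhat below the 500 MB -Xmx, (maxHeap - 300 MB) * 0.6 lands close to 110 MB.

// Approximate; exact values vary by JVM and Spark version
val maxHeapMb  = Runtime.getRuntime.maxMemory / (1024 * 1024)    // noticeably less than the 500 MB -Xmx
val reservedMb = 300                                             // fixed reserved memory
val fraction   = sc.getConf.getDouble("spark.memory.fraction", 0.6)
val unifiedMb  = ((maxHeapMb - reservedMb) * fraction).toInt     // the storage + execution pool shown in the UI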
