I have been wanting to find a good way to profile a Spark application's executors when it is run from a Jupyter notebook interface. I basically want to see details like heap memory usage, young and permanent generation usage, and so on over time for a particular executor (at least the ones that fail).
I see many solutions out there, but nothing that seems mature and easy to install and use.
Are there any good tools that let me do this easily?
When I run the Spark driver, the machine's memory grows so much that it can no longer run. Has anyone encountered this problem?
I used the MAT tool to see what the problem is, but I still have no idea.
I am new to the Spark framework and I would like to know: what are driver memory and executor memory? What is an effective way to get the maximum performance from both of them?
Spark needs a driver to coordinate the executors, so the best way to understand it is:
Driver
The driver is responsible for the main logic of your code: it gets resources from YARN, handles the allocation, and handles some small amount of data for certain types of logic. Driver memory is all about how much data you retrieve back to the driver to process. If you retrieve too much data with rdd.collect(), your driver will run out of memory. The memory for the driver is usually small; 2 GB to 4 GB is more than enough if you don't send too much data to it.
Worker
Here is where the magic happens: the workers are the ones that execute your job. The amount of memory depends on what you are going to do. If you are only going to run a map function that transforms the data with no aggregation, you usually don't need much memory. But if you are going to run big aggregations, many steps, and so on, you will usually need a good amount of memory. It is also related to the size of the files you read.
What a proper amount of memory is for each case depends entirely on how your job works. You need to understand the impact of each function and monitor your jobs to tune memory usage. Maybe 2 GB per worker is all you need, but sometimes 8 GB per worker is what you need.
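As a rough sketch of where these two settings live, this is how driver and executor memory might be set when building a PySpark session; the 4g/8g values below are hypothetical placeholders, not recommendations:

    from pyspark.sql import SparkSession

    # Hypothetical sizes; tune them based on how much data the driver
    # collects and how heavy the executor-side work is.
    spark = (
        SparkSession.builder
        .appName("memory-sizing-sketch")
        .config("spark.driver.memory", "4g")     # heap for the driver JVM
        .config("spark.executor.memory", "8g")   # heap for each executor JVM
        .getOrCreate()
    )

Note that spark.driver.memory only takes effect before the driver JVM starts, so in practice it is usually passed as --driver-memory on spark-submit (or set in spark-defaults.conf) rather than inside the application.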
I am trying to join two large Spark DataFrames and keep running into this error:
Container killed by YARN for exceeding memory limits. 24 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
This seems like a common issue among Spark users, but I can't seem to find any solid description of what spark.yarn.executor.memoryOverhead is. In some cases it sounds like a kind of memory buffer before YARN kills the container (e.g. 10 GB was requested, but YARN won't kill the container until it uses 10.2 GB). In other cases it sounds like it is used to do some kind of data accounting tasks that are completely separate from the analysis that I want to perform. My questions are:
What is spark.yarn.executor.memoryOverhead being used for?
What is the benefit of increasing this kind of memory instead of executor memory (or the number of executors)?
In general, are there steps I can take to reduce my spark.yarn.executor.memoryOverhead usage (e.g. particular data structures, limiting the width of the DataFrames, using fewer executors with more memory, etc.)?
Overhead options are nicely explained in the configuration document:
This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
This also includes user objects if you use one of the non-JVM guest languages (Python, R, etc...).
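If you do decide to boost the overhead, one hedged sketch of how that might look in PySpark is below; the 20g/4096 numbers are hypothetical, and on Spark 2.3+ the key is spelled spark.executor.memoryOverhead (spark.yarn.executor.memoryOverhead is the older name used here):

    from pyspark.sql import SparkSession

    # Hypothetical numbers for illustration; the legacy key takes its value in MB.
    spark = (
        SparkSession.builder
        .appName("overhead-sketch")
        .config("spark.executor.memory", "20g")                # JVM heap per executor
        .config("spark.yarn.executor.memoryOverhead", "4096")  # extra off-heap/native headroom added to each container
        .getOrCreate()
    )

Equivalently, it can be passed on the command line with spark-submit --conf spark.yarn.executor.memoryOverhead=4096.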
I was watching a video on Apache Spark here, where the speaker Paco Nathan says the following:
"If you have 128 GB of RAM, you are not going to throw them all at once at the jvm.That will just cause a lot of garbage collection. And so one of the things with spark is, use more sophisticated ways to leverage the memory space, do more off-heap."
I am not able to understand what he says with regard to how Spark efficiently handles this scenario.
More specifically, I completely did not understand the statement:
"If you have 128 GB of RAM, you are not going to throw them all at once at the JVM. That will just cause a lot of garbage collection."
Can someone explain what the reasoning behind these statements actually is?
"If you have 128 GB of RAM you are not going to throw them all at once
at the jvm.That will just cause of lot of garbage collection"
This means that you should not assign all of the machine's memory to the JVM heap, because memory is also needed for other things like garbage collection overhead, off-heap operations, and so on.
Spark does this by assigning fractions of the memory (that you have assigned to the Spark executors) to such operations, as shown in the image below (for Spark 1.5.0):
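As a rough, version-dependent sketch of the knobs behind that breakdown (using the unified memory manager settings that replaced the Spark 1.5 storage/shuffle fractions in Spark 1.6+; every value below is hypothetical):

    from pyspark.sql import SparkSession

    # Hypothetical values for illustration only.
    spark = (
        SparkSession.builder
        .appName("memory-fractions-sketch")
        .config("spark.executor.memory", "16g")           # on-heap size per executor
        .config("spark.memory.fraction", "0.6")           # share of heap for execution + storage
        .config("spark.memory.storageFraction", "0.5")    # portion of that share protected for cached data
        .config("spark.memory.offHeap.enabled", "true")   # allow allocations outside the JVM heap
        .config("spark.memory.offHeap.size", "4g")        # size of the off-heap pool, outside the GC's reach
        .getOrCreate()
    )

Keeping part of the budget off-heap is exactly the point of the quote: memory that the JVM garbage collector never has to scan does not add to GC pauses.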
Running a Spark job on 1 TB of data with the following configuration:
33 GB executor memory
40 executors
5 cores per executor
17 GB memoryOverhead
What are the possible reasons for this error?
Where did you get that warning from? Which particular logs? You're lucky you even get a warning :). Indeed, 17 GB seems like enough, but then you do have 1 TB of data. I've had to use more like 30 GB for less data than that.
The reason for the error is that YARN accounts for extra memory used by the container that doesn't live in the memory space of the executor. I've noticed that more tasks (partitions) means much more memory used, and shuffles are generally heavier; other than that, I've not seen any other correspondence to what I do. Something, somehow, is eating memory unnecessarily.
It seems the world is moving to Mesos; maybe it doesn't have this problem. Even better, just use Spark standalone.
More info: http://www.wdong.org/wordpress/blog/2015/01/08/spark-on-yarn-where-have-all-my-memory-gone/. This link seems kinda dead (it's a deep dive into the way YARN gobbles memory). This link may work: http://m.blog.csdn.net/article/details?id=50387104. If not, try googling "spark on yarn where have all my memory gone".
One possible issue is that your virtual memory is getting very large in proportion to your physical memory. You may want to set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml to see what happens. If the error stops, then that may be the issue.
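If you want to try that, a minimal sketch of the yarn-site.xml entry (added on the NodeManager hosts, which then need a restart) might look like this:

    <property>
      <!-- Disable the virtual-memory check so containers are not killed for
           high virtual memory usage; the physical-memory limit still applies. -->
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>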
I answered a similar question elsewhere and provided more information there: https://stackoverflow.com/a/42091255/3505110