I'm running a Spark job on a Google Dataproc cluster (3 worker nodes of type n1-highmem-4, so 4 cores and 26GB of RAM each, and a master of the same type).
I have a few questions about the information displayed in the Hadoop and Spark UIs:
1)
When I check the Hadoop UI I see this:
My question here is: my total RAM should be 84GB (3 x 26GB), so why are only 60GB displayed here? Is the other 24GB used for something else?
2)
This is the screen showing currently launched executors.
My questions are:
Why are only 10 cores used? Shouldn't we be able to launch a 6th executor with the 2 remaining cores, since we have 12 and 2 seem to be used per executor?
Why 2 cores per executor? Does it change anything if we run 12 executors with 1 core each instead?
What is the "Input" column? The total volume of data each executor received to analyze?
3)
This is a screenshot of the "Storage" panel. I see the dataframe I'm working on.
I don't understand the "Size in Memory" column. Is it the total RAM used to cache the DataFrame? It seems very low compared to the size of the raw files I load into the DataFrame (500GB+). Am I interpreting it wrong?
Thanks to anyone who reads this!
If you take a look at this answer, it mostly covers your questions 1 and 2.
To sum up, the total memory is less because some memory is reserved for the OS and system daemons, and for the Hadoop daemons themselves, e.g. the NameNode and NodeManager.
It's similar for cores: in your case there are 3 nodes, each node runs 2 executors, and each executor uses 2 cores, except on the node where the application master lives. On that node there is only one executor, and the remaining cores are given to the application master. That's why you see only 5 executors and 10 cores.
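Spelled out as arithmetic (a rough sketch of the layout on your 3 workers):
3 nodes x 4 cores = 12 cores, and at 2 cores per executor that is 6 executor slots.
1 slot goes to the YARN application master, leaving 6 - 1 = 5 executors.
5 executors x 2 cores = 10 cores in use, which matches what the UI shows.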
For your 3rd question, that number should be the memory used by the partitions cached for that RDD, which is approximately equal to the memory allocated to each executor; in your case that's ~13GB.
Note that Spark doesn't load your 500GB of data at once; it loads the data in partitions, and the number of partitions loaded concurrently depends on the number of cores you have available.
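If you want to cross-check these numbers from inside the application, here is a minimal sketch you can paste into spark-shell (it only assumes a running SparkContext called sc):
// Lists every block manager (driver and executors) with the memory it can use
// for caching, plus the default parallelism (roughly the total executor cores).
sc.getExecutorMemoryStatus.foreach { case (address, (maxMem, remaining)) =>
  println(f"$address%-30s cache memory: ${maxMem / 1024 / 1024}%6d MB (free: ${remaining / 1024 / 1024}%6d MB)")
}
println(s"Default parallelism: ${sc.defaultParallelism}")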
Related
Say I have a total of 4 cores,
What happens if I define the number of executors as 8?
Can a single core be shared between 2 executors?
Can the number of cores for an executor be a fraction?
What is the impact on performance of this kind of config?
This is what I observed in spark standalone mode:
The total number of cores on my system is 4.
If I execute the spark-shell command with spark.executor.cores=2, then 2 executors will be created with 2 cores each.
But if I configure the number of executors to be higher than the available cores, then only one executor will be created, with the maximum cores of the system.
The number of cores can never be a fractional value.
If you assign a fractional value in the configuration, you will end up with an exception.
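For example (a sketch; the master URL is a placeholder for your own standalone master), the first command below gives two 2-core executors on a 4-core machine, while the second should fail when the application starts because the value cannot be parsed as a whole number:
spark-shell --master spark://<master-host>:7077 --conf spark.executor.cores=2
spark-shell --master spark://<master-host>:7077 --conf spark.executor.cores=1.5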
Feel free to edit/correct the post if anything is wrong.
I have a cluster with 4 nodes (each with 16 cores) using Spark 1.0.1.
I have an RDD which I've repartitioned so it has 200 partitions (hoping to increase the parallelism).
When I do a transformation (such as filter) on this RDD, I can't seem to get more than 64 tasks (my total number of cores across the 4 nodes) running at any one point in time. By tasks, I mean the number of tasks that appear in the application's Spark UI. I tried explicitly setting spark.default.parallelism to 128 (hoping I would get 128 tasks running concurrently) and verified this in the application UI for the running application, but this had no effect. Perhaps this is ignored for a 'filter' and the default is the total number of cores available.
I'm fairly new with Spark so maybe I'm just missing or misunderstanding something fundamental. Any help would be appreciated.
This is correct behavior. Each "core" can execute exactly one task at a time, with each task corresponding to a partition. If your cluster only has 64 cores, you can only run at most 64 tasks at once.
You could run multiple workers per node to get more executors. That would give you more cores in the cluster. But however many cores you have, each core will run only one task at a time.
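As a rough illustration (a sketch you can run in spark-shell; the numbers mirror your setup), the stage below is split into 200 tasks, but at most 64 of them, one per core, execute at the same instant:
val rdd = sc.parallelize(1 to 1000000, 200) // 200 partitions -> 200 tasks per stage
val evens = rdd.filter(_ % 2 == 0)          // filter is narrow, so still 200 partitions
println(evens.partitions.size)              // prints 200
evens.count()                               // UI shows 200 tasks for the stage, ~64 running concurrently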
You can see more details in the following thread:
How does Spark paralellize slices to tasks/executors/workers?
TL;DR
The Spark UI shows a different number of cores and memory than what I'm asking for with spark-submit.
More details:
I'm running Spark 1.6 in standalone mode.
When I run spark-submit I pass it 1 executor instance with 1 core for the executor and also 1 core for the driver.
What I would expect to happen is that my application will run with 2 cores total.
When I check the Environment tab in the UI I see that it received the correct parameters I gave it, however it still seems like it's using a different number of cores. You can see it here:
This is the spark-defaults.conf I'm using:
spark.executor.memory 5g
spark.executor.cores 1
spark.executor.instances 1
spark.driver.cores 1
Checking the Environment tab in the Spark UI shows that these are indeed the received parameters, but the UI still shows something else.
Does anyone have any idea what might cause Spark to use a different number of cores than what I pass it? I obviously tried googling it but didn't find anything useful on the topic.
Thanks in advance
TL;DR
Use spark.cores.max instead to define the total number of cores available, and thus limit the number of executors.
In standalone mode, a greedy strategy is used and as many executors will be created as there are cores and memory available on your worker.
In your case, you specified 1 core and 5GB of memory per executor.
Spark will then calculate the following:
As there are 8 cores available, it will try to create 8 executors.
However, as there is only 30GB of memory available, it can only create 6 executors: each executor will have 5GB of memory, which adds up to 30GB.
Therefore, 6 executors will be created, and a total of 6 cores will be used with 30GB of memory.
Spark basically fulfilled what you asked for. In order to achieve what you want, you can make use of the spark.cores.max option documented here and specify the exact number of cores you need.
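For instance, keeping your spark-defaults.conf from above and just adding the cap would look roughly like this (a sketch: with spark.cores.max set to 1, only the single 1-core executor you wanted gets created, and spark.executor.instances is dropped since it is YARN-only):
spark.executor.memory 5g
spark.executor.cores 1
spark.cores.max 1
spark.driver.cores 1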
A few side notes:
spark.executor.instances is a YARN-only configuration
spark.driver.cores already defaults to 1
I am also working on making the number of executors easier to reason about in standalone mode; this might get integrated into a future release of Spark and will hopefully help you figure out exactly how many executors you are going to have, without having to calculate it on the fly.
I have 8 executors with 4 cores each, and I repartition an RDD to 32 partitions. I expect all 8 executors to play a part in the next action I call on the repartitioned data. But it seems like sometimes 3 executors participate and sometimes 4, but not more than that.
How can I ensure that the data gets divided across all executors?
rdd.repartition(32).foreachPartition { part =>
  updateMem(part)
}
The last part calls insert/update into MemSQL.
The answer below is valid only if you are using AWS EMR.
I don't think it is correct to say that you have 8 executors with 4 cores each. Here is the explanation. Say I am using an m3.2xlarge machine (EMR):
Each machine contains 30GB of memory (total) and 8 vCores.
There is no way you can use all 30GB of memory for executors, as the machine needs some memory for its own use.
You want to leave enough memory for the machine's own use (the OS and other processes) so that there won't be any system failure.
Say you want to leave 10GB of memory for the machine; then you are left with 20GB.
In 20GB of memory you can have 6 executors (3GB each, 6 x 3GB = 18GB), or 4 executors (5GB each, 4 x 5GB = 20GB), etc.
So you can decide the number of executors depending on how much memory you need per executor.
To be specific to your use case, look at the total memory available on each machine and at the Spark conf (/etc/spark/conf/spark-defaults.conf) for these two parameters, and adjust accordingly:
spark.executor.memory
spark.executor.cores
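For instance, sticking with the 20GB / 4-executor split from the example above, the relevant lines in spark-defaults.conf would look roughly like this (a sketch for an m3.2xlarge with 8 vCores; adjust to your own machines):
spark.executor.memory 5g
spark.executor.cores 2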
I have a small cluster of 3 nodes with 12 total cores and 44GB of memory. I am reading a small text file (5MB) from HDFS and running the k-means algorithm on it. I set the number of executors to 3 and partitioned my text file into three partitions. The application UI shows that only one of the executors is running all the tasks.
Here is a screenshot of the application GUI:
And here is the Jobs UI:
Can somebody help me figure out why my tasks are all running in one executor while others are idle? Thanks.
Try to repartition your file into 12 partitions. If you have 3 partitions and each node has 4 cores, it's enough to run all the tasks on 1 node. Spark roughly splits the work as 1 partition per core.
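A minimal sketch of that change (assuming the RDD built from your text file is called data):
// 3 nodes x 4 cores = 12 cores, so use at least 12 partitions
// so that tasks can be scheduled on every executor instead of just one.
val repartitioned = data.repartition(12)
println(repartitioned.partitions.size) // prints 12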