I am running a pipeline to process my data on Spark. It seems like my executors die every now and then when they get near the Storage Memory limit. The job continues and eventually finishes, but is this normal behaviour? Is there something I should be doing to prevent this from happening? Every time this happens, the job hangs for some time until (and I am guessing here) YARN provides new executors for the job to continue.
I think this turned out to be related to a YARN bug. It doesn't happen anymore after I set the following YARN options, as suggested in section 4 of this blog post:
Best practice 5: Always set the virtual and physical memory check flag to false.
"yarn.nodemanager.vmem-check-enabled":"false",
"yarn.nodemanager.pmem-check-enabled":"false"
Related
We are facing some rather inexplicable behaviour in Spark.
Some facts:
The Spark Streaming job runs for hours without any issues.
All of a sudden, a particular section of the code starts to take longer (the data size has not increased). When we look into the execution, we notice that the delay is due to a few executors where the processing takes many times longer than on all the others (the data per task is the same, and there is no GC increase according to the Spark UI).
See the logs below. If we compare a 'normal' executor log with a 'stuck' executor log, we can see that two log lines take a minute longer than on a normal one.
A restart usually solves the issue for some hours, and then it starts occurring again.
Version: PySpark 2.4.4, Spark Streaming.
We are really lost, and can't figure out what's going on. Does anyone have any suggestions?
Log example:
'Normal':
'Stuck':
I am executing a Spark job on a Databricks cluster. I am triggering the job via an Azure Data Factory pipeline, and it executes at a 15-minute interval; after three or four successful executions it fails with the exception "java.lang.OutOfMemoryError: GC overhead limit exceeded".
Though there are many answers to questions like this, in most of those cases the jobs never run at all, whereas in my case the job fails only after several previous runs have succeeded.
My data size is less than 20 MB.
My cluster configuration is:
So my question is: what changes should I make in the cluster configuration? If the issue is coming from my code, then why does it succeed most of the time? Please advise and suggest a solution.
This is most probably related to the executor memory being a bit low. I am not sure what the current setting is, and if it is the default, what the default value is in this particular Databricks distribution. Even when the job passes, there would be a lot of GC happening because of the low memory, so it would keep failing once in a while. Under the Spark configuration, please set spark.executor.memory and also some other parameters related to the number of executors and cores per executor. With spark-submit the config would be provided as spark-submit --conf spark.executor.memory=1g.
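A minimal sketch of how those settings might be passed on a spark-submit command line (the application file name and the actual values are placeholders, not recommendations for this cluster):

spark-submit \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=2 \
  --conf spark.executor.instances=4 \
  my_job.py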
You may try increasing the memory of the driver node.
Sometimes the garbage collector does not release all the loaded objects in the driver's memory.
What you can try is to force the GC to release them. You can do that by executing the following:
# Clear cached tables/DataFrames first, then unpersist every RDD the driver
# still tracks as persistent.
spark.catalog.clearCache()
for (id, rdd) in spark.sparkContext._jsc.getPersistentRDDs().items():
    rdd.unpersist()
    print("Unpersisted {} rdd".format(id))
We are consistently observing this behavior with interactive Spark jobs in spark-shell, or when running Sparklyr in RStudio, etc.
Say I launch spark-shell in yarn-client mode and perform an action, which triggers several stages in a job and consumes x cores and y MB of memory. Once this job finishes, while the corresponding Spark session is still active, the allocated cores and memory are not released even though the job is done.
Is this normal behavior?
Until the corresponding Spark session is finished, the YARN REST endpoint ip:8088/ws/v1/cluster/apps/application_1536663543320_0040/ kept showing the same y MB of memory and x cores as allocated to the application.
I would assume YARN would dynamically allocate these unused resources to other Spark jobs that are awaiting resources.
Please clarify if I am missing something here.
You need to play with the configs around dynamic allocation (https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation):
Set spark.dynamicAllocation.executorIdleTimeout to a smaller value, say 10s. The default value of this parameter is 60s. This config tells Spark to release an executor only once it has been idle for that long.
Check spark.dynamicAllocation.initialExecutors / spark.dynamicAllocation.minExecutors. Set these to a small number, say 1 or 2. The Spark application will never scale down below this number unless the SparkSession is closed.
Once you set these two configs, your application should release the extra executors after they have been idle for 10 seconds.
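As a sketch of the above (the values follow the answer's suggestions; the shuffle-service line is an extra assumption, since classic YARN setups need the external shuffle service for dynamic allocation), launching spark-shell in yarn-client mode might look like:

spark-shell \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=10s \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.initialExecutors=1

With these options, executors should be handed back to YARN roughly 10 seconds after their last task finishes, while the shell session itself stays open.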
Yes, the resources stay allocated as long as the SparkSession is active. To handle this better, you can use dynamic allocation.
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-dynamic-allocation.html
For educational purposes mostly, I was trying to get YARN + Spark to re-attempt my Spark job on purpose (i.e., fail it and see it get rescheduled by YARN in another app attempt).
Various failures seem to cause a Spark job to be re-run; I know I have seen this numerous times. I'm having trouble simulating it though.
I have tried forcefully stopping the streaming context and calling System.exit(-1), and neither achieved the desired effect.
After lots of playing with this, I have seen that Spark + YARN do not play well together with exit codes (at least not for MapR 5.2.1's versions), but I don't think it's MapR-specific.
Sometimes a Spark program will throw an exception and die, yet it reports SUCCESS to YARN (or YARN records SUCCESS somehow), so there are no re-attempts.
Calling System.exit(-1) does not give any more stable results; sometimes the outcome is SUCCESS and sometimes FAILURE, even when the same code is repeated.
Interestingly, getting a reference to the main thread of the driver and killing it does seem to force a re-attempt, but that is very dirty and requires using a deprecated method on the Thread class.
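One related knob (an addition here, not something the answer above mentions): YARN will only re-attempt the application if the attempt limit allows it, so for experiments like this it helps to raise spark.yarn.maxAppAttempts explicitly. A sketch, with a hypothetical application file, in yarn-cluster mode (where a failed attempt re-runs the driver):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.maxAppAttempts=4 \
  my_streaming_job.py

The effective limit is still capped by yarn.resourcemanager.am.max-attempts on the cluster side.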
We have a quite complex application that runs on Spark Standalone.
In some cases the tasks from one of the workers block randomly for an infinite amount of time in the RUNNING state.
Extra info:
there aren't any errors in the logs
ran with the logger in debug and I didn't see any relevant messages (I see when the task starts, but then there is no activity for it)
the jobs work fine if I have only 1 worker
the same job may execute a second time without any issues, in a reasonable amount of time
I don't have any really big partitions that could cause delays for some of the tasks
in Spark 2.0 I moved from RDDs to Datasets and I have the same issue
in Spark 1.4 I was able to overcome the issue by turning on speculation, but in Spark 2.0 the blocked tasks are from different workers (while in 1.4 I had blocked tasks on only 1 worker), so speculation isn't fixing my issue
I have the issue in multiple environments, so I don't think it's hardware related
Has anyone experienced something similar? Any suggestions on how I could identify the issue?
Thanks a lot!
Later edit: I think I'm facing the same issue described here: Spark Indefinite Waiting with "Asked to send map output locations for shuffle", and here: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-stalling-during-shuffle-maybe-a-memory-issue-td6067.html, but neither has a working solution.
The last thing in the log repeated infinitely is: [dispatcher-event-loop-18] DEBUG org.apache.spark.scheduler.TaskSchedulerImpl - parentName: , name: TaskSet_2, runningTasks: 6
The issue was fixed for me by allocating just one core per executor. If I have executors with more than 1 core, the issue appears again. I haven't yet understood why this is happening, but anyone having a similar issue can try this.
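For anyone who wants to try the same workaround, here is a sketch of the submit command on a standalone cluster (the master URL, core totals, and application file are hypothetical):

spark-submit \
  --master spark://master-host:7077 \
  --conf spark.executor.cores=1 \
  --conf spark.cores.max=8 \
  my_app.py

With spark.executor.cores=1, each worker launches single-core executors instead of one multi-core executor, which is the configuration the answer above found to avoid the hang.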